FAQs

I have a broken refrigerator, where can I dispose of it?
Refrigerators, air conditioners, and similar cooling devices must be taken to the landfill at 3212 Dover Road. Hazardous gases (Freon, etc.) will be removed free of charge and the device will be recycled.

Can tree limbs be taken to my convenience station?
Our convenience stations don't have the capability to accept tree limbs and brush. Limbs and cuttings may be taken to the transfer station on Highway Drive provided they do not exceed 4 feet in length and 3 inches in diameter. If possible, bring limbs, brush, etc. to the landfill to be ground into mulch.

Do you take automotive batteries and used motor oil?
Automotive wet cell batteries may be taken to the appropriate container/area at a convenience station or the transfer station. Please see the attendant for disposal instructions. Used motor oil can be taken to any convenience center.
CVE-2020-2125 (debian_package_builder). Latest High Severity CVEs. Jenkins Debian Package Builder Plugin 1.6.11 and earlier stores a GPG passphrase unencrypted in its global configuration file on the Jenkins master, where it can be viewed by users with access to the master file system. Source: govanguard, February 12, 2020.
TITLE: Test for an Inverse Function QUESTION [1 upvotes]: When I teach students about inverse functions, every textbook says that you have to test both $f(g(x))$ and $g(f(x))$ to see if they equal $x$. However, I've never seen a case where one of those equals $x$ and the other doesn't, which makes me question whether you really have to look at both cases. So my question is, are there two functions, $f$ and $g$, such that $f(g(x)) = x$ but $g(f(x)) \ne x$? REPLY [0 votes]: For a function $g$, it has a left inverse if you can find $f_1$ such that $f_1(g(x))=x$ and a right inverse if you can find a function $f_2$ such that $g(f_2(x))=x$; $g$ has an inverse $f$ if both $f_1$ and $f_2$ exist and $f=f_1=f_2$. Others above have given examples of functions which only have a one-sided inverse, but let me give a perhaps more familiar one. Consider as $g$ the differentiation map $\frac{d}{dx}$ on polynomials (i.e. expressions of the form $p(x)=a_nx^n+a_{n-1}x^{n-1}+\cdots+a_0$ where $a_i$ are rational numbers). As a possible inverse, consider $f$ to be the indefinite integral map $\int$. Then $g(f(p(x)))= \frac{d}{dx}(\int p(x) dx)=p(x)$ (where $p(x)$ is an element of the polynomial space we are acting on), so $f$ is a right inverse to $g$. However $f(g(p(x)))=\int(\frac{d}{dx} p(x)) dx$ is not always $p(x)$ due to the constant of integration; i.e. both $p_1(x)=x$ and $p_2(x)=x+1$ differentiate to the same element $1$, but when integrated we don't know which one it came from, due to the constant of integration. (Caveat: here the integration isn't well-defined, as I've not given an explicit map on polynomials, but this could be done by setting the constant of integration to always be $0$.)
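A quick numerical illustration of the asymmetry (a sketch of my own, not from any textbook; the function pair is just one convenient choice): take $g(x)=e^x$ and $f(y)=\ln|y|$. Then $f(g(x))=x$ for every real $x$, but $g(f(y))=|y|$, which differs from $y$ whenever $y<0$.

```python
import math

def g(x):
    # g(x) = e^x is injective on R but not surjective (its range is (0, inf))
    return math.exp(x)

def f(y):
    # f(y) = ln|y| is a left inverse of g: f(g(x)) = ln(e^x) = x
    return math.log(abs(y))

# left inverse: f(g(x)) == x for every real x
for x in (-2.0, 0.5, 3.0):
    assert math.isclose(f(g(x)), x)

# ...but not a right inverse: g(f(y)) = |y|, so negative inputs are not recovered
assert math.isclose(g(f(-2.0)), 2.0)   # |-2| = 2, not -2
```

So checking only one composition can mislead: $f\circ g$ is the identity on all of $\mathbb R$, while $g\circ f$ collapses $y$ and $-y$.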
There are many months of the year that are dedicated to some sort of cause or awareness, some of which are great, some of which have you thinking, "okay, we need a specific month to think about this?" Some, like September and its Suicide Prevention Awareness Month, make me stop and think. We have this month to think about it, but so what? I mean, it's great that we're thinking about it, but what are we actually DOING about it? I think we are just now starting to really pay attention to this; things are just starting to move, and people are admitting we have a problem. Resources are being put out there, by Veterans and civilians alike, because I think the realization that the numbers are becoming staggering is coming into play. Too many military members are committing suicide because it's their only way out, and their only way to deal with things. And what about all the people who are left behind, and the trauma they have now experienced because a friend, a family member, or just a fellow soldier in their unit ended their life? The statistics are there; we see them in every article. The 2011 DoDSER (DoD Suicide Event Report) cited the Armed Forces Medical Examiner System and its report that 301 service members died by suicide, and a staggering 915 attempted it. Programs are created, talks are given, and last year the Army even had an entire 'Stand Down' day, where no one (except essential personnel) worked. Instead they all attended training. The numbers keep on rising despite all this, and the military seems to be baffled. But what is the answer? I was talking to a few friends, originally wanting to get their thoughts on where the disconnect might actually be. Is it fear, is it lack of training and resources for family members, is it lack of the right resources, or is it something else? As I began to think about it, in the five seconds it took me to ask them all, I wondered something. Maybe it's the fact that there isn't one right answer.
Maybe that's how we operate, or how the military operates. There is a solution to the problem, you apply it, it's fixed. One of my friends was thinking the same thing I was. There isn't one right answer; "there's as many reasons for suicide as there are people," one of my friends said. You know, I think she's right. Maybe it's not the solution that we should be looking for. Maybe we should be looking at how we get the solutions (plural) to the people they need to go to. In a conversation with Major Ed Pulido (WarriorsForFreedom.org) and Dr. Dan Reidenberg (SAVE.org), the focus wasn't on a specific program, but on making sure those that needed help got the help they needed. In 2004 Major Pulido was injured by an IED. He realized that despite what he wanted for himself, to go on with his military career despite his physical and emotional injuries, there were other things for him to do out there. He hopes that others won't be scared to finally get that help either. Fear of losing your career isn't worth the long-term loss to yourself, your friends and your family. There seemed to be a multi-part solution among all of those involved in the conversation last week: peer-to-peer mentors, education in the workplace (for National Guard and Reserve), raising awareness among general medical providers and law enforcement, and giving tools to family members. One of the most notable of these was the Carson J. Spencer Foundation, a unique organization that has set its sights on changing the face of mental illness and suicide. Their program, Working Minds, provides tools and training to enable businesses to battle suicide head on. Please read more about the Carson J. Spencer Foundation, a Colorado-based organization, on their website. It's organizations like these that are realizing, like I did, that there is no one way to "solve" the issue of suicide. There's no magic pill that everyone can take so that suddenly everything will be okay.
How is your community, base and/or state tackling the issue of mental health and wellness? How would you help those dealing with thoughts of suicide? You can find a growing list of resources by clicking on the Military Mental Health tab at the top of my blog. If you see that I'm missing a resource, please let me know! Cammo Style Love
TITLE: Is the image a universal object? QUESTION [2 upvotes]: Given a morphism $f:X\to Y$ in a category $\mathcal{C}$, one can construct the image as a factorisation $f=(e:I\hookrightarrow Y)\circ(g:X\to I)$ that is universal (initial) among all such factorisations. This does seem like a universal property. But I can't figure out how this can actually be constructed as an initial object in a comma category, because there are morphisms both from and to the object. REPLY [3 votes]: Let $\newcommand\Sub{\operatorname{Sub}}\Sub(Y)$ be the preorder of subobjects of $Y$ in $\mathcal C$. Then the "image functor" can be seen as a left adjoint to the forgetful functor $\mathcal C/Y\leftarrow\Sub(Y)$. In particular, if $f=e\circ g$ is an image factorization of $f$, then $g:(X,f)\to(I,e)$ is a universal arrow with respect to the forgetful functor $\mathcal C/Y\leftarrow\Sub(Y)$. If $\mathcal C/Y\leftarrow\Sub(Y):\Gamma$ denotes this forgetful functor, then $((I,e),g)$ is an initial object in the comma category $(X,f)\downarrow\Gamma$.
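It can help to compute an image factorisation concretely in the category of finite sets, where the image object $I$ is literally $f(X)$ (a small sketch of my own; the dict encoding of morphisms is just a convenient convention, not standard):

```python
def image_factorization(f, X):
    """Factor f : X -> Y as e . g, with g : X -> I surjective and
    e : I -> Y injective, where I = f(X) is the image subobject."""
    I = sorted({f(x) for x in X})   # the image object I
    g = {x: f(x) for x in X}        # corestriction of f, surjective onto I
    e = {i: i for i in I}           # inclusion I -> Y, injective
    return g, e, I

X = [0, 1, 2, 3]
f = lambda x: x % 2                 # a non-injective map into Y

g, e, I = image_factorization(f, X)

assert all(e[g[x]] == f(x) for x in X)     # the factorisation recovers f
assert set(g.values()) == set(I)           # g is surjective
assert len(set(e.values())) == len(I)      # e is injective
```

Initiality then says that any other factorisation $f=e'\circ g'$ through a subobject $I'\hookrightarrow Y$ admits a comparison $I\to I'$; in $\mathbf{Set}$ this is just the observation that $f(X)$ is contained in every subset of $Y$ through which $f$ factors.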
\begin{document} \title[Hardy inequalities for semigroups]{Hardy inequalities and non-explosion results for semigroups} \author[K{.} Bogdan]{Krzysztof Bogdan}\address{Department of Mathematics, Wroc{\l}aw University of Technology, Wybrze{\.z}e Wyspia{\'n}skiego 27, 50-370 Wroc{\l}aw, Poland} \email{bogdan@pwr.edu.pl} \author[B{.} Dyda]{Bart{\l}omiej Dyda} \address{Department of Mathematics, Wroc{\l}aw University of Technology, Wybrze{\.z}e Wyspia{\'n}skiego 27, 50-370 Wroc{\l}aw, Poland} \email{Bartlomiej.Dyda@pwr.edu.pl} \author[P{.} Kim]{Panki Kim} \address{Department of Mathematics, Seoul National University, Building 27, 1 Gwanak-ro, Gwanak-gu Seoul 151-747, Republic of Korea} \email{pkim@snu.ac.kr} \subjclass[msc2010]{Primary: {31C25, 35B25}; Secondary: {31C05, 35B44}} \keywords{optimal Hardy equality, transition density, Schr\"odinger perturbation} \thanks{Krzysztof Bogdan and Bart\l{}omiej Dyda were partially supported by NCN grant 2012/07/B/ST1/03356.} \begin{abstract} We prove non-explosion results for Schr\"odinger perturbations of symmetric transition densities and Hardy inequalities for their quadratic forms by using explicit supermedian functions of their semigroups. \end{abstract} \maketitle \section{Introduction}\label{sec:int} Hardy-type inequalities are important in harmonic analysis, potential theory, functional analysis, partial differential equations and probability. In PDEs they are used to obtain a priori estimates, existence and regularity results \cite{MR2777530} and to study qualitative properties and asymptotic behaviour of solutions \cite{MR1760280}. In functional and harmonic analysis they yield embedding theorems and interpolation theorems, e.g. Gagliardo--Nirenberg interpolation inequalities \cite{MR2852869}. The connection of Hardy-type inequalities to the theory of superharmonic functions in analytic and probabilistic potential theory was studied, e.g., in \cite{MR856511}, \cite{MR1786080}, \cite{MR2663757}, \cite{Dyda12}. 
A general rule stemming from the work of P.~Fitzsimmons \cite{MR1786080} may be summarized as follows: if $\mathcal{L}$ is the generator of a symmetric Dirichlet form $\EEE$ and $h$ is superharmonic, i.e. $h\ge 0$ and $\mathcal{L}h\le 0$, then $\EEE(u,u)\ge \int u^2 (-\mathcal{L}h/h)$. The present paper gives an analogous connection in the setting of symmetric transition densities. When these are integrated against increasing weights in time and arbitrary weights in space, we obtain suitable (supermedian) functions $h$. The resulting analogues $q$ of the Fitzsimmons' ratio $-\mathcal{L}h/h$ yield explicit Hardy inequalities which in many cases are optimal. The approach is very general and the resulting Hardy inequality is automatically valid on the whole of the ambient $L^2$ space. We also prove non-explosion results for Schr\"odinger perturbations of the original transition densities by the ratio $q$, namely we verify that $h$ is supermedian, in particular integrable with respect to the perturbation. For instance we recover the famous non-explosion result of Baras and Goldstein for $\Delta+(d/2-1)^2|x|^{-2}$, cf. \cite{MR742415} and \cite{MR3020137}. The results are illustrated by applications to transition densities with certain scaling properties. The structure of the paper is as follows. In Theorem~\ref{thm:perturbation} of Section~\ref{sec:Sc} we prove the non-explosion result for Schr\"odinger perturbations. In Theorem~\ref{thm:Hardy} of Section~\ref{sec:Hi} we prove the Hardy inequality. In fact, under mild additional assumptions we have a Hardy {\it equality} with an explicit remainder term. Sections~\ref{sec:A}, \ref{s:asjm} and \ref{sec:wls} present applications. In Section~\ref{sec:A} we recover the classical Hardy equalities for the quadratic forms of the Laplacian and fractional Laplacian. 
For completeness we also recover the best constants and the corresponding remainder terms, as given by Filippas and Tertikas \cite{MR1918494}, Frank, Lieb and Seiringer \cite{MR2425175} and Frank and Seiringer \cite{MR2469027}. In Section~\ref{s:asjm} we consider transition densities with weak global scaling in the setting of metric spaces. These include a class of transition densities on fractal sets (Theorem~\ref{t:hardyphi1} and Corollary~\ref{c:hardyphi1}) and transition densities of many unimodal L\'evy processes on $\Rd$ (Corollary~\ref{cor:hardyphi}). We prove Hardy inequalities for their quadratic forms. In Section~\ref{sec:wls} we focus on transition densities with weak local scaling on $\Rd$. The corresponding Hardy inequality is stated in Theorem~\ref{cor:hardyphi2}. The calculations in Sections~\ref{sec:A}, \ref{s:asjm} and \ref{sec:wls}, which produce explicit weights in Hardy inequalities, also give non-explosion results for specific Schr\"odinger perturbations of the corresponding transition densities by means of Theorem~\ref{thm:perturbation}. These are stated in Corollaries~\ref{cor:aSp} and \ref{cor:aSpL} and Remarks~\ref{rem:ef}, \ref{rem:efL} and \ref{rem:efLl}. Currently our methods are confined to the (bilinear) $L^2$ setting. We refer to \cite{MR2469027}, \cite{MR3117146} for other frameworks. Regarding further development, it is of interest to find relevant applications with less space homogeneity and scaling than required in the examples presented below, extend the class of considered time weights, prove explosion results for ``supercritical'' Schr\"odinger perturbations, and understand more completely when equality holds in our Hardy inequalities. Below we use ``$:=$" to indicate definitions, e.g. $a \wedge b := \min \{ a, b\}$ and $a \vee b := \max \{ a, b\}$. For two nonnegative functions $f$ and $g$ we write $f\approx g$ if there is a positive number $c$, called a \emph{constant}, such that $c^{-1}\, g \leq f \leq c\, g$. 
Such comparisons are usually called \emph{sharp estimates}. We write $c=c(a,b,\ldots,z)$ to claim that $c$ may be so chosen to depend only on $a,b,\ldots,z$. For every function $f$, let $f_+:=f \vee 0$. For any open subset $D$ of the $d$-dimensional Euclidean space $\R^d$, we denote by $C^\infty_c (D)$ the space of smooth functions with compact supports in $D$, and by $C_c (D)$ the space of continuous functions with compact supports in $D$. In statements and proofs, $c_i$ denote constants whose exact values are unimportant. These are given anew in each statement and each proof. \begin{ack}We thank William Beckner for comments on the Hardy equality \eqref{eq:W12}. We thank Tomasz Byczkowski, Tomasz Grzywny, Tomasz Jakubowski, Kamil Kaleta, Agnieszka Ka\l{}amajska and Dominika Pilarczyk for comments, suggestions and encouragement. We also thank Rupert L.~Frank and Georgios Psaradakis for remarks on the literature related to Section~\ref{sec:A}. \end{ack} \section{Non-explosion for Schr\"odinger perturbations}\label{sec:Sc} Let $(X, \mathcal{M}, m(dx))$ be a $\sigma$-finite measure space. Let $\mathcal{B}_{(0,\infty)}$ be the Borel $\sigma$-field on the halfline $(0,\infty)$. Let $p:(0,\infty) \times X\times X\to [0,\infty]$ be $\mathcal{B}_{(0,\infty)}\times \mathcal{M}\times \mathcal{M}$-measurable and symmetric: \begin{equation}\label{eq:psymmetric} p_t(x,y)=p_t(y,x)\,,\quad x,y\in X\,,\quad t>0\,, \end{equation} and let $p$ satisfy the Chapman--Kolmogorov equations: \begin{equation} \label{eq:ck} \int_X p_s(x,y)p_t(y,z) m(dy)=p_{s+t}(x,z),\qquad x,z\in X,\; s,t> 0, \end{equation} and assume that for every $t>0, x \in X$, $p_t(x,y)m(dy)$ is a ($\sigma$-finite) integral kernel. Let $f:\R \to [0,\infty)$ be non-decreasing, and let $f=0$ on $(-\infty, 0]$. We have $f'\geq 0$ a.e., and \begin{equation} \label{eq:fp} f(a)+\int_a^b f'(s)ds \leq f(b), \quad -\infty < a\leq b < \infty. 
\end{equation} {Further,} let $\mu$ be a~nonnegative $\sigma$-finite measure on $(X,\mathcal{M})$. We put \begin{align} p_s\mu(x) &= \int_X p_s(x, y) \,\mu(dy), \label{eq:psmu}\\ h(x) &= \int_0^\infty f(s) p_s\mu(x) \,ds. \label{eq:h} \end{align} We denote, as usual, $p_th(x)=\int_X p_t(x,y)h(y){m(dy)}$. By Fubini-Tonelli and Chapman-Kolmogorov, for $t>0$ and $x\in X$ we have \begin{align} \label{eq:kb5.5} p_th(x) &=\int_t^\infty f(s-t) p_s\mu(x) \,ds\\ &\leq\int_t^\infty f(s) p_s\mu(x) \,ds \nonumber \\ &\leq h(x).\label{eq:ph} \end{align} In this sense, $h$ is supermedian. We define $q:X\to [0,\infty]$ as follows: $q(x)=0$ if $h(x)=0$ or $\infty$, else \begin{equation}\label{eq:qmu} q(x) = \frac{1}{h(x)} \int_0^\infty f'(s) p_s\mu(x) \,ds. \end{equation} For all $x\in X$ we thus have \begin{equation}\label{eq:kb8.5} q(x)h(x)\leq \int_0^\infty f'(s) p_s\mu(x) \,ds. \end{equation} We define the Schr\"odinger perturbation of $p$ by $q$ \cite{MR2457489}: \begin{equation} \label{eq:pti} \tilde{p}=\sum_{n=0}^\infty p^{(n)}, \end{equation} where $p^{(0)}_t(x,y) = p_t(x,y)$, and \begin{equation} \label{eq:pn} p^{(n)}_t(x,y) = \int_0^t\int_{X} p_s(x,z)\,q(z)p^{(n-1)}_{t-s}(z,y)\,m(dz)\,ds,\quad n\geq 1. \end{equation} It is well-known that $\tilde{p}$ is a transition density \cite{MR2457489}. \begin{thm}\label{thm:perturbation} We have $\int_X \tilde{p}_t(x,y)h(y){m}(dy)\leq h(x)$. \end{thm} \begin{proof} For $n=0,1,\ldots$ and $t>0$, $x\in X$, we consider \begin{equation*}\label{eq:phn} p^{(n)}_th(x):=\int_X p^{(n)}_t(x,y)h(y)\,{m(dy)}, \end{equation*} and we claim that \begin{equation}\label{eq:fkb} \sum_{k=0}^n p^{(k)}_t h(x) \leq h(x). \end{equation} By (\ref{eq:ph}) this holds for $n=0$. 
By (\ref{eq:pn}), Fubini-Tonelli, induction and (\ref{eq:qmu}), \begin{align} \sum_{k=1}^{n+1}p^{(k)}_t h(x) &=\int_X \int_0^t \int_X p_s(x,z)\,q(z)\sum_{k=0}^np^{(k)}_{t-s}(z,y)h(y)\,{m(dy)}\,ds\,{m(dz)}\nonumber\\ &\leq \int_0^t \int_X p_s(x,z)\,q(z)h(z)\,{m(dz)}\,ds \nonumber\\ &=\int_0^t \int_X p_s(x,z)\int_0^\infty f'(u) \int_X p_u(z,w)\,\mu(dw)\,du\,{m(dz)}\,ds.\nonumber \\ &= \int_0^t \int_0^\infty f'(u) p_{s+u}\mu(x) \,du\,ds,\nonumber \end{align} where in the last passage we used \eqref{eq:ck} and \eqref{eq:psmu}. By \eqref{eq:fp}, \begin{align*} \sum_{k=1}^{n+1}p^{(k)}_t h(x) &\le \int_0^\infty \int_0^{u\wedge t} f'(u-s) \,ds\, p_u\mu(x) \,du\\ &\leq \int_0^\infty [f(u) - f(u - u\wedge t)] \,p_u\mu(x) \,du\\ &= \int_0^\infty [f(u) - f(u-t)] \,p_u\mu(x) \,du, \end{align*} because $f(s)=0$ if $s\le 0$. By this and \eqref{eq:kb5.5} we obtain \begin{align*} \sum_{k=0}^{n+1}p^{(k)}_t h(x) &\leq \int_t^\infty f(u-t) p_u\mu(x) \,du \\ &\quad + \int_0^\infty [f(u) - f(u-t)]\, p_u\mu(x) \,du\\ &= \int_0^\infty f(u) p_u\mu(x)\,du=h(x). \end{align*} The claim \eqref{eq:fkb} is proved. The theorem follows by letting $n\to \infty$. \end{proof} \begin{rem}\label{rem:sl} Theorem~\ref{thm:perturbation} asserts that $h$ is supermedian for $\tilde p$. This is much more than \eqref{eq:ph}, but \eqref{eq:ph} may also be useful in applications \cite[Lemma~5.2]{2014TJ}. \end{rem} We shall see in Section~\ref{sec:A} that the above construction gives integral finiteness (non-explosion) results for {specific} Schr\"odinger perturbations with rather singular $q$, cf. Corollaries~\ref{cor:aSp} and \ref{cor:aSpL}. In the next section $q$ will serve as an admissible weight in a Hardy inequality. \section{ Hardy inequality}\label{sec:Hi} Throughout this section we let $p$, $f$, $\mu$, $h$ and $q$ be as defined in Section~\ref{sec:Sc}. Additionally we shall assume that $p$ is Markovian, namely $\int_X p_t(x,y){m(dy)}\le 1$ for all $x\in X$. 
In short, $p$ is a subprobability transition density. By the Holmgren criterion \cite[Theorem 3, p. 176]{MR1892228}, we then have $p_tu\in L^2(m)$ for each $u\in L^2(m)$, {in fact $\int_X [p_t u(x)]^2m(dx)\le \int_X u(x)^2 m(dx)$}. Here $L^2(m)$ is the collection of all the real-valued square-integrable $\mathcal{M}$-measurable functions on $X$. As usual, we identify $u,v\in L^2(m)$ if $u=v$ $m$-a.e. on $X$. The space (of equivalence classes) $L^2(m)$ is equipped with the scalar product $\langle u, v\rangle=\int_X u(x)v(x)m(dx)$. Since the semigroup of operators $(p_t,t>0)$ is self-adjoint and weakly measurable, we have $$ \langle p_t u,u\rangle=\int_{[0,\infty)}e^{-\lambda t}d\langle P_\lambda u,u\rangle, $$ where $P_\lambda$ is the spectral decomposition of the operators, see \cite[Section~22.3]{MR0423094}. For $u \in L^2(m)$ and $t>0$ we let $$ \EEE^{(t)}(u, u)= \frac{1}{t} \langle u - p_t u, u \rangle.$$ {By the spectral decomposition,} $t\mapsto \EEE^{(t)}(u, u)$ is nonnegative and nonincreasing \cite[Lemma 1.3.4]{MR2778606}, which allows us to define the quadratic form of $p$, \begin{align} \label{def:DEEE} \EEE(u,u) &= \lim_{t\to 0} \EEE^{(t)}(u, u),\quad u\in L^2(m). \end{align} The domain of the form is defined by the condition $\EEE(u,u)<\infty$ \cite{MR2778606}. The following is a Hardy-type inequality with a remainder. \begin{thm}\label{thm:Hardy} If $u\in L^2(m)$ and $u=0$ on $\{x \in X: h(x)=0 \mbox{ or } \infty\}$, then \begin{align} &\EEE(u, u) \geq \int_{{X}} u(x)^2 q(x) \,m(dx) \label{eq:Hardy} \\ &+ \liminf_{t\to 0} \int_X \int_X \frac{p_t(x,y)}{2t} \left(\frac{u(x)}{h(x)}-\frac{u(y)}{h(y)} \right)^2 h(y)h(x) m(dy)m(dx). 
\nonumber \end{align} If $f(t)=t_+^\beta$ with $\beta\geq 0$ in \eqref{eq:h} or, more generally, if $f$ is absolutely continuous and there are $\delta>0$ and $c<\infty$ such that \begin{equation}\label{eqn:ff} [f(s) - f(s-t)]/t\le cf'(s) \qquad \mbox{for all $s>0$ and $0<t<\delta$}, \end{equation} then for every $u\in L^2(m)$ \begin{align} &\EEE(u, u) = \int u(x)^2 q(x) \,m(dx) \label{eq:Hardyv} \\ &+ \lim_{t\to 0} \int_X \int_X \frac{p_t(x,y)}{2t} \left(\frac{u(x)}{h(x)}-\frac{u(y)}{h(y)} \right)^2 h(y)h(x) m(dy)m(dx), \nonumber \end{align} \end{thm} \begin{proof} Let $v=u/h$, with the convention that $v(x)=0$ if $h(x)=0$ or $\infty$. Let $t>0$. We note that $|vh|\le |u|$, thus $vh\in L^2(m)$ and by \eqref{eq:ph}, $v p_th \in L^2(m)$. We then have \begin{align}\label{eq:rEtnJtIt} \EEE^{(t)}(vh, vh) &= \langle v \frac{h-p_th}{t}, vh \rangle + \langle \frac{vp_th-p_t(vh)}{t} , vh \rangle =: I_t + J_t. \end{align} By the definition of $J_t$ and the symmetry \eqref{eq:psymmetric} of $p_t$, \begin{align*} J_t &= \frac{1}{t} \int_X \int_X p_t(x,y) [v(x)-v(y)] h(y)\, m(dy)\, v(x) h(x) \,m(dx) \\ &= \int_X \int_X \frac{p_t(x,y)}{2t} [v(x)-v(y)]^2 h(x)h(y) \,m(dx)\,m(dy) \geq 0. \end{align*} To deal with $I_t$, we let $x\in X$, assume that $h(x)<\infty$, and consider \begin{align*} (h-p_th)(x) &= \int_0^\infty f(s) p_s\mu(x) \,ds - \int_0^\infty f(s) p_{s+t}\mu(x) \,ds \\ &=\int_0^\infty [f(s) - f(s-t)] p_s\mu(x) \,ds. \end{align*} Thus, \begin{align*} I_t&=\int_X v^2(x)h(x) \int_0^\infty \frac1t[f(s) - f(s-t)]\ p_s\mu(x) \,ds\, m(dx). \end{align*} By \eqref{def:DEEE} and Fatou's lemma, \begin{align} &\EEE(vh, vh) \geq \int_X \int_0^\infty f'(s) p_s\mu(x) \,ds\, v^2(x)h(x)\,m(dx) \label{eq:prh} \\ &+ \liminf_{t\to 0} \int_X \int_X \frac{p_t(x,y)}{2t} \left[v(x)-v(y)\right]^2 h(y)h(x) m(dy)m(dx)\nonumber\\ &={ \int_X v^2(x)h^2(x) q(x) m(dx)}\nonumber\\ &{+ \liminf_{t\to 0} \int_X \int_X \frac{p_t(x,y)}{2t} \left[v(x)-v(y)\right]^2 h(y)h(x) m(dy)m(dx)}. 
\nonumber \end{align} Since $u=0$ on $\{x \in X: h(x)=0 \mbox{ or } \infty\}$, we have $u=vh$, hence $\EEE^{(t)}(u, u) =\EEE^{(t)}(vh, vh)$ for all $t>0$, and so $\EEE(u, u) =\EEE(vh, vh)$. From \eqref{eq:prh} we obtain \eqref{eq:Hardy}. If $f$ is absolutely continuous on $\R$, then \eqref{eq:fp} becomes equality, {and we return to \eqref{eq:rEtnJtIt} to analyse $I_t$ and $J_t$ more carefully.} If $\int_X u(x)^2 q(x) \,m(dx)<\infty$, which is satisfied in particular when $\EEE(u,u)<\infty$, and if \eqref{eqn:ff} holds, then we can apply Lebesgue dominated convergence theorem to $I_t$. In view of \eqref{def:DEEE} and \eqref{eq:rEtnJtIt}, the limit of $J_t$ then also exists, and we obtain \eqref{eq:Hardyv}. If $\int_X u(x)^2 q(x) \,m(dx)=\infty$, then \eqref{eq:prh} trivially becomes equality. Finally, \eqref{eqn:ff} holds for $f(t)=t_+^\beta$ with $\beta\ge 0$. \end{proof} \begin{cor}For every $u\in L^2(m)$ we have $\EEE(u, u) \geq \int_{{X}} u(x)^2 q(x) \,m(dx)$. \end{cor} We are interested in non-zero quotients $q$. This calls for lower bounds of the numerator and upper bounds of the denominator. The following consequence of \eqref{eq:Hardy} applies when sharp estimates of $p$ are known. \begin{cor}\label{rem:cppb} Assume there are a $\mathcal{B}_{(0,\infty)}\times \mathcal{M}\times \mathcal{M}$-measurable function $\bar p$ and a constant $c\ge 1$ such that for every $(t,x,y) \in (0, \infty)\times X \times X$, \begin{equation*}\label{eqn:hcomp} c^{-1} p_t(x,y) \le \bar p_t(x,y) \le c p_t(x,y). \end{equation*} Let $$ \bar h(x) = \int_0^\infty \int_X f(s) \bar p_s(x, y) \mu(dy) \,ds, $$ and let $\bar q(x)=0$ if $\bar h(x)=0$ or $\infty$, else let $$ \bar q(x) = \frac{1}{\bar h(x)} \int_0^\infty f'(s) \bar p_s\mu(x) \,ds. $$ Then $c^{-1}h\le \bar h \le c h$, $c^{-2}q\le \bar q \le c^2 q$, and for $u\in L^2(m)$ such that $u=0$ on $X\cap \{\bar h=0 \mbox{ or } \infty\}$, we have \begin{align} &\EEE(u, u) \geq c^{-2}\int u(x)^2 q(x) \,m(dx). 
\label{eq:Hardynew} \end{align} \end{cor} In the remainder of the paper we discuss applications of the results in Section~\ref{sec:Sc} and Section~\ref{sec:Hi} to transition densities with certain scaling properties. \section{Applications to (fractional) Laplacian}\label{sec:A} Let $0<\alpha<2$, $d\in \mathbb{N}$, $\mathcal{A}_{d,-\alpha}= 2^{\alpha}\Gamma\big((d+\alpha)/2\big)\pi^{-d/2}/|\Gamma(-\alpha/2)|$ and $\nu(x,y)=\mathcal{A}_{d,-\alpha}|y-x|^{-d-\alpha}$, where $x,y\in \Rd$. Let $m(dx)=dx$, the Lebesgue measure on $\Rd$. Throughout this section, $g$ is the Gaussian kernel \begin{equation}\label{eq:Gk} g_t(x) = (4\pi t)^{-d/2} e^{-|x|^2/(4t)}\,, \quad t>0,\quad x\in \Rd\,. \end{equation} For $u\in L^2(\R^d,dx)$ we define \begin{equation}\label{eqHifL} \mathcal{E}(u,u)= \frac12 \int_{\R^d}\!\int_{\R^d} [u(x)-u(y)]^2\nu(x,y) \,dy\,dx. \end{equation} The important case $\beta=(d-\alpha)/(2\alpha)$ in the following Hardy equality for the Dirichlet form of the fractional Laplacian was given by Frank, Lieb and Seiringer in \cite[Proposition 4.1]{MR2425175} (see \cite{MR2984215} for another proof; see also \cite{MR0436854}). In fact, \cite[formula (4.3)]{MR2425175} also covers the case of $(d-\alpha)/(2\alpha) \leq \beta \leq (d/\alpha)-1$ and smooth compactly supported functions $u$ in the following Proposition. Our proof is different from that of \cite[Proposition 4.1]{MR2425175} because we do not use the Fourier transform. \begin{prop}\label{cor:FS} If $0<\alpha<d\wedge 2$, $0 \le \beta \le(d/\alpha)-1$ and $u\in L^2(\Rd)$, then \begin{align*} &\mathcal{E}(u,u)= C \int_{\R^d} \frac{u(x)^2}{|x|^\alpha}\,dx + \int_{\R^d}\!\int_{\R^d} \left[\frac{u(x)}{h(x)}-\frac{u(y)}{h(y)}\right]^2 h(x)h(y) \nu(x,y) \,dy\,dx\,, \end{align*} where $C=2^{\alpha} \Gamma(\frac{d}{2} - \tfrac{\alpha\beta}{2})\Gamma(\frac{\alpha(\beta+1)}{2})\Gamma(\frac{d}{2} - \tfrac{\alpha(\beta+1)}{2})^{-1}\Gamma(\frac{\alpha \beta }{2})^{-1}$, $h(x)=|x|^{\alpha(\beta+1)-d}$. 
We get a maximal $C=2^{\alpha} \Gamma(\tfrac{d+\alpha}{4})^2 \Gamma(\tfrac{d-\alpha}{4})^{-2}$ if $\beta=(d-\alpha)/(2\alpha)$. \end{prop} \begin{proof} \eqref{eqHifL} is the Dirichlet form of the convolution semigroup of functions defined by subordination, that is we let $p_t(x,y)=p_t(y-x)$, where \begin{equation}\label{eq:stable-pt} p_t(x) = \int_0^\infty g_s(x) \eta_t(s)\,ds, \end{equation} $g$ is the Gaussian kernel defined in \eqref{eq:Gk} and $\eta_t\ge 0$ is the density function of the distribution of the $\alpha/2$-stable subordinator at time $t$, see, e.g., \cite{MR2569321} and \cite{MR2778606}. Thus, $\eta_t(s) = 0$ for $s\le 0$, and \[ \int_0^\infty e^{-us} \eta_t(s)\,ds = e^{-tu^{\alpha/2}}, \quad u\geq 0. \] Let $-1<\beta<d/\alpha-1$. The Laplace transform of $s\mapsto \int_0^\infty t^\beta \eta_t(s)\,dt$ is \begin{align*} \int_0^\infty \int_0^\infty t^\beta \eta_t(s)\,dt\, e^{-us} \,ds & =\int_0^\infty t^\beta \int_0^\infty \eta_t(s) e^{-us} \,ds\,dt =\int_0^\infty t^\beta e^{-tu^{\alpha/2}} \,dt \\ &= \Gamma(\beta+1) u^{-\frac{\alpha(\beta+1)}{2} }. \end{align*} Since $\int_0^\infty e^{-us} s^\gamma \,ds = \Gamma(\gamma+1) u^{-(\gamma+1)}$, \begin{equation}\label{int:tbetapt} \int_0^\infty t^\beta \eta_t(s)\,dt = \frac{\Gamma(\beta+1)}{\Gamma(\frac{\alpha(\beta+1)}{2})} s^{\frac{\alpha(\beta+1)}{2}-1}. \end{equation} We consider $-\infty< \delta < d/2 - 1$ and calculate the following integral for the Gaussian kernel by substituting $s=|x|^2/(4t)$, \begin{align}\label{int:tbetagt} \int_0^\infty g_t(x) t^\delta \,dt &= \int_0^\infty (4\pi t)^{-d/2} e^{-|x|^2/(4t)} t^\delta\,dt \\ &= (4\pi)^{-d/2} \left(\frac{|x|^2}{4}\right)^{\delta-d/2+1} \int_0^\infty s^{d/2-\delta-2} e^{-s}\,ds\nonumber\\ &= 4^{-\delta-1} \pi^{-d/2} |x|^{2\delta-d+2} \Gamma(d/2 - \delta - 1). 
\nonumber \end{align} {For} $f(t):=t_+^\beta$ and $\sigma$-finite Borel measure $\mu\ge 0$ on $\Rd$ we have \begin{align*} h(x)&:=\int_0^\infty \int_\Rd f(t) p_t(x-y)\mu(dy) \,dt \\ &=\int_0^\infty \int_\Rd t^\beta \int_0^\infty g_s(x-y)\eta_t(s)\,ds \, \mu(dy) \,dt\\ &= \int_\Rd \int_0^\infty \int_0^\infty t^\beta\eta_t(s) \,dt\, g_s(x-y)\,ds \, \mu(dy) \\ &= \int_\Rd \int_0^\infty \frac{\Gamma(\beta+1)}{\Gamma(\frac{\alpha(\beta+1)}{2})} s^{\frac{\alpha(\beta+1)}{2}-1} g_s(x-y)\,ds \, \mu(dy) \\ &=\frac{\Gamma(\beta+1)}{\Gamma(\frac{\alpha(\beta+1)}{2})} \frac{\Gamma(\frac{d}{2} - \tfrac{\alpha(\beta+1)}{2})}{4^\frac{\alpha(\beta+1)}{2} \pi^{d/2}} \int_\Rd |x-y|^{\alpha(\beta+1)-d}\, \mu(dy), \end{align*} where in the last two equalities we assume $\alpha(\beta+1)/2-1 < d/2-1$ and use \eqref{int:tbetapt} and \eqref{int:tbetagt}. If, furthermore, $\beta\ge 0$, then by the same calculation \begin{align*} \int_0^\infty \int_\Rd& f'(t) p_t(x,y)\mu(dy) \,dt \\ &=\beta \frac{\Gamma(\beta)}{\Gamma(\frac{\alpha \beta }{2})} 4^{-\frac{\alpha\beta}{2}} \pi^{-d/2} \Gamma(\frac{d}{2} - \tfrac{\alpha\beta}{2}) \int_\Rd |x-y|^{\alpha\beta-d}\, \mu(dy). \end{align*} Here the expression is zero if $\beta=0$. If $\mu=\delta_0$, then we get \begin{equation}\label{eq:whuL} h(x)=\frac{\Gamma(\beta+1)}{\Gamma(\frac{\alpha(\beta+1)}{2})} \frac{\Gamma(\frac{d}{2} - \tfrac{\alpha(\beta+1)}{2})}{4^\frac{\alpha(\beta+1)}{2} \pi^{d/2}} |x|^{\alpha(\beta+1)-d} \end{equation} and \begin{align}\label{eq:wquL} q(x) &= \frac{1}{h(x)}\int_0^\infty \int_\Rd f'(t) p_t(x,y)\mu(dy) \,dt\nonumber\\ & = \frac{ 4^{\alpha/2} \Gamma(\frac{d}{2} - \tfrac{\alpha\beta}{2})\Gamma(\frac{\alpha(\beta+1)}{2}) } { \Gamma(\frac{d}{2} - \tfrac{\alpha(\beta+1)}{2})\Gamma(\frac{\alpha \beta }{2}) } |x|^{-\alpha}. \end{align} By homogeneity, we may assume $h(x)=|x|^{\alpha(\beta+1)-d}$, without changing $q$. 
By the second statement of Theorem~\ref{thm:Hardy}, it remains to show that \begin{align} \lim_{t\to 0} & \int_{\R^d} \int_{\R^d} \frac{p_t(x,y)}{2t} \left[\frac{u(x)}{h(x)}-\frac{u(y)}{h(y)} \right]^2 h(y)h(x) dydx \nonumber\\ &=\frac12 \int_{\R^d}\!\int_{\R^d} \left[\frac{u(x)}{h(x)}-\frac{u(y)}{h(y)}\right]^2 h(y)h(x)\nu(x,y) \,dy\,dx\,. \label{eq:remains} \end{align} Since $p_t(x,y)/t \leq \nu(x,y)$ \cite{MR3165234} and $p_t(x,y)/t \to \nu(x,y)$ as $t\to 0$, \eqref{eq:remains} follows either by the dominated convergence theorem, if the right hand side of \eqref{eq:remains} is finite, or -- in the opposite case -- by Fatou's lemma. If $\alpha\beta = (d-\alpha)/2$, then we obtain $h(x)=|x|^{-(d-\alpha)/2}$ and the maximal \[ q(x) = \frac{ 4^{\alpha/2} \Gamma(\tfrac{d+\alpha}{4})^2 } { \Gamma(\tfrac{d-\alpha}{4})^2 } |x|^{-\alpha}. \] Finally, the statement of the proposition is trivial for $\beta=d/\alpha-1$. \end{proof} \begin{cor}\label{cor:aSp} If $0\le r\le d-\alpha$, $x\in \Rd$ and $t>0$, then $$\int_{\Rd} p_t(y-x)|y|^{-r}dy\le |x|^{-r}.$$ If $0<r< d-\alpha$, $x\in \Rd$, $t>0$, $\beta=(d-\alpha-r)/\alpha$, $q$ is given by \eqref{eq:wquL} and $\tilde p$ is given by \eqref{eq:pti}, then $$\int_{\Rd} \tilde p_t(y-x)|y|^{-r}dy\le |x|^{-r}.$$ \end{cor} \begin{proof} By \eqref{eq:ph} and the proof of Proposition~\ref{cor:FS}, we get the first estimate. The second estimate is stronger, because $\tilde p\ge p$, cf. \eqref{eq:pti}, and it follows from Theorem~\ref{thm:perturbation}, cf. the proof of Proposition~\ref{cor:FS}. We do not formulate the second estimate for $r=0$ and $d-\alpha$, because the extension suggested by \eqref{eq:wquL} reduces to a special case of the first estimate. \end{proof} For completeness we now give Hardy equalities for the Dirichlet form of the Laplacian in $\Rd$. Namely, \eqref{eq:W12} below is the optimal classical Hardy equality with remainder, and \eqref{eq:cHg} is its slight extension, in the spirit of Proposition~\ref{cor:FS}. 
For the equality \eqref{eq:W12}, see for example \cite[formula (2.3)]{MR1918494}, \cite[Section~2.3]{MR2469027} or \cite{MR2984215}. Equality \eqref{eq:cHg} may also be considered as a corollary of \cite[Section~2.3]{MR2469027}. \begin{prop}\label{cor:W12} Suppose $d\geq 3$ and $0 \le \gamma \le d-2$. For $u\in W^{1,2}(\Rd)$, \begin{align}\label{eq:cHg} \int_\Rd |\nabla u(x)|^2 dx \! =\!\gamma (d-2-\gamma) \!\int_\Rd \frac{u(x)^2}{|x|^{2}}dx+\!\int_\Rd \left|{h(x)}\nabla\frac{u}{h}(x)\right|^2dx, \end{align} where $h(x)=|x|^{\gamma+2-d}$. In particular, \begin{equation}\label{eq:W12} \!\!\int_{\R^d} |\nabla u(x)|^2\,dx = \frac{(d-2)^2}{4} \!\!\int_{\R^d} \frac{u(x)^2}{|x|^2}\,dx+\!\!\int_\Rd \left|{|x|^{\frac{2-d}{2}}}\nabla\frac{u(x)}{|x|^{(2-d)/2}}\right|^2dx. \end{equation} \end{prop} \begin{proof} The equality \eqref{eq:cHg} is trivial for $\gamma=d-2$, so let $0\le \gamma<d-2$. We first prove that for $u\in L^2(\R^d,dx)$, \begin{align} \mathcal{C}(u,u)&= \gamma(d-2-\gamma) \int_{\R^d} \frac{u(x)^2}{|x|^2}\,dx \nonumber\\ & +\lim_{t\to 0} \int_{\R^d} \int_{\R^d} \frac{{g}_t(x,y)}{2t} \left(\frac{u}{h}(x)-\frac{u}{h}(y)\right)^2 h(y)h(x) dy dx,\label{eq:W12n} \end{align} where $g$ is the Gaussian kernel defined in \eqref{eq:Gk}, and $\mathcal{C}$ is the corresponding quadratic form.
Even simpler than in the proof of Proposition~\ref{cor:FS}, we let $f(t) = t^{\gamma/2}$ and $\mu=\delta_0$, obtaining \begin{align} h(x) &:= \int_0^\infty f(s) g_s \mu(x)\,ds= \int_0^\infty \int_{\R^d} f(s) g_s(x-y) \mu(dy)\,ds \nonumber\\ &= \int_{\R^d} 4^{-\gamma/2-1} \pi^{-d/2} |x-y|^{\gamma-d+2} \Gamma(d/2 - \gamma/2 - 1) \mu(dy) \nonumber\\ &= 4^{-\gamma/2-1} \pi^{-d/2} |x|^{\gamma-d+2} \Gamma(d/2 - \gamma/2 - 1),\nonumber\\ \int_0^\infty f'(s) g_s\mu(x) \,ds &= \frac{\gamma}{2} 4^{-\gamma/2} |x|^{\gamma-d} \pi^{-d/2} \Gamma(d/2 - \gamma/2),\nonumber\\ q(x) &= \frac{\int_0^\infty f'(s) g_s\mu(x) \,ds}{h(x)} = \frac{\gamma(d-2-\gamma)}{|x|^2}.\label{eq:dqL} \end{align} By Theorem~\ref{thm:Hardy} we get \eqref{eq:W12n}. Since the quadratic form of the Gaussian semigroup is the classical Dirichlet integral, taking $\gamma=(d-2)/2$ and $q(x)=(d-2)^2/(4|x|^2)$ we recover the classical Hardy inequality: \begin{equation}\label{eq:W12eq} \int_{\R^d} |\nabla u(x)|^2\,dx \ge \frac{(d-2)^2}{4} \int_{\R^d} \frac{u(x)^2}{|x|^2}\,dx,\qquad u\in L^2(\Rd,dx). \end{equation} We, however, desire \eqref{eq:cHg}. It is cumbersome to directly prove the convergence of \eqref{eq:W12n} to \eqref {eq:cHg}\footnote{But see a comment before \cite[(1.6)]{MR2984215} and our conclusion below.}. Here is an approach based on calculus. 
For $u\in C_c^\infty(\Rd\setminus\{0\})$ we have \begin{align*} &\partial_j \left(|x|^{d-2-\gamma}u(x)\right) = (d-2-\gamma) |x|^{d-4-\gamma}u(x) x_j + |x|^{d-2-\gamma} \partial_j u(x),\\ &\left|\nabla \left(|x|^{d-2-\gamma}u(x)\right)\right|^2 = |x|^{2(d-4-\gamma)} \big[(d-2-\gamma)^2 u(x)^2 |x|^2 +|x|^{4} |\nabla u(x)|^2 \\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\;\; + (d-2-\gamma) \left< \nabla (u^2)(x), x\right> |x|^2 \big], \end{align*} hence \begin{align*} \int_\Rd \left|\nabla\frac{u}{h}(x)\right|^2h(x)^2dx &=\int_\Rd \left|\nabla \left(|x|^{d-2-\gamma}u(x)\right)\right|^2 |x|^{2(\gamma+2-d)}dx \nonumber \\ &=(d-2-\gamma)^2 \int_\Rd u(x)^2 |x|^{-2}dx + \int_\Rd |\nabla u(x)|^2 dx \nonumber\\ &\quad +(d-2-\gamma) \int_\Rd \left< \nabla (u^2)(x), |x|^{-2}x\right> dx . \label{e:still} \end{align*} Since ${\rm div}(|x|^{-2}x)=(d-2) |x|^{-2}$, the divergence theorem yields \eqref{eq:cHg}. We then extend \eqref{eq:cHg} to $u\in C^\infty_c(\Rd)$ as follows. Let $\psi\in C_c^\infty(\Rd)$ be such that $0\le \psi \le 1$, $\psi(x)=1$ if $|x|\le 1$, $\psi(x)=0$ if $|x|\ge 2$. Let $u_n(x)=u(x)[1-\psi(nx)]$, $n\in \N$. We note the local integrability of $|x|^{-2}$ in $\Rd$ with $d\ge 3$. We let $n\to \infty$ and have \eqref{eq:cHg} hold for $u$ by using the convergence in $L^2(|x|^{-2}dx)$, the inequality $|\nabla u_n(x)| \leq |\nabla u(x)| + c_1|u(x)| |x|^{-1}$, the identity $h(x)\nabla(u_n/h)(x)=\nabla u_n(x)-u_n(x)[\nabla h(x)]/h(x)$ for $x\neq 0$, the observation that $|\nabla h(x)|/h(x)\leq c|x|^{-1}$ and the dominated convergence theorem. We can now extend \eqref{eq:cHg} to $u\in W^{1,2}(\Rd)$. Indeed, assume that $C^\infty_c(\Rd)\ni v_n\to u$ and $\nabla v_n\to g$ in $L^2(\Rd)$ as $n\to \infty$, so that $g=\nabla u$ in the sense of distributions. We have that $h(x)\nabla(v_n/h)(x)=\nabla v_n(x)-v_n(x)[\nabla h(x)]/h(x)\to g-u [\nabla h(x)]/h(x)$ in $L^2(\Rd)$. The latter limit is $h\nabla(u/h)$, as we understand it. We obtain the desired extension of \eqref{eq:cHg}.
As a byproduct we actually see the convergence of the last term in \eqref{eq:W12n}. Taking $\gamma=(d-2)/2$ in \eqref{eq:cHg} yields \eqref{eq:W12}. \end{proof} We note that \eqref{eq:W12eq} holds for all $u\in L^2(\Rd)$. \begin{cor}\label{cor:aSpL} If $0\le r\le d-2$, $x\in \Rd$ and $t>0$, then $$\int_{\Rd} g_t(y-x)|y|^{-r}dy\le |x|^{-r}.$$ If $0< r< d-2$, $x\in \Rd$, $t>0$, $\beta=(d-2-r)/2$, $q$ is given by \eqref{eq:dqL}, and $\tilde g$ is the Schr\"odinger perturbation of $g$ by $q$ as in \eqref{eq:pti}, then $$\int_{\Rd} \tilde g_t(y-x)|y|^{-r}dy\le |x|^{-r}.$$ \end{cor} The proof is similar to that of Corollary~\ref{cor:aSp} and is left to the reader. \section{Applications to transition densities with global scaling}\label{s:asjm} In this section we show how sharp estimates of transition densities satisfying certain scaling conditions imply Hardy inequalities. In particular we give Hardy inequalities for symmetric jump processes on metric measure space studied in \cite{MR2357678}, and for unimodal L\'evy processes recently estimated in \cite{MR3165234}. {In what follows we assume that $\phi:[0,\infty)\to [0,\infty)$ is nondecreasing and left-continuous, $\phi(0)=0$, $\phi(x)>0$ if $x>0$ and $\phi(\infty^-):=\lim_{x\to \infty}\phi(x)=\infty$.} We denote, as usual, \[ \phi^{-1}(u) = \inf \{s>0: \phi(s) {>}u\}, \qquad u {\geq}0. \] {Here is a simple observation, which we give without proof. \begin{lem}\label{lem:tp} Let $r,t\ge 0$. We have $t\ge \phi(r)$ if and only if $\phi^{-1}(t)\ge r$. \end{lem} } {We see that $\phi^{-1}$ is upper semicontinuous, hence right-continuous, $\phi^{-1}(\infty^{-})=\infty$, $\phi(\phi^{-1}(u))\leq u$ and $\phi^{-1}(\phi(s))\geq s$ for $s,u\ge 0$. If $\phi$ is continuous, then $\phi(\phi^{-1}(u))= u$, and if $\phi$ is strictly increasing, then $\phi^{-1}(\phi(s))= s$ for $s,u\ge 0$. Both these conditions typically hold in our applications, and then $\phi^{-1}$ is the genuine inverse function. 
} We first recall, after \cite[Section 3]{MR3165234}, \cite[Section 2]{MR3131293} and \cite[(2.7) and (2.20)]{MR2555291}, the following weak scaling conditions. We say that a function $\varphi: [0, \infty) \to [0, \infty)$ satisfies the global weak lower scaling condition if there are numbers $\la \in \R $ and $\lc \in (0,1] $, such that \begin{equation}\label{eq:LSC} \varphi(\lambda\theta)\, \ge\, \lc \,\lambda^{\,\la} \varphi(\theta),\quad \quad \lambda\ge 1, \quad\theta > 0. \end{equation}We then write $\varphi\in\WLSC(\la, {\lc})$. Put differently, $\varphi(R)/\varphi(r)\ge \lc \left(R/r\right)^{\la}$, $0<r\le R$. The global weak upper scaling condition holds if there are numbers $\ua \in \R$ and $\uc \in [1,\infty) $ such that \begin{equation}\label{eq:USC} \varphi(\lambda\theta)\,\le\, \uc\,\lambda^{\,\ua} \varphi(\theta),\quad\quad \lambda\ge 1, \quad\theta > 0, \end{equation} or $\varphi(R)/\varphi(r)\le \uc \left(R/r\right)^{\ua}$, $0<r\le R$. In short, $\varphi\in\WUSC(\ua, \uc)$. We note that $\varphi$ has the lower scaling if and only if $\varphi(\theta)/\theta^\la$ is almost increasing, {i.e. comparable with a nondecreasing function} on $[0,\infty)$, and $\varphi$ has the upper scaling if and only if $\varphi(\theta)/\theta^\ua$ is almost decreasing, see \cite[Lemma~11]{MR3165234}. Let $(F, \rho, m)$ be a metric measure space with metric $\rho$ and Radon measure $m$ with full support. We denote $B(x,r)=\{y\in F: \rho(x,y)<r\}$ and assume that there is a nondecreasing function $V:[0,\infty)\to [0,\infty)$ such that $V(0)=0$, $V(r)>0$ for $r>0$, and \begin{equation} c_1\, V(r) \leq m (B(x, r)) \leq c_2\, V(r) \quad \text{for all } x\in F \text{ and }r {\geq} 0. \label{eqn:univdn} \end{equation} We call $V$ the volume function. 
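Scaling conditions such as \eqref{eq:LSC} and \eqref{eq:USC} are easy to probe numerically. The Python sketch below is illustrative only; it samples the defining inequalities for the profile $\psi(r)=r\sqrt{\log(1+r)}$, used as an example later in the paper, and is consistent with $\psi\in\WLSC(1,1)\cap\WUSC(2,1)$ (both memberships can also be checked by hand, the upper one via Bernoulli's inequality $(1+\theta)^{\lambda^2}\ge 1+\lambda\theta$).

```python
from math import log, sqrt

def psi(r):
    # the profile r * sqrt(log(1 + r)); nondecreasing on [0, infinity)
    return r * sqrt(log(1.0 + r))

# sample the lower bound psi(lam*th) >= c lam^a psi(th) with (a, c) = (1, 1)
# and the upper bound psi(lam*th) <= C lam^A psi(th) with (A, C) = (2, 1)
for th in [10.0 ** k for k in range(-4, 5)]:
    for lam in [1.0, 1.5, 2.0, 10.0, 100.0]:
        assert psi(lam * th) >= lam * psi(th) * (1 - 1e-12)       # WLSC(1, 1)
        assert psi(lam * th) <= lam ** 2 * psi(th) * (1 + 1e-12)  # WUSC(2, 1)
```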
\begin{thm}\label{t:hardyphi1} Let $p$ be a symmetric subprobability transition density on $F$, with Dirichlet form $\EEE$, and assume that \begin{align}\label{e:etd} p_t(x,y)&\approx \frac 1{V(\phi ^{-1}(t))}\wedge \frac{t}{V(\rho(x,y))\phi (\rho(x,y))},\quad t>0,\quad x, y \in F, \end{align} where { $\phi , V: [0,\infty)\to (0,\infty)$ are non-decreasing, positive on $(0,\infty)$, $\phi(0)=V(0)=0$, $\phi(\infty^-)=\infty$} and $V$ satisfies \eqref{eqn:univdn}. If $\lA > \ua>0$, $V\in \WLSC(\lA, \lC )$ and $\phi \in \WUSC(\ua, \uc)$, then there is $C>0$ such that \begin{equation}\label{eq:hardyphi1} \EEE(u, u) \ge C \int_{F} \frac{u(x)^2 }{\phi (\rho(x,y))}\,m(dx), \qquad y \in F,\quad u\in L^2(F,m). \end{equation} \end{thm} \begin{proof} Let $y \in F$ and $u\in L^2(F,m)$. The constants in the estimates below are independent of $y$ and $u$. Let $ 0<\beta< \lA\ \!/\ua-1$ and define \begin{equation}\label{e:defh} h(x)= \int_0^{\infty} t^\beta p_t(x, y)\, dt, \quad x \in F. \end{equation} We shall prove that \begin{align}\label{eq:hardyphi11} &\EEE(u, u) \approx \int_{F} \frac{u(x)^2 }{\phi(\rho(x,y))}\,m(dx) \\ &+ \liminf_{t\to 0} \int_F \int_F \frac{p_t(x,z)}{2t} \left(\frac{u(x)}{h(x)}-\frac{u(z)}{h(z)} \right)^2 h(z)h(x) m(dz)m(dx). \nonumber \end{align} To this end, we first verify \begin{equation}\label{e:ah} h(x) \approx \phi (\rho(x, y))^{\beta+1} /V(\rho(x, y)), \qquad \rho(x, y)>0. \end{equation} Indeed, letting $r=\rho(x, y)>0$ we first note that, by Lemma~\ref{lem:tp} and the monotonicity of $V$, $t\ge \phi(r)$ is equivalent to $tV(\phi^{-1}(t))\ge V(r)\phi(r)$, whence \begin{align*} h(x) &\approx V(r)^{-1} \phi (r)^{-1} \int_{0}^{\phi (r)} t^{\beta+1}\,dt + \int_{\phi (r)}^\infty \frac{t^\beta}{V(\phi ^{-1}(t)) }\, dt \\ &= (\beta+2)^{-1} \phi (r)^{\beta+1}/V(r) + I. \end{align*} To estimate $I$, we observe that the assumption $\phi \in \WUSC(\ua, \uc)$ implies $\phi^{{-1}} \in \WLSC(1/\ua, \uc^{\ \! -1/\ua})$ \cite[Remark~4]{MR3165234}.
If $r>0$ and $t\ge \phi(r)$, then \begin{align*} \frac{V(\phi^{-1}(t))}{V(r)}&\ge \frac{V(\phi^{-1}(t))}{V(\phi^{-1}(\phi(r)))} \ge \lC \left(\frac{\phi^{-1}(t)}{\phi^{-1}(\phi(r))} \right)^\lA\\ & \ge \lC\ \! \uc^{\ \!-\lA\ \!/\ua} \frac{t^{\ \! \lA\ \!/\ua}}{\phi(r)^{\ \!\lA\ \!/\ua}}, \end{align*} hence, \begin{align*} \frac{t^\beta}{V(\phi ^{-1}(t)) } &\le \frac{\uc^{\ \!\lA\ \!/\ua}}{ \lC } \frac{\phi ({r})^{ \lA\ \!/\ua}\ t^{\beta-\lA\ \!/\ua}}{V(r)} . \end{align*} The claim \eqref{e:ah} now follows because \begin{align}\label{e:77} I &\leq \frac{\uc^{\ \!\lA\ \!/\ua}}{ \lC } \frac{\phi ({r})^{ \lA\ \!/\ua}}{V(r)} \int_{\phi (r)}^\infty t^{\beta-\lA\ \!/\ua} \, dt = \frac{\uc^{\ \!\lA\ \!/\ua} }{ \lC (\lA\ \!/\ua -1-\beta) } \, \frac{\phi ({r})^{\beta+1}}{ V(r)}. \end{align} The function $$ k(x):= \int_0^{\infty} p_t(x,y) (t^\beta)' \,dt, \quad x\in F, $$ also satisfies $$ k(x) \approx \phi (\rho(x, y))^{\beta} /V(\rho(x, y)), \quad x\in F. $$ This follows by recalculating \eqref{e:ah} for $\beta-1$. We get \begin{equation}\label{e:dmp} C_1 \phi (\rho(x, y))^{-1} \le q(x) := \frac{k(x)}{h(x)} \le C_2 \phi (\rho(x, y))^{-1}, \end{equation} by choosing any $\beta \in (0, \lA\ \!/\ua-1)$. The theorem follows from \eqref{eq:Hardyv}. \end{proof} \begin{rem}\label{rem:ef} With the above notation, for each $0<\beta< \lA\ \!/\ua-1$ there exists a constant $c$ such that for all $x,y\in F$ we have $$\int p_t(x,z)\phi (\rho(z, y))^{\beta+1} /V(\rho(z, y))m(dz)\le c \phi (\rho(x, y))^{\beta+1} /V(\rho(x, y)) $$ and $$ \int \tilde p_t(x,z)\phi (\rho(z, y))^{\beta+1} /V(\rho(z, y))m(dz)\le c \phi (\rho(x, y))^{\beta+1} /V(\rho(x, y)), $$ where $\tilde p$ is given by \eqref{eq:pti} with $q(x)=C_1 \phi (\rho(x, y))^{-1}$ on $F$ and $C_1$ is the constant in the lower bound of the sharp estimate in \eqref{e:dmp}. This is a non-explosion result for $\tilde p$, and it is proved in the same way as Corollary~\ref{cor:aSp}. 
\end{rem} \begin{rem} Interestingly, the Chapman-Kolmogorov equations and \eqref{e:etd} imply that $\phi$ in Theorem~\ref{t:hardyphi1} satisfies a lower scaling, too. We leave the proof of this fact to the interested reader because it is not used in the sequel. An analogous situation occurs in \cite[Theorem~26]{MR3165234}. \end{rem} In \cite{MR2357678} a wide class of transition densities is constructed on locally compact separable metric measure spaces $(F, \rho, m)$ with metric $\rho$ and Radon measure $m$ of infinite mass and full support on $F$. Here are some of the assumptions of \cite{MR2357678} (for details see \cite[Theorem 1.2]{MR2357678}). The functions $\phi, V:[0,\infty)\to(0,\infty)$ are increasing, $\phi(0)=V(0)=0$, $\phi (1)=1$, $\phi \in \WLSC(\la , \lc) \cap \WUSC(\ua, \uc)$, $V\in \WLSC(\lA, \lC ) \cap \WUSC(\uA, \uC)$, and $$\int_0^r\frac {s}{\phi (s)}\, ds \,\leq \,{c} \, \frac{ r^2}{\phi (r)} \quad \hbox{for every } r>0. $$ A symmetric measurable function $J(x, y)$ satisfying \begin{align}\label{eqn:Je} J(x, y) \approx \frac{1}{V(\rho (x, y)) \phi (\rho (x, y))},\qquad x,y\in F,x\neq y, \end{align} is considered in \cite[Theorem~1.2]{MR2357678} along with a symmetric pure-jump Markov process having $J$ as jump kernel and symmetric $p$ satisfying \eqref{e:etd} as transition density. By Theorem \ref{t:hardyphi1}, we obtain the following result. \begin{cor}\label{c:hardyphi1} Under the assumptions of \cite[Theorem 1.2]{MR2357678}, \eqref{eq:hardyphi1} and \eqref{eq:hardyphi11} hold if $\lA > \ua$. \end{cor} We now specialize to $F=\Rd$ equipped with the Lebesgue measure. Let $\nu$ be an infinite isotropic unimodal \emph{L\'evy measure} on $\Rd$, i.e. $\nu(dx)=\nu(|x|)dx$, where $(0,\infty)\ni r\mapsto \nu(r)$ is nonincreasing, and $$ \nu(\Rd\setminus\{0\})=\infty \quad \text{and} \quad \int_{\Rd\setminus\{0\}}(|x|^2\wedge 1) \ \nu(dx)<\infty.
$$ Let \begin{equation}\label{LKfpsi} \psi(\xi)=\int_{\R^d} \left(1- \cos \left<\xi,x\right>\right) \nu(dx). \end{equation} Because of rotational symmetry, $\psi$ depends only on $|\xi|$, and we can write $\psi(r)=\psi(\xi)$ for $r=|\xi|$. This $\psi$ is almost increasing \cite{MR3165234}, {namely $\pi^2\psi(r)\ge \psi^*(r):=\sup\{\psi(p): 0\le p\le r\}$.} Let $0<\la \leq \ua < 2$. If $0 \not\equiv \psi\in\WLSC(\la, \lc)\cap \WUSC(\ua,\uc)$, then the following defines a convolution semigroup of functions, \begin{equation}\label{LKf} p_t(x)=(2\pi)^{-d}\int_{\R^d} e^{i\left<\xi, {x}\right>}e^{-t\psi(\xi)}d\xi,\quad t>0, x\in \Rd, \end{equation} and the next two estimates hold \cite[Theorem~21]{MR3165234}. \begin{align} p_t(x) &\approx \left[ \psi^{{-}}(1/t)\right]^{d}\wedge \frac{t\psi(1/|x|)}{|x|^d} ,\qquad t>0,\; x\in \Rd, \label{eqn:epnw} \\ \nu (|x|) &\approx \frac{\psi(1/|x|)}{|x|^d} ,\qquad x\in \Rd. \label{eqn:epnwj} \end{align} Here $\psi^{-}(u)=\inf\{s\ge 0: \psi^*(s)\ge u\}$, the left-continuous inverse of $\psi^*$. The corresponding Dirichlet form is \begin{align*} \EEE(u, u) &=(2\pi)^{d}\int_\Rd |\hat{u}(\xi)|^2\psi(\xi)\,d\xi= \frac12 \int_{\R^d} \int_{\R^d} (u(x)-u(y))^2 \nu(y-x)\,dy\,dx\\ &\approx \int_{\R^d} \int_{\R^d} (u(x)-u(y))^2 \frac{\psi(|x-y|^{-1})}{|y-x|^d} \,dy\,dx, \end{align*} cf. \cite[Example~1.4.1]{MR2778606} and the special case discussed in the proof of Proposition~\ref{cor:FS} above. \begin{cor}\label{cor:hardyphi} If $d > \ua$, then there is $c >0$ such that for all $u\in L^2(\Rd)$ \begin{align}\label{eq:Hu} \EEE(u, u) & \ge c \int_{\R^d} u(x)^2 \psi(1/|x|)\,dx. \end{align} \end{cor} \begin{proof} Let $0 <\beta < (d /\ua)-1$, $h(x)=\int_0^{\infty} t^\beta p_t(x)\, dt$, and $k(x)=\int_0^{\infty} (t^\beta)' p_t(x)\, dt$. Considering $\rho(x,y)=|y-x|$, $\phi(r)=1/\psi(1/r)$ and $V(r)=r^d$, by \eqref{eq:hardyphi1} we get \eqref{eq:Hu} for all $u\in L^2(\Rd)$.
To add some detail, we note that $\phi$ satisfies the same scalings as $\psi^*$ and $\phi^{-1}(t)=1/\psi^{-}(t^{-1})$. Thus \eqref{e:ah} yields $h(x) \approx \psi(|x|^{-1})^{-\beta-1} |x|^{-d}$ and $k(x) \approx \psi(|x|^{-1})^{-\beta} |x|^{-d}$. In fact, we actually obtain Hardy equality. Indeed, \begin{align} \liminf_{t\to 0} & \int_{\R^d} \int_{\R^d} \frac{p_t(x,y)}{2t} \left[\frac{u(x)}{ h(x)}-\frac{u(y)}{ h(y)} \right]^2 h(y) h(x)dydx \nonumber\\ &=\frac12 \int_{\R^d}\!\int_{\R^d} \left[\frac{u(x)}{ h(x)}-\frac{u(y)}{ h(y)}\right]^2 h(y) h(x)\nu(|x-y|) \,dy\,dx\,,\label{eq:remains1} \end{align} because if $t\to 0$, then $p_t(x,y)/t \leq c\nu (|x-y|)$ by \eqref{eqn:epnw} and \eqref{eqn:epnwj}, and $p_t(x,y)/t \to \nu (|x-y|)$ (weak convergence of radially monotone functions implies convergence almost everywhere), and we can use the dominated convergence theorem or Fatou's lemma, as in the proof of Proposition~\ref{cor:FS}. We thus have a strengthening of \eqref{eq:Hu} for every $u\in L^2(\Rd)$: \begin{align} \EEE(u, u) =&\int_{\R^d} u(x)^2 \frac{k(x)}{h(x)}\,dx\nonumber\\ &+\frac12 \int_{\R^d}\!\int_{\R^d} \left[\frac{u(x)}{ h(x)}-\frac{u(y)}{ h(y)}\right]^2 h(y) h(x)\nu(|x-y|) \,dy\,dx. \end{align} \end{proof} For instance if we take $\psi(\xi)=|\xi|\sqrt{\log(1+|\xi|)}$, the L\'evy-Khintchine exponent of a subordinated Brownian motion \cite{MR2978140}, then we obtain \begin{equation*} \int_\Rd |\hat{u}(\xi)|^2 |\xi|\sqrt{\log(1+|\xi|)}d \xi \ge c \int_\Rd \frac{u(x)^2\sqrt{\log(1+|x|^{-1})}}{|x|}dx,\quad u\in L^2(\Rd). \end{equation*} \begin{rem} We note that \cite[Theorem 1, the ``thin'' case (T)]{DydaVahakangas} gives \eqref{eq:Hu} for continuous functions $u$ of compact support in $\Rd$. Here we extend the result to all functions $u\in L^2(\Rd)$, as typical for our approach.
We note in passing that \cite[Theorem~1, Theorem~5]{DydaVahakangas} offers a general framework for Hardy inequalities without the remainder terms and applications for quadratic forms on Euclidean spaces. \end{rem} Here is an analogue of Remark~\ref{rem:ef}. \begin{rem}\label{rem:efL} Using the notation above, for every $0 <\beta < (d -\ua)/\ua$, there exist constants $c_1$, $c_2$ such that $$\int p_t(y-x) \psi(|y|^{-1})^{-\beta-1} |y|^{-d}dy \le c_1 \psi(|x|^{-1})^{-\beta-1} |x|^{-d}, \quad x\in \Rd, $$ and $$ \int \tilde p_t(x,y)\psi(|y|^{-1})^{-\beta-1} |y|^{-d}dy \le c_1 \psi(|x|^{-1})^{-\beta-1} |x|^{-d}, \quad x\in \Rd, $$ where $\tilde p$ is given by \eqref{eq:pti} with $q(x)=c_2\psi(1/|x|)$ on $\Rd$. The result is proved as Remark~\ref{rem:ef}. In particular we obtain non-explosion of Schr\"odinger perturbations of such unimodal transition densities with $q(x)=c_2 \psi(1/|x|)$. Naturally, the largest valid $c_2$ is of further interest. \end{rem} \section{ Weak local scaling on Euclidean spaces}\label{sec:wls} In this section we restrict ourselves to the Euclidean space and apply Theorem~\ref{thm:Hardy} to a large class of symmetric jump processes satisfying two-sided heat kernel estimates given in \cite{MR2806700} and \cite{MR3165234}. Let $\phi: \R_+\to \R_+$ be a strictly increasing continuous function such that $\phi (0)=0$, $\phi(1)=1$, {and} $$ \lc \Big(\frac Rr\Big)^{\la } \, \leq \, \frac{\phi (R)}{\phi (r)} \ \leq \ \uc \Big(\frac Rr\Big)^{\ua } \quad \hbox{for every } 0<r<R \le 1. $$ Let $J$ be a symmetric measurable function on $\{(x,y)\in\R^d\times \R^d: x\neq y\}$ and let $\kappa_1, \kappa_2$ be positive constants such that \begin{align}\label{e:bm1} \frac{\kappa_1^{-1}}{|x-y|^d \phi (|x-y|)} \le J(x, y) \le \frac{\kappa_1}{|x-y|^d \phi (|x-y|)}, \quad |x-y| \le 1, \end{align} and \begin{align}\label{e:bm2} \sup_{x\in \R^d} \int_{\{y\in \R^d: |x-y| > 1\}} J(x, y) dy =:\kappa_2 <\infty.
\end{align} We consider the quadratic form $$\EEE(u,u)=\frac12\int_\Rd\int_\Rd [u(y)-u(x)]^2J(x,y)dydx, \qquad u\in L^2(\Rd,dx),$$ with the Lebesgue measure $dx$ as the reference measure, for the symmetric pure-jump Markov processes on $\R^d$ constructed in \cite{MR2524930} from the jump kernel $J(x, y)$. \begin{thm}\label{cor:hardyphi2} If $d \ge 3$, then \begin{equation}\label{eq:hardyphi2} \EEE(u, u) \ge c \int_{\R^d} u(x)^2 \frac{dx}{\phi (|x|) \vee |x|^2}, \qquad u\in L^2 (\R^d). \end{equation} \end{thm} \begin{proof} Let ${\mathcal{Q}}$ and $p_t(x,y)$ be the quadratic form and the transition density corresponding to the symmetric pure-jump Markov process in $\R^d$ with the jump kernel $J(x, y){\bf 1}_{\{ |x-y| \le 1\}}$ instead of $J(x, y)$, cf. \cite[Theorem 1.4]{MR2806700}. Thus, $$ {\mathcal{Q}} (u, u)=\frac12 \int_{\R^d\times \R^d} (u(x)-u(y))^2 J(x, y){\bf 1}_{\{ |x-y| \le 1\}} dx dy, \qquad u\in C_c(\R^d). $$ We define $h$ as $ h(x)= \int_0^{\infty} p_t(x, 0) t^\beta\, dt$, $x \in \R^d$ where $ {-1}<\beta < d/2-1$. We note that for every $T, M\ge 0$, \begin{align}\label{Gau} \int_{T}^{\infty} t^{\beta-\frac{d}2} e^{-\frac{Mr^2}{t}} dt ={r^{2\beta-d+2}} \int_0^{\frac{r^2}{T}} u^{-2-\beta+\frac{d}2} e^{-M u}du. \end{align} We shall use \cite[Theorem 1.4]{MR2806700}. We however note that the term $\log \frac{|x-y|}{t}$ in the statement of \cite[Theorem 1.4]{MR2806700} should be replaced by $1 + \log_+ \frac{|x-y|}{t}$, to include the case $c^{-1} t \le |x-y| \le c t$ missed in the considerations in \cite{MR2806700}. With this correction, our arguments are as follows. 
When $r =|x|\le 1$, we have \begin{align}\label{e:newq1} c_0^{-1}\left(\frac 1{\phi^{-1}(t)^d}\wedge \frac{t}{r^d\phi(r)}\right) \le p_t(x, 0) \le c_0\left(\frac 1{\phi^{-1}(t)^d}\wedge \frac{t}{r^d\phi(r)}\right), \quad t \in (0,1]\end{align} and \begin{align}\label{e:newq2} p_t(x, 0) \le c_0 \, t^{-d /2}e^{ -\uc \big( \big(r \, (\log_+ (\frac{r}{t}) +1) \big) \wedge \frac{r^2}t \big)} \le c_0 \, t^{-d /2}e^{ -\uc \frac{r^2}t }, \quad t >1. \end{align} Thus, by \eqref{e:newq1}, Lemma \ref{lem:tp}, \eqref{e:newq2}, \eqref{e:77}, \eqref{e:bm1} and \eqref{Gau}, \begin{eqnarray*} &&c_{3}\frac{\phi (r)^{\beta+1}}{r^d} \le c_0^{-1} \int_0^{ \phi (r)} \frac{t^{\beta+1}}{r^d\phi (r)}dt \le h(x)\\ &\le& c_0\int_0^{ \phi (r)} \frac{t^{\beta+1}}{r^d\phi (r)}dt + c_0 \int_{ \phi (r)}^{1} \frac {t^{\beta}}{(\phi ^{-1}(t))^d} dt+ c_0 \int_1^{\infty} t^{\beta-\frac{d}2} e^{-\frac{\uc r^2}{t}} dt\\ &\le& c_4\frac{\phi (r)^{\beta+1}}{r^d} + \frac{c_5}{r^{d-2-2\beta}} \int_0^{\infty} u^{-2-\beta+d/2} e^{-\uc u}du \\ &\le& c_6 \left({\phi (r)^{1+\beta}}+{r^{2+2\beta}} \right) r^{-d}. \end{eqnarray*} If $r=|x| > 1$, then by \cite[Theorem 1.4]{MR2806700}, we have \begin{align}\label{e:newq3} c_0^{-1} e^{-\lc r \, (\log_+ (\frac{r}{t}) +1) } \le p_t(x, 0)\le c_0 e^{-\uc r \, (\log_+ (\frac{r}{t}) +1) }, \quad t \in (0,1],\end{align} and for $t >1$ we have \begin{align}\label{e:newq4} c_0^{-1} \, e^{ -\lc \big( \big(r \, (\log_+ (\frac{r}{t}) +1) \big) \wedge \frac{r^2}t \big)} \le p_t(x, 0)/t^{-d /2} \le c_0 \, e^{ -\uc \big( \big(r \, (\log_+ (\frac{r}{t}) +1) \big) \wedge \frac{r^2}t \big)}. \end{align} In particular, \begin{align}\label{e:newq5} p_t(x, 0) \ge c_0^{-1} \, t^{-d /2}e^{ -\lc \big( \big(r \, (\log_+ (\frac{r}{t}) +1) \big) \wedge \frac{r^2}t \big)} \ge c_7 t^{-d /2}, \quad t >r^2. 
\end{align} Then \eqref{e:newq3}, \eqref{e:newq4}, \eqref{e:newq5}, \eqref{e:bm1} and \eqref{Gau} give \begin{eqnarray*} &&\frac{c_{7}}{r^{d-2-2\beta}} \int_0^{1} u^{-2-\beta+d/2} du = c_{7} \int_{r^2}^{\infty}t^{\beta-\frac{d}2} dt \le h(x)\\ &\le& c_{8}\int_0^{r} t^{\beta} e^{-c_{9}r} dt + c_{8} \int_{r}^{\infty} t^{\beta-\frac{d}2} e^{-\frac{c_{10} r^2}{t}} dt \\ &=& c_{8} (\beta+1)^{-1}r^{\beta+1} e^{-c_{9} r} + \frac{c_{8}}{r^{d-2-2\beta}} \int_0^{r} u^{-2-\beta+d/2}e^{-c_{10} u}du\\ &\le& \frac{c_{11}}{r^{d-2-2\beta}}. \end{eqnarray*} Thus, \[ h(x) \approx (\phi (|x|) \vee |x|^2)^{\beta+1} |x|^{-d}, \quad x\in \R^d. \] In particular, if $ {0}<\beta <d/2-1$, and $k(x)= \int_0^{\infty} p_t(x,0) (t^\beta)' \,dt$, then \[ k(x) \approx (\phi (|x|) \vee |x|^2)^{\beta}|x|^{-d}, \quad x\in \R^d. \] Therefore, \[ q(x) := \frac{k(x)}{h(x)} \approx \frac1{\phi (|x|) \vee |x|^2}. \] Theorem~\ref{thm:Hardy} yields \begin{equation}\label{eq:hardyphi2p} \EEE(u, u) \ge {\mathcal{Q}}(u, u) \ge c_{12} \int_{\R^d} u(x)^2 \frac{dx}{\phi (|x|) \vee |x|^2}, \quad u\in L^2(\R^d). \end{equation} \end{proof} \begin{rem}\label{rem:efLl} As in Remark~\ref{rem:efL} we obtain non-explosion for Schr\"odinger perturbations by $q(x)= c/[\phi (|x|) \vee |x|^2]$. \end{rem} \begin{rem} The arguments and conclusions of Theorem~\ref{cor:hardyphi2} are valid for the unimodal L\'evy processes, in particular for the subordinated Brownian motions, provided their L\'evy-Khintchine exponent $\psi$ satisfies the assumptions of local scaling conditions at infinity with exponents strictly {between $0$ and $2<d$} made in \cite[Theorem~21]{MR3165234}: \begin{align*}\label{eq:Hu1e} \int_\Rd |\hat{u}(\xi)|^2 \psi(\xi) d \xi &\ge c \int_{\R^d} u(x)^2 \left[\psi\left(\frac1{|x|}\right)\wedge \frac1{|x|^2}\right]\,dx, \qquad u\in L^2(\Rd). \end{align*} \end{rem}
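The substitution identity \eqref{Gau}, used several times in the proof above, can also be confirmed by direct numerical integration. The Python sketch below is illustrative only (the parameter values are arbitrary test values); it compares both sides with a crude trapezoidal rule, using a logarithmic substitution to handle the unbounded domain on the left-hand side.

```python
from math import exp, log

def trapezoid(f, a, b, n=100_000):
    # composite trapezoidal rule on [a, b]
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    s += sum(f(a + i * h) for i in range(1, n))
    return s * h

beta, d, M, r, T = 0.5, 5, 1.0, 1.3, 0.7  # arbitrary test values

# left-hand side of (Gau): int_T^infty t^{beta - d/2} e^{-M r^2/t} dt,
# computed with t = exp(x) and truncation of the integrable tail at t = 1e6
lhs = trapezoid(lambda x: exp(x) ** (beta - d / 2 + 1) * exp(-M * r * r / exp(x)),
                log(T), log(1e6))

# right-hand side: r^{2 beta - d + 2} int_0^{r^2/T} u^{d/2 - beta - 2} e^{-M u} du;
# for these parameters the exponent d/2 - beta - 2 equals 0, so the
# integrand stays bounded at u = 0
rhs = r ** (2 * beta - d + 2) * trapezoid(
    lambda u: u ** (d / 2 - beta - 2) * exp(-M * u), 0.0, r * r / T)

assert abs(lhs - rhs) < 1e-3 * rhs
```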
\begin{document} \def\thefootnote{} \footnotetext{MGE was supported as a Senior Research Fellow of the Australian Research Council. PWM was supported by `Fonds zur F\"orderung der wissenschaftlichen Forschung, Projekt P~10037~PHY'. The authors would also like to thank the organizers of the Nineteenth Winter School on Geometry and Physics held in Srn\'\i, the Czech Republic, in January 1999, where discussions concerning this article were initiated.\\ This paper is in final form and no version of it will be submitted for publication elsewhere.} \begin{center}{\LARGE\bf Some Remarks on the Pl\"ucker Relations}\\[15pt] {\Large\bf Michael G. Eastwood and Peter W.\ Michor} \end{center} \section{The Pl\"ucker relations} Let $V$ denote a finite-dimensional vector space. An $s$-vector $P\in\Lambda^sV$ is called {\sl decomposable} or {\sl simple} if it can be written in the form $$P=u\wedge v\wedge\cdots\wedge w\quad\mbox{for }u,v,\ldots,w\in V.$$ We shall use in the following both Penrose's abstract index notation and exterior calculus with the conventions of \cite{G}. \begin{thm}\label{plueckerthm} Let $P\in\Lambda^sV$ be an $s$-vector. Then $P$ is decomposable if and only if one of the following conditions holds: \begin{enumerate} \item\label{pluecker} $i(\Phi)P\wedge P=0$ for all $\Phi\in \Lambda^{s-1}V^*$. In index notation $P_{[abc\cdots d}P_{e]fg\cdots h}=0$. \item\label{dualpluecker} $i(i_P\Psi)P=0$ for all $\Psi\in \Lambda^{s+1}V^*$. \item\label{contraction} $i_{\alpha_1\wedge\dots\wedge\alpha_{s-k}}P$ is decomposable for all $\alpha_i\in V^*$, for any fixed $k\ge 2$. \item\label{improvedpluecker} $i(\Psi)P\wedge P=0$ for all $\Psi\in \Lambda^{s-2}V^*$. In index notation $P_{[abc\cdots d}P_{ef]g\cdots h}=0$. \item\label{dualimprovedpluecker} $i(i_P\Psi)P=0$ for all $\Psi\in \Lambda^{s+2}V^*$. \end{enumerate} \end{thm} \begin{proof} (\ref{pluecker}) These are the well known classical Pl\"ucker relations. For completeness' sake we include a proof.
Let $P\in\Lambda^s V$ and consider the induced linear mapping $\sharp_P:\Lambda^{s-1}V^*\to V$. Its image, $W$, is contained in each linear subspace $U$ of $V$ with $P\in\Lambda^s U$. Thus $W$ is the minimal subspace with this property. $P$ is decomposable if and only if $\dim W= s$, and this is the case if and only if $w\wedge P=0$ for each $w\in W$. But $i_\Phi P$ for $\Phi\in \Lambda^{s-1}V^*$ is the typical element in $W$. (\ref{dualpluecker}) This well known variant of the Pl\"ucker relations follows by duality (see \cite{GH}): \begin{align*} \langle P\wedge i(\Phi)P, \Psi \rangle &= \langle i(\Phi)P, i_P\Psi \rangle = \langle P, \Phi\wedge i_P\Psi \rangle = \\ &=(-1)^{(s-1)}\langle P, i_P\Psi\wedge \Phi \rangle = (-1)^{(s-1)}\langle i(i_P\Psi)P, \Phi \rangle. \end{align*} (\ref{contraction}) This is due to \cite{MV}. There it is proved using exterior algebra. Apparently, this result is included in formula (4), page 116 of \cite{W}. (\ref{improvedpluecker}) Another proof using representation theory will be given below. Here we prove it by induction on $s$. Let $s=3$. Suppose that $i_\alpha P\wedge P = 0$ for all $\alpha\in V^*$. Then for all $\beta\in V^*$ we have $ 0 = i_\beta(i_\alpha P\wedge P) = i_{\alpha\wedge\beta}P\wedge P + i_\alpha P\wedge i_\beta P$. Interchange $\alpha$ and $\beta$ in the last expression and add it to the original, then we get $0 = 2i_\alpha P\wedge i_\beta P$ and in turn $i_{\alpha\wedge\beta}P\wedge P= 0$ for all $\alpha$ and $\beta$, which are the original Pl\"ucker relations, so $P$ is decomposable. Now the induction step. Suppose that $P\in\Lambda^s V$ and that $i_{\alpha_1\wedge\dots\wedge\alpha_{s-2}}P\wedge P = 0$ for all $\alpha_i\in V^*$.
Then we have $$0= i_{\alpha_1}(i_{\alpha_1\wedge\dots\wedge\alpha_{s-2}}P\wedge P) = i_{\alpha_1\wedge\dots\wedge\alpha_{s-2}}P\wedge i_{\alpha_1}P = i_{\alpha_2\wedge\dots\wedge\alpha_{s-2}} (i_{\alpha_1}P)\wedge (i_{\alpha_1}P)$$ for all $\alpha_i$, so that by induction we may conclude that $i_{\alpha_1}P$ is decomposable for all $\alpha_1$, and then by (\ref{contraction}) $P$ is decomposable. (\ref{dualimprovedpluecker}) Again this follows by duality. \end{proof} Let us note that the following result (Lemma 1 in \cite{GM}), a version of the `three plane lemma', also implies (\ref{contraction}): Let $\{P_i: i\in I\}$ be a family of decomposable non-zero $k$-vectors in $V$ such that each $P_i+P_j$ is again decomposable. Then \begin{enumerate} \item[(a)] either the linear span $W$ of the linear subspaces $W(P_i)=\operatorname{Im}(\sharp_{P_i})$ is at most $(k+1)$-dimensional \item[(b)] or the intersection $\bigcap_{i\in I}W(P_i)$ is at least $(k-1)$-dimensional. \end{enumerate} Finally note that (\ref{pluecker}) and (\ref{improvedpluecker}) are both invariant under ${\mathrm{GL}}(V)$. In the next section we shall decompose (\ref{pluecker}) into its irreducible components in this representation. If $\dim V$ is high enough in comparison with $s$, then (\ref{improvedpluecker}) seemingly comprises fewer equations. \section{Representation theory} In order efficiently to analyse (\ref{pluecker}) and (\ref{improvedpluecker}) it is necessary to take a small excursion through representation theory. An extensive discussion of Young tableaux may be found in~\cite{fulton}.
Here we shall just need $$Y^{s,t}\equiv\raisebox{-30pt}{\begin{picture}(60,80) \put(20,0){\line(1,0){10}} \put(20,10){\line(1,0){10}} \put(20,30){\line(1,0){10}} \put(20,40){\line(1,0){20}} \put(20,50){\line(1,0){20}} \put(20,70){\line(1,0){20}} \put(20,80){\line(1,0){20}} \put(20,0){\line(0,1){80}} \put(30,0){\line(0,1){80}} \put(40,40){\line(0,1){40}} \put(25,23){\makebox(0,0){$\vdots$}} \put(25,63){\makebox(0,0){$\vdots$}} \put(35,63){\makebox(0,0){$\vdots$}} \put(5,40){\makebox(0,0){$s$}} \put(55,60){\makebox(0,0){$t$}} \put(15,40){\makebox(0,0){$\left\{\rule{0pt}{44pt}\right.$}} \put(45,60){\makebox(0,0){$\left.\rule{0pt}{22pt}\right\}$}} \end{picture}}$$ regarded as irreducible representations of ${\mathrm{GL}}(V)$. Then, as special cases of the Littlewood-Richardson rules, we have $$\begin{array}{ccr} \Lambda^sV\otimes\Lambda^sV&=&Y^{s,s}\oplus Y^{s+1,s-1}\oplus Y^{s+2,s-2}\oplus Y^{s+3,s-3} \oplus\cdots\oplus Y^{2s,0}\\[3pt] \Lambda^{s+1}V\otimes\Lambda^{s-1}V&=& Y^{s+1,s-1}\oplus Y^{s+2,s-2}\oplus Y^{s+3,s-3} \oplus\cdots\oplus Y^{2s,0}\\[3pt] \Lambda^{s+2}V\otimes\Lambda^{s-2}V&=& Y^{s+2,s-2}\oplus Y^{s+3,s-3} \oplus\cdots\oplus Y^{2s,0} \end{array}$$ and from the first two of these (\ref{pluecker}) says that $P\otimes P\in Y^{s,s}$. In fact, $$(\star\star)\qquad\begin{array}[t]{cccccccc} \Lambda^sV\odot\Lambda^sV &=&Y^{s,s}&\oplus&Y^{s+2,s-2}&\oplus&\cdots\\[3pt] \Lambda^sV\wedge\Lambda^sV&=&&Y^{s+1,s-1}&\oplus&Y^{s+3,s-3}&\oplus&\cdots \end{array}$$ so we can also see the equivalence of (\ref{pluecker}) and (\ref{improvedpluecker}) without any calculation. Having decomposed $\Lambda^sV\odot\Lambda^sV$ into irreducibles, it behoves one to investigate the consequences of having each irreducible component of $P\otimes P$ vanish separately.
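A dimension count corroborates the decomposition ($\star\star$). The Python sketch below is illustrative only; it computes $\dim Y^{a,b}$ via the Weyl dimension formula for ${\mathrm{GL}}(n)$, applied to the row partition conjugate to the two-column diagram with columns of heights $a$ and $b$, and checks that the symmetric and antisymmetric parts of $\Lambda^sV\otimes\Lambda^sV$ have the expected dimensions for small $n=\dim V$ and $s$.

```python
from math import comb
from fractions import Fraction

def gl_dim(partition, n):
    """Weyl dimension formula for the GL(n) irreducible with
    highest weight given by the row partition."""
    lam = list(partition) + [0] * (n - len(partition))
    dim = Fraction(1)
    for i in range(n):
        for j in range(i + 1, n):
            dim *= Fraction(lam[i] - lam[j] + j - i, j - i)
    return int(dim)

def y_dim(a, b, n):
    """dim Y^{a,b} for GL(n): two columns of heights a >= b, i.e. the
    row partition with b rows of length 2 and a - b rows of length 1."""
    if a > n:  # a column taller than n carries the zero representation
        return 0
    return gl_dim([2] * b + [1] * (a - b), n)

# symmetric part = Y^{s,s} + Y^{s+2,s-2} + ...,
# antisymmetric part = Y^{s+1,s-1} + Y^{s+3,s-3} + ...
for n in range(3, 7):
    for s in range(1, n + 1):
        e = comb(n, s)  # dim Lambda^s V for dim V = n
        sym = sum(y_dim(s + k, s - k, n) for k in range(0, s + 1, 2))
        alt = sum(y_dim(s + k, s - k, n) for k in range(1, s + 1, 2))
        assert sym == e * (e + 1) // 2
        assert alt == e * (e - 1) // 2
```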
The first of these gives us another improvement on the classical Pl\"ucker relations: \begin{thm}\label{optimal} An $s$-form $P$ is simple if and only if the component of $P\otimes P$ in $Y^{s+2,s-2}$ vanishes.\end{thm} \begin{proof}The representation $Y^{s+2,s-2}$ may be realised as those tensors $$T_{a_1b_1a_2b_2\ldots a_{s-2}b_{s-2}cdef}$$ which are symmetric in the pairs $a_jb_j$ for $j=1,2,\ldots s-2$, skew in $cdef$, and have the property that symmetrising over any three indices gives zero. The corresponding Young projection of $$P_{a_1a_2\ldots a_{s-2}cd}P_{b_1b_2\ldots b_{s-2}ef}$$ is obtained by skewing over $cdef$ and symmetrising over each of the pairs $a_jb_j$ for $j=1,2,\ldots,s-2$. Its vanishing, therefore, is equivalent to the vanishing of $$Q_{[cd}Q_{ef]}\quad\mbox{where }Q_{cd}= \alpha^{a_1}\beta^{a_2}\cdots\gamma^{a_{s-2}}P_{a_1a_2\ldots a_{s-2}cd}$$ for all $\alpha^a,\beta^a,\ldots,\gamma^a\in V^\ast$. According to~(\ref{improvedpluecker}), this means that $Q_{cd}$ is simple. Therefore, the theorem is equivalent to criterion (\ref{contraction}) of Theorem~\ref{plueckerthm}. \end{proof} Notice that this generally cuts down further the number of equations needed to characterise the simple $s$-vectors. The simplest instance of this is for 4-forms: $P$~is simple if and only if $$P_{[abcd}P_{ef]gh}=P_{[abcd}P_{efgh]}.$$ Written in this way, it is slightly surprising that one can deduce the vanishing of each side of this equation separately. Theorem~\ref{optimal} is optimal in the sense that the vanishing of any other component or components in the irreducible decomposition ($\star\star$) of $P\otimes P$ is either insufficient to force simplicity or causes $P$ to vanish. In the case of four-forms, for example, $$P_{[abcd}P_{efgh]}=0$$ if $P=v\wedge Q$ for some vector $v$ and three-form~$Q$. On the other hand, if the $Y^{4,4}$ component of $P\otimes P$ vanishes, then arguing as in the proof of Theorem~\ref{optimal} shows that $P=0$.
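To make the classical $s=2$ case tangible, here is a small numerical sketch (our own illustration, not part of the paper; all helper names are ours). For a 2-form $P$ in four variables, the full antisymmetrisation $P_{[ab}P_{cd]}$ is computed by brute force over all index permutations: it vanishes for a decomposable form $u\wedge v$ and is non-zero for $e_1\wedge e_2+e_3\wedge e_4$.

```python
from itertools import permutations

def perm_sign(p):
    # sign of a permutation given as a tuple of the indices 0..n-1
    sign, p = 1, list(p)
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            sign = -sign
    return sign

def wedge(u, v):
    # coefficients P_{ab} = u_a v_b - u_b v_a of the simple 2-form u ∧ v
    n = len(u)
    return [[u[a] * v[b] - u[b] * v[a] for b in range(n)] for a in range(n)]

def pluecker_obstruction(P):
    # full antisymmetrisation P_{[ab} P_{cd]} over the indices 0..3;
    # it vanishes exactly when the classical Pluecker relation holds
    total = 0
    for p in permutations(range(4)):
        a, b, c, d = p
        total += perm_sign(p) * P[a][b] * P[c][d]
    return total / 24.0

simple = wedge([1, 2, 0, -1], [0, 1, 3, 2])          # decomposable by construction
e12 = wedge([1, 0, 0, 0], [0, 1, 0, 0])
e34 = wedge([0, 0, 1, 0], [0, 0, 0, 1])
mixed = [[e12[a][b] + e34[a][b] for b in range(4)] for a in range(4)]  # e1∧e2 + e3∧e4

print(pluecker_obstruction(simple))  # 0.0 for the simple form
print(pluecker_obstruction(mixed))   # non-zero: not decomposable
```

The decomposable case returns exactly zero because all products stay in integer arithmetic before the final division.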
BLOOM. Survivors in addition to his mother include: two brothers, Donald Watson of Bloomfield and Samuel Watson of Albertville, Ala.; one sister, Sherry Barker of Bloomfield; and a special friend, Sue Gray of Bloomfield. He was preceded in death by an infant son, Eric Lynn Watson. Visitation is from 6 to 8 p.m. today at Chiles-Cooper Funeral Home in Bloomfield, where services will be conducted at 1 p.m. Thursday. Burial will follow in the Bloomfield Cemetery.
SEO Newtownabbey

If you are looking for an SEO company to rank your Newtownabbey business, we're here to help you. We can help push your webpages up the Google search results. So, if you're looking for SEO, Newtownabbey has the answer! Search Authority are a leading UK SEO agency providing SEO services to organisations in Newtownabbey, County Antrim and across the UK. We specialise in search engine optimisation and have over 20 years' experience in ranking businesses such as yours in the local and national Google search engine results pages.

Helping Newtownabbey businesses thrive online

We help businesses of all sizes get their website ranking on the first page on Google. This drives more qualified traffic to your website, which converts to paying customers. This is how some Newtownabbey businesses reach the top spot in Google. You may have searched for something like SEO company in Newtownabbey, Newtownabbey SEO services or even Newtownabbey SEO agency.

Why choose us as your Newtownabbey SEO agency?

Search engine optimisation for your Newtownabbey organisation is not as easy as some people may think it is. Finding a reputable SEO company that delivers effective results that last can be a daunting task. We get it. We believe that we are the best placed SEO agency to deliver results in County Antrim and here's why:

We Deliver Results

We have worked with hundreds of websites over the last 20 years, many of them Newtownabbey companies. Search engine optimisation is essential for Newtownabbey businesses, so if you are ready to talk SEO, just drop us a line. It really is that simple.
Presentation of the range on website

In order to enter the Canadian market, we need a revised bilingual website that is optimised for this market. With a relaunch of the website in English and French, potential customers should find the DESKO product world when searching for ID document scanner solutions online and be guided to the solution they are looking for on the website. This project is financially supported by the European Regional Development Fund (EFRE).

Through optimized marketing measures

To open up the Chinese market, it is necessary to optimize a Chinese-language website for the Baidu search engine and to be present on WeChat. WeChat is used as an all-in-one messenger app to build and strengthen business relationships. In this way, we create the communicative prerequisites to present our products in China to a larger audience. This project is also financially supported by the EFRE.

As with many companies, we also receive questions from customers and partners about the Corona crisis on a daily basis. In the following statement, we have therefore summarized the most important information on delivery capability, accessibility, etc. Read the complete statement: click here.

A highly efficient all-in-one solution: two leading Bavarian innovators pool their expertise! Airports have to implement corona rules of conduct consistently. An important part: DESKO product solutions for contactless processes from check-in to boarding. It's not easy to bring product innovations to the market in the middle of a global crisis. All the more gratifying is the strong market entry of the new DESKO ICON Scanner®. Meet us at the air transport industry's first major digital trade show!
Pyramid Egypt slot

Your unlock instructions might not be sent right away; please try again later. Sons of Slots reserves the right to change our offer, and its rules, at any time. Your free spins will remain claimable for 24 hours from the time they are credited and must be played within 24 hours of the time they are claimed. Playing the Pyramids of Fortune 9 slot you'll find wild symbols, free spins, cascading mechanics, multipliers, accumulating symbols and, of course, quite good payouts.

Please touch the screen as soon as any part of the title appears. This will skip the message and close it immediately. Enjoy only full-version games, no trials, no time limits. Please register for a free account in order to play this game for FREE. This action takes you to an older version of the iWin.com website and Games Manager, which supports accounts and subscriptions created earlier than October 2017.

Cost of This Sport

Firstly, instead of worrying about clearing the pyramid from the top row to the bottom, try to eliminate both sides evenly. This will give you more options for combinations as the game progresses. Cross three historic kingdoms, each with its own distinct surroundings and its own tips and techniques to be uncovered, as you make your way all the way down to the exit. We help many independent developers to produce more and better video games. Sorry, you cannot re-use a password you've already used. You must enter your password to be able to save any changes. Lead a group of experts through addictive word activities in the Saharan desert! Use your wits and vocabulary to stay one step ahead and survive the blistering sun.
Pyramid Solitaire was used as inspiration for Giza Solitaire. If you enjoy playing Pyramid Solitaire Ancient Egypt, you could also have a look at TriPeaks Solitaire, which is also in the Adding & Pairing solitaire games category.

What Are the Most Popular Pyramid Games?

The Book of Lady slot has 5 reels and… Drill That Gold is an exciting slot game where you go into the mine to extract gold and your winnings…. Fishin' Frenzy The Big Catch Megaways is a fishing-themed slot with six reels and from sixty-four ways to win…. This is a licensed slot machine that produces random outcomes, and all you need is good luck. You can match cards based on their face value in order to remove them. Under standard rules, the face cards are valued as follows. At the start of the game, the top card of the draw stack is revealed.
TITLE: Question about homomorphism of cyclic group

QUESTION [1 upvotes]: If $\varphi: G\to H$ is a homomorphism, how do I prove that if $a\in G$ has finite order then $\varphi(a)$ has finite order too, and that $$ord(\varphi(a))\mid ord(a)?$$

Thank you!

REPLY [2 votes]: Saying that $a$ has finite order means that $$ \langle a\rangle=\{a^n:n\in\mathbb{Z}\} $$ is a finite subgroup of $G$. Moreover, the order of $a$ is exactly $|\langle a\rangle|$. The homomorphism $\varphi\colon G\to H$ induces a surjective homomorphism $$ \varphi'\colon \langle a\rangle\to\langle \varphi(a)\rangle $$ because $\varphi'(a^k)=(\varphi(a))^k$ for each integer $k$. Thus the order of $\varphi(a)$ is finite, and from the homomorphism theorem you get that $$ \langle \varphi(a)\rangle\cong\langle a\rangle/\ker\varphi' $$ so the order of $\langle \varphi(a)\rangle$ divides the order of $\langle a\rangle$, which is what you wanted to prove.
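To make the divisibility concrete, here is a quick computational sanity check (illustrative only; the group choices and helper names are ours): take the additive cyclic groups $\mathbb{Z}_{12}$ and $\mathbb{Z}_8$ with the homomorphism $\varphi(x) = 2x \bmod 8$, and verify that $ord(\varphi(a)) \mid ord(a)$ for every $a$.

```python
from math import gcd

def order_mod(n, a):
    # additive order of a in Z_n: smallest k > 0 with k*a ≡ 0 (mod n)
    return n // gcd(n, a % n)

# φ: Z_12 → Z_8, φ(x) = 2x mod 8.  It is well defined because
# φ(12) = 24 ≡ 0 (mod 8), and clearly φ(x + y) ≡ φ(x) + φ(y) (mod 8).
def phi(x):
    return (2 * x) % 8

for a in range(12):
    ord_a, ord_fa = order_mod(12, a), order_mod(8, phi(a))
    assert ord_a % ord_fa == 0, (a, ord_a, ord_fa)  # ord(φ(a)) divides ord(a)

print("ord(φ(a)) | ord(a) for every a in Z_12")
```

For example $a = 1$ has order 12 while $\varphi(1) = 2$ has order 4 in $\mathbb{Z}_8$, and indeed $4 \mid 12$.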
Snowmobiling | Killington Things to Do

When you're planning your stay in a winter wonderland like Killington, there's nothing quite like exploring the mountainside on top of a powerful snowmobile, taking in the beautiful scenery of snow-blanketed trees as it all rushes by. Taking a snowmobiling tour allows you not only to experience this sensation, but also to learn more about the area. Below, we have put together a few pointers for going on a snowmobiling tour in Killington. Enjoy!

Feel the Thrill of a High-Octane Ride Through the Wilderness

With one of its most popular locations based out of Killington, Snowmobile Vermont is one of the largest and longest-running purveyors of snowmobile tours in the state of Vermont. Utilizing a fleet of cutting-edge Polaris brand snowmobiles, Snowmobile Vermont offers two pre-set tours that visitors can take, as well as custom-made tours tailored to the kind of experience you want. Among Killington things to do, snowmobiling has to be one of the most fun. Suitable for riders of all ages and snowmobiling experience levels, Snowmobile Vermont's tours take snowmobilers on scenic ski trails, as well as through the National Forest. There are also kid-friendly tours for children ages four to eleven, which include monitoring by your guide. Snowmobile riders are also provided with all of the gear they need to ride safely and comfortably, including helmets and boots. Warm winter clothing, such as ski suits, can be rented for a nominal fee. The Mountain Tour takes visitors on an hour-long adventure on groomed ski trails that lead through the woods, especially after the skiing trails have closed for the day. Alternatively, tour-goers can go on the two-hour Backcountry Tour, which takes riders on a journey covering twenty-five miles of varied terrain on the Calvin Coolidge State Forest trails of the Vermont State trail system.
Meanwhile, parents can send their children on the Kids' Tour, during which their guide watches them and assists them with operating the Polaris Indy 120 XC, a kid-sized snowmobile. Snowmobile Vermont's tour pricing is set at $89.00 for renting a mini-snowmobile on the Kids Snowmobile Tour (though family members of all ages are invited to join). The Mountain Tour costs $69.00 for a passenger to ride with a guide, $99.00 for a single-seater snowmobile rental, and $139.00 for a double-seater snowmobile rental. Finally, the Backcountry Tour is priced at $99.00 for a passenger with guide, $154.00 for a single snowmobile, and $199.00 for a double snowmobile.

Experience the Best of Killington Things to Do with GetAway Vacations

Contact our office to find out more about these snowmobiling tours and other fantastic Killington things to do when you book your stay in one of our awesome vacation rentals today!
You are cut to the heart when you realize that it was your sin that put Jesus on the cross; when you see Jesus looking at you. It was your refusal to submit to God's authority, your rebellious attitude to your parents, and your insistence on always doing things your way for which Jesus died.
Sales executives are the most important people in the organization when it comes to expanding markets, driving sales, and increasing revenue. To achieve these ambitious targets, sales executives establish new business, negotiate contracts, review sales performance, and develop marketing campaigns. However, to successfully advance your executive sales career, you need to master one more crucial skill – cover letter writing. Your cover letter is the first document the employer sees, so consider it a sales pitch. Ideally, your letter should spark interest and encourage the hiring manager to read your resume top to bottom, and then invite you for an interview. And this target is achievable provided that you take cover letter writing seriously. In today's guide, our online resume writers will explain the following: A cover letter is your introduction to a target employer. Not only will it be compared against the letters of your competitors, but your professional competencies, career history and communication skills will also be evaluated. With this in mind, you want a flawless, error-free letter that clearly articulates your professional value. The resume wizards of Resumeperk.com can prepare a compelling custom cover letter that humblebrags about what makes you a perfect fit for that sales executive or VP of Sales role. The writer will work on your letter until you are fully satisfied. A whopping 45% of recruiters will reject a resume that arrives without an attached cover letter. The stakes are even higher with C-level positions: the cost of a wrong hire is greater, so recruiters are pickier about the candidates. Since your sales role implies heavy interaction with high-profile clients, relationship building and communication, the hiring manager can easily see these qualities – or lack thereof – from your cover letter. A focused, concise, error-free letter increases your chances for an interview dramatically.
The essential elements of any cover letter include contact details, introduction, body, and closing. Your C-level letter is no exception to this rule. Here are a few guidelines for you to follow:
• Address the hiring person by name. A student or entry-level professional may write 'Dear Hiring Manager', but as a leader, you are expected to have better etiquette.
• Show how you learned about the position. It's better to avoid saying 'I found the listing on a job board'. The best strategy is to show your connection with the company. For instance, you might say that a friend working for them recommended you to apply, or that you spoke to someone during a conference and heard that the company was hiring new sales teams.
• Focus on accomplishments. As a leader, you will need to deliver tangible results. So, show how you achieved results in the past – for example, increased market share, surpassed sales quotas, grew the customer base, and built high-performing teams. You might also mention whether you encourage teamwork or individual work with your subordinates.
• Close confidently. It takes confidence, and even aggressiveness to a certain extent, to succeed in sales. So, keep the tone of your letter confident and persuasive. In the closing, say that you will follow up about the progress of your application and express confidence that you're an ideal fit for the opening.
In addition to listing your previous experience and accomplishments, your letter should also showcase the skills that employers look for in successful sales leaders. The specific skills you'll want to mention depend on the particular position and your strengths.
If you don't know which skills to focus on, here are some suggestions:
• Persuasion and negotiation
• Decision-making
• Strategic planning
• Marketing campaign development
• New business development
• Brand awareness
• Account management
• Client acquisition and retention
• Team building and motivation
• ROI analysis
While you may want to list these skill names in a resume, avoid doing so in a cover letter. Instead, describe a real situation in which you applied these skills and how it helped you achieve the goals set. Being a sales leader is stressful, so take a look at these ways to eliminate stress at work as recommended by our experts. Want to obtain a sales executive position in the tech industry? Here are some tips to help you out. Writing a persuasive cover letter that will wow your target employer is a tough task. To help you out, we've collected some extra writing tips: Do's: Are you returning to the office after maternity leave? Follow these expert tips to make the transition easy. Don'ts: Now that you know the principles of effective writing, let's take a look at the letter example. This letter will give you an idea of what a successful letter should look like, the overall letter structure and the tone of voice. However, avoid copying parts of the letter or the entire letter, as your own letter should be unique and based on your experience. Image source: As you see, this candidate successfully highlights their professional accomplishments and recognition, explaining which steps they took to meet their ambitious goals. The letter uses the persuasive and aggressive tone of voice recommended for sales executives. It also uses bullet points to draw attention to the results the candidate delivered in the first place. Once you've made it to the interview stage, you'll need to master interview skills to make an excellent impression in person.
Here are some pointers for you to get perfectly prepared and ace your next interview: Tailor your cover letter to the job posting, using the same language and highlighting the skills and attributes that the employer requests in the first place. Being highly relevant will ease your way through the ATS and build a connection with a human recruiter. Now that you know how to craft a compelling cover letter, it's time to make sure that your resume is up to scratch. Our website offers a free resume evaluation for professionals and executives. Email your resume to us, and one of our resume makers will describe its strengths and areas for improvement to you. Looking to get a professional executive resume right now? Seek no further: we offer resume writing at a competitive price. Ongoing communication with your writer and unlimited revisions are included. Order today to get 20% off your first order with us. Subscribe now and receive information about our services. Are you ready to place your order now and get a serious discount for the first order? If you still hesitate, you can contact our support team.
TITLE: Comparison of capacity of sets in $\mathbb{R}^n$

QUESTION [1 upvotes]: This is mainly in reference to this MSE post. Let $B_r \subset \mathbb{R}^n$ denote the ball of radius $r$ centered at the origin. Consider any set $F \subset B_1$. For all sets $\Omega \subset \mathbb{R}^n$ such that $F \subset \Omega$, define the $2$-capacity by $$cap_2 (F, \Omega) = \inf_{\phi|_F = 1, \phi \in C^\infty_0(\Omega)}\int_\Omega |\nabla \phi|^2 dV$$ It is clear that $cap_2(F, B_2) \geq cap_2(F, B_3)$. The answer there by soup proves that there is a constant $C$ such that $cap_2(F, B_2) \leq Ccap_2(F, B_3)$. This is done by considering a diffeomorphism $\Phi$ between $B_2$ and $B_3$ such that both $\Phi$ and $\Phi^{-1}$ have bounded derivatives. My question is: what happens if we substitute $B_3$ with the whole space $\mathbb{R}^n$? Could we still say that $cap_2(F, B_2) \leq Ccap_2(F, \mathbb{R}^n)$?

REPLY [2 votes]: Yes, this still holds, and I would prove it in a different (maybe equivalent) way. The key is the existence of $\psi \in C_0^\infty(B_2)$ with $\psi \equiv 1$ on $B_1$. For all $\varepsilon > 0$, there is $\phi_\varepsilon \in C_0^\infty(\mathbb{R}^n)$ with $\phi_\varepsilon \equiv 1$ on $F$ and $$\int |\nabla \phi_\varepsilon|^2 \, \mathrm{d}V \le cap_2(F,\mathbb{R}^n) + \varepsilon.$$ Hence, $\psi \cdot \phi_\varepsilon \in C_0^\infty(B_2)$ and $\psi \cdot \phi_\varepsilon \equiv 1$ on $F$. Moreover, by the product rule, the boundedness of $\nabla\psi$, and the Sobolev inequality (for $n > 2$), there is a constant $C > 0$ (independent of $\varepsilon$) with $$\int|\nabla(\psi \cdot \phi_\varepsilon)|^2 \, \mathrm{d}V \le C \, \int|\nabla\phi_\varepsilon|^2 \, \mathrm{d}V.$$ Thus, $$cap_2(F,B_2) \le \int|\nabla(\psi \cdot \phi_\varepsilon)|^2 \, \mathrm{d}V \le C \, (cap_2(F,\mathbb{R}^n) + \varepsilon).$$ The limit $\varepsilon \to 0$ finishes the inequality.

By the way: I think your definition of capacity is slightly wrong. Indeed, the set $\mathbb{Q}^n \cap B_1$ is countable and, thus, should have capacity zero (for $n > 1$). But with your definition, the capacity is $+\infty$.
May I ask where you found this definition?
\begin{document} { \title{\paperTitle} \author{ Howard H. Yang, \textit{Member, IEEE}, Zihan~Chen, \textit{Student Member, IEEE}, and Tony Q. S. Quek, \textit{Fellow, IEEE} \thanks{H. H. Yang is with the Zhejiang University/University of Illinois at Urbana-Champaign Institute, Zhejiang University, Haining 314400, China, the College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou 310007, China, and the Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Champaign, IL 61820, USA (email: haoyang@intl.zju.edu.cn).} \thanks{Z. Chen and T.~Q.~S.~Quek are with the Information Systems Technology and Design Pillar, Singapore University of Technology and Design, Singapore (e-mail: zihan\_chen@mymail.sutd.edu.sg, tonyquek@sutd.edu.sg).} } \maketitle \acresetall \thispagestyle{empty} \begin{abstract} We demonstrate that merely analog transmissions and match filtering can realize the function of an edge server in federated learning (FL). Therefore, a network with massively distributed user equipments (UEs) can achieve large-scale FL without an edge server. We also develop a training algorithm that allows UEs to continuously perform local computing without being interrupted by the global parameter uploading, which exploits the full potential of UEs' processing power. We derive convergence rates for the proposed schemes to quantify their training efficiency. The analyses reveal that when the interference obeys a Gaussian distribution, the proposed algorithm retrieves the convergence rate of a server-based FL. But if the interference distribution is heavy-tailed, then the heavier the tail, the slower the algorithm converges. Nonetheless, the system run time can be largely reduced by enabling computation in parallel with communication, and the gain is particularly pronounced when communication latency is high. These findings are corroborated via extensive simulations.
\end{abstract} \begin{IEEEkeywords} Federated learning, wireless network, analog over-the-air computing, zero-wait training, convergence rate. \end{IEEEkeywords} \acresetall \section{Introduction}\label{sec:intro} \subsection{Motivation} A federated learning (FL) system \cite{MaMMooRam:17AISTATS,LiSahTal:20SPM,ParSamBen:19} generally consists of an edge server and a group of user equipments (UEs). The entities collaboratively optimize a common loss function. The training process consists of three steps: ($a$) each UE conducts local training and uploads the intermediate parameters, e.g., the gradients, to the server, ($b$) the server aggregates the gradients to improve the global model, and ($c$) the server broadcasts the global parameter back to the UEs for another round of local computing. This procedure repeats until the model converges. However, edge services are not yet widely available in wireless networks, as deploying such computing resources on the access points (APs) is costly to the operators. And even if possible, using the precious edge computing unit to perform global model improvement -- which amounts to simple additions and/or multiplications -- results in significant resource underuse. That leads to a natural question:\\[-2.5mm] \newline \textbf{Question-1:} \textit{Can we run FL in a wireless network without an edge server while maintaining scalability and efficiency?}\\[-2.5mm] A possible strategy is to change the network topology from a star connection into a decentralized one \cite{LiCenChe:20AISTATS,ChePooSaa:20CMAG}. In this fashion, every UE only exchanges intermediate parameters with its geographically proximal neighbors in each communication round. If the network is fully connected, i.e., any pair of UEs can reach each other via finitely many hops, the training algorithm is able to eventually converge.
Nonetheless, such an approach suffers from two critical setbacks: ($i$) the communication efficiency is low because UEs' parameters can only be exchanged within local clusters in each global iteration, which results in a large number of communication rounds before the model can reach a satisfactory performance level; and ($ii$) the privacy issue is severe, as UEs may send their information to a deceitful neighbor without authentication from a centralized entity. Therefore, completely decentralizing the network is not a desirable solution to the posed question. Apart from putting the server to petty use, another disadvantage of the conventional FL algorithm is that once the local parameters are uploaded to the edge, UEs need to wait for the results before they can proceed to the next round of local computing. Since the UEs are obliged to freeze their local training during each global communication, where the latter can be orders of magnitude slower than the former \cite{LanLeeZho:17}, the system's processing power is highly underutilized. As such, the second question arises: \\[-2.5mm] \newline \textbf{Question-2:} \textit{Can UEs continue their local computing during global communication and use these extra calculations to reduce the system run time?} \subsection{Main Contributions} In light of the above challenges, we propose a new architecture, as well as a model training algorithm, that ($a$) attains a convergence rate similar to that of FL under the master-slave framework but without the help of an edge server and ($b$) allows local computations to be executed in parallel with global communication, thereby enhancing the system's tolerance to high network latency.
The main contributions of the present paper are summarized as follows: \begin{itemize} \item We develop a distributed learning paradigm that, in each communication round, allows all the UEs to simultaneously upload and aggregate their local parameters at the AP without utilizing an edge server, and later use the global model to rectify and improve the local results. This is accomplished through analog gradient aggregation \cite{SerCoh:20TSP} and replacing the locally accumulated gradients with the globally averaged ones. \item We derive the convergence rate for the proposed training algorithm. The result reveals that the convergence rate is primarily dominated by the level of heavy tailedness in the interference's statistical distribution. Specifically, if the interference obeys a Gaussian distribution, the proposed algorithm retrieves the convergence rate of a conventional server-based FL. When the interference distribution is heavy-tailed, then the heavier the tail, the slower the algorithm converges. \item We improve the developed algorithm by enabling UEs to continue their local computing in concurrence with the global parameter updating. We also derive the convergence rate for the new scheme. The analysis shows that the proposed method is able to reduce the system run time, and the gain is particularly pronounced in the presence of high communication latency. \item We carry out extensive simulations on the MNIST and CIFAR-10 data sets to examine the algorithm under different system parameters. The experiments validate that SFWFL achieves a convergence rate similar to, or even better than, that of a server-based FL if the interference follows a Gaussian distribution. They also confirm that the convergence performance of SFWFL is sensitive to the heavy tailedness of the interference distribution, where the convergence rate deteriorates quickly as the tail index decreases.
Yet, as opposed to conventional FL, under the SFWFL framework, an increase in the number of UEs is instrumental in accelerating the convergence. And the system run time is shown to be drastically reduced via pipelining computing with communication. \end{itemize} \begin{figure*}[t!] \centering \subfigure[\label{fig:1a}]{\includegraphics[width=0.95\columnwidth]{Figures/FL_CvnlSys_v1.eps}} ~~~ \subfigure[\label{fig:1b}]{\includegraphics[width=0.95\columnwidth]{Figures/FL_SFSys_v1.eps}} \caption{ Examples of ($a$) server-based and ($b$) server free FL systems. In ($a$), the edge server aggregates the gradients from a portion of the associated UEs to improve the global model. In ($b$), all the UEs concurrently send analog functions of their gradients to the AP, and the AP passes the received signal to a bank of match filters to obtain the automatically aggregated (but noisy) gradients. } \label{fig:FL_Systems_comparison} \end{figure*} \subsection{Outline} The remainder of this paper is organized as follows. We survey the related works in Section~II. In Section~III, we introduce the system model. We present the design and analysis of a server-free FL paradigm in Section~IV. We develop an enhanced version of the training algorithm in Section~V, that allows UEs to execute local computations in parallel with global communications. Then, we show the simulation results in Section VI to validate the analyses and obtain design insights. We conclude the paper in Section VII. In this paper, we use bold lower case letters to denote column vectors. For any vector $\boldsymbol{w} \in \mathbb{R}^d$, we use $\Vert \boldsymbol{w} \Vert$ and $\boldsymbol{w}^{\mathsf{T}}$ to denote the $L$-2 norm and the transpose of a column vector, respectively. The main notations used throughout the paper are summarized in Table~I. \section{Related Works} The design and analysis of this work stem from two prior arts: \textit{Analog gradient descent} and \textit{delayed gradient averaging}. 
In the following, we elaborate on the related works of these two aspects. \subsubsection{Analog gradient descent} This method capitalizes on the superposition property of electromagnetic waves for fast and scalable FL tasks \cite{ZhuWanHua:19TWC,GuoLiuLau:20,SerCoh:20TSP,YanCheQue:21JSTSP}: Specifically, during each global iteration, the edge server sends the global parameter to all the UEs. After receiving the global parameter, each UE conducts a round of local computing and, once finished, transmits an analog function of its gradient using a set of common shaping waveforms, one for each element in the gradient vector. The edge server receives a superposition of the analog transmitted signals, representing a distorted version of the global gradient. The server then updates the global model and feeds the update back to all the UEs. This procedure repeats for a sufficient number of rounds until the training converges -- the convergence is guaranteed if the loss function has nice structures (i.e., strong convexity and smoothness), even if the aggregated parameters are severely jeopardized by channel fading and interference noise \cite{YanCheQue:21JSTSP}. The main advantage of analog gradient descent is that the bandwidth requirement does not depend on the number of UEs. As a result, the system not only scales easily but also attains significant energy savings \cite{SerCoh:20TSP}. Moreover, the induced interference noise can be harnessed for accelerating convergence \cite{ZhaWanLi:22TWC}, enhancing privacy \cite{ElgParIss:21}, efficient sampling \cite{LiuSim:21}, or improving generalization \cite{YanCheQue:21JSTSP}.
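As a toy numerical sketch of this aggregation step (our own illustration: the quadratic local losses, Rayleigh fading with channel-inversion pre-scaling, and Gaussian interference are simplifying assumptions, not the paper's exact signal model), one round consists of each UE transmitting its pre-equalised gradient, the waveforms superposing at the AP, and the noisy matched-filter output serving directly as the aggregated gradient:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy over-the-air aggregation (illustrative assumptions: quadratic local
# losses, perfect channel-inversion power control, Gaussian interference).
N, d = 20, 5
targets = rng.normal(size=(N, d))   # UE n's local minimiser
w = np.zeros(d)                     # global model, broadcast each round

def local_grad(n, w):
    # gradient of the local loss f_n(w) = 0.5 * ||w - targets[n]||^2
    return w - targets[n]

for k in range(200):
    h = rng.rayleigh(scale=1.0, size=N) + 0.1            # fading, bounded below
    tx = np.stack([local_grad(n, w) / h[n] for n in range(N)])  # pre-equalise
    rx = (h[:, None] * tx).sum(axis=0)                   # superposition at the AP
    rx += 0.05 * rng.normal(size=d)                      # additive interference
    g = rx / N                                           # matched-filter output
    w = w - 0.1 * g                                      # global update, re-broadcast

print(np.round(w, 2))                    # ≈ minimiser of the average loss
print(np.round(targets.mean(axis=0), 2))
```

Because the superposition itself performs the summation, the per-round bandwidth and the aggregation cost in this sketch are independent of the number of UEs.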
In addition to these benefits, the present paper unveils another blessing of analog gradient descent: with this method, we can dispense with the edge server altogether -- as the old saying goes, ``Render unto Caesar the things which are Caesar's, and unto God the things that are God's.'' \subsubsection{Delayed gradient averaging} On a separate track, delayed gradient averaging \cite{ZhuLinLu:21} is devised by recognizing that the gradient averaging in FL can be postponed to a future iteration without violating the federated computing paradigm. Under delayed gradient averaging, the UEs send their parameters to each other at the end of each computing round and immediately start the next round of local training. The averaging step is deferred to a later iteration when the aggregated result is received, upon which a gradient correction term is adopted to compensate for the staleness. In this manner, the communication can be pipelined with computation, hence endowing the system with a high tolerance to communication latency. However, \cite{ZhuLinLu:21} requires each UE to pass its parameter to every other UE for gradient aggregation, which incurs hefty communication overhead, especially when the network grows in size. Even by adopting a server at the edge to take over the aggregation task, communication efficiency remains a bottleneck for the scheme. To this end, we incorporate analog gradient descent to circumvent the communication bottleneck of delayed gradient averaging, and show that such a marriage yields very fruitful outcomes.
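To make the correction step of delayed gradient averaging concrete, the following toy NumPy sketch (our own illustration, not the implementation of \cite{ZhuLinLu:21}) shows that once the delayed average arrives and the correction term is applied, every worker lands on exactly the iterate it would have reached had the average been available instantaneously:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 4, 3            # number of workers, model dimension
eta = 0.1              # learning rate (toy value)

# round-1 local gradients, one row per worker (stand-ins for real SGD gradients)
g1 = rng.normal(size=(N, d))

# each worker first applies its own (stale) round-1 gradient ...
w_local = -eta * g1
# ... and, once the round-1 average finally arrives, applies the correction term
g_avg = g1.mean(axis=0)
w_corrected = w_local - eta * (g_avg - g1)

# after the correction, every worker sits at the averaged-gradient iterate
assert np.allclose(w_corrected, -eta * g_avg)
```

The correction cancels the stale local gradient and substitutes the global average, which is precisely why the averaging can be postponed without changing the trajectory.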
\begin{table} \caption{ Notation Summary } \label{table:notation} \begin{center} \renewcommand{\arraystretch}{1.3} \begin{tabular}{c p{ 5.5cm } } \hline {\bf Notation} & {\hspace{2.5cm}}{\bf Definition} \\ \hline $N$; $\mathbf{s}(t)$ & Number of UEs in the network; a set of orthonormal waveforms \\ $M$; $D$ & Number of SGD iterations in one local computing round; number of local computing rounds in a global communication round \\ $f( \boldsymbol{w} )$; $\nabla f( \boldsymbol{w} )$ & Global loss function; and its gradient \\ $f_n( \boldsymbol{w} )$; $\nabla f_n( \boldsymbol{w} )$ & Local loss function of UE $n$; and its gradient \\ $x^{n}_k(t)$; $y_k(t)$ & Analog signal sent out by UE $n$ in the $k$-th communication round; analog signal received by the AP in the $k$-th communication round \\ $P_{n}$; $h_{n,k}$ & Transmit power of UE $n$; channel fading experienced by UE $n$ \\ $\tilde{\boldsymbol{g}}_k$; $\boldsymbol{\xi}_k$ & Noisy gradient received at the AP; electromagnetic interference that follows an $\alpha$-stable distribution \\ $\eta_k$ & Learning rate of the algorithm \\ $\alpha$ & Tail index of the heavy-tailed interference \\ $\boldsymbol{w}^{\langle \alpha \rangle}$; $\Vert \boldsymbol{w} \Vert_\alpha$ & Signed power $\alpha$ of a vector $\boldsymbol{w}$; $\alpha$-norm of a vector $\boldsymbol{w}$ \\ \hline \end{tabular} \end{center}\vspace{-0.63cm} \end{table} \section{System Model} We consider a wireless network consisting of one AP and $N$ UEs, as depicted in Fig.~1($b$). Each UE $n$ holds a loss function $f_n: \mathbb{R}^d \rightarrow \mathbb{R}$ that is constructed based on its local dataset. The goal of all the UEs is to jointly minimize a global objective function. More formally, they need to cooperatively find a vector $\boldsymbol{w} \in \mathbb{R}^d$ that satisfies the following: \begin{align} \label{equ:ObjFunc} \underset{{ \boldsymbol{w} \in \mathbb{R}^d }}{ \mathrm{min} } ~~ f(\boldsymbol{w}) = \frac{1}{N} \sum_{ n=1 }^N f_n( \boldsymbol{w} ).
\end{align} The solution to \eqref{equ:ObjFunc} is commonly known as the empirical risk minimizer, denoted by \begin{align} \boldsymbol{w}^* = \arg\min_{ \boldsymbol{w} \in \mathbb{R}^d } f(\boldsymbol{w}). \end{align} In order to obtain the minimizer, the UEs need to conduct local training and periodically exchange the parameters for a global update. Because the AP is not equipped with a computing unit, conventional FL training schemes that rely on an edge server to perform the intermediate global aggregation and model improvement are inapplicable in this context. That said, we will show in the sequel that by adopting analog over-the-air computing \cite{NazGas:07IT}, one can devise an FL-like model training method that is communication efficient, highly scalable, and has the same convergence rate as the paradigms that have an edge server. \begin{algorithm}[t!] \caption{ Server Free Wireless Federated Learning } \begin{algorithmic}[1] \label{Alg:Gen_FL} \STATE \textbf{Parameters:} $M$ = number of steps per local computing, $\eta_k$ = learning rate for the $k$-th round stochastic gradient descent. \STATE \textbf{Initialize:} Each agent sets $\boldsymbol{w}^n_{1,1}= \boldsymbol{w}_0 \in \mathbb{R}^d$ where $\boldsymbol{w}_0$ is a randomly generated vector and $n \in \{ 1, \cdots, N \}$. \FOR { $k = 1, \cdots, K$ } \FOR { each UE $n$ in parallel } \STATE Set the initial local parameter as follows: \begin{align} \label{equ:LocIntlz} \boldsymbol{w}_{k,1}^n &= \boldsymbol{w}_{k-1, M+1}^n \nonumber\\ &= \boldsymbol{w}_{k-1, 1}^n - \eta_{ k -1 } \left( \tilde{ \boldsymbol{g} }_{k-1} - \bar{\boldsymbol{g}}^n_{k-1} \right) \end{align} in which $\boldsymbol{w}_{i, j}^n$ denotes the $j$-th iteration at round $i$, $\tilde{ \boldsymbol{g} }_{k-1}$ is the parameters received from the AP, and $\bar{\boldsymbol{g}}^n_{k-1}$ is the locally aggregated gradient.
\FOR { $i$ = 1 to $M$ } \STATE Sample $\gamma_i \in \mathcal{D}_k$ uniformly at random, and update the local parameter $\boldsymbol{w}^n_{k,i}$ as follows \begin{align} \label{equ:LocGraDscnt} \boldsymbol{w}_{k, i+1}^n = \boldsymbol{w}_{k,i}^n - \eta_k \nabla f_n(\boldsymbol{w}_{k,i}^n; \gamma_i) \end{align} \ENDFOR \STATE Compute the aggregated local gradients at round $k$ as $\bar{\boldsymbol{g}}^n_{k} = \sum_{ i=1 }^M \nabla f_n(\boldsymbol{w}_{k,i}^n; \gamma_i)$, modulate $\bar{\boldsymbol{g}}^n_{k}$ onto a set of orthonormal waveforms and simultaneously send it out to the AP. \ENDFOR \STATE The AP passes the received signal to a bank of matched filters and arrives at the following \begin{align} \label{equ:AnlgGrdnt} \tilde{\boldsymbol{g}}_k = \frac{1}{N} \sum_{ n=1 }^N h_{ n,k } \, \bar{ \boldsymbol{g} }^n_k + \boldsymbol{\xi}_k \end{align} where $h_{ n,k }$ is the channel fading experienced by UE $n$ and $\boldsymbol{\xi}_k$ is the electromagnetic interference. The AP feeds back $\tilde{\boldsymbol{g}}_k$ to all UEs in a broadcast manner. \ENDFOR \STATE \textbf{Output:} $\{ \boldsymbol{w}^n_K \}_{n=1}^N$ \end{algorithmic} \end{algorithm} \section{Server Free Federated Model Training: Vanilla Version} In this section, we detail the design and analysis of a model training paradigm that achieves similar performance to FL without the help of an edge server. Owing to such a salient feature, we coin this scheme \textit{Server Free Wireless Federated Learning (SFWFL)}. We summarize the general procedures of SFWFL in Algorithm~\ref{Alg:Gen_FL} and elaborate on the major components below. \subsection{Design} Similar to conventional FL, SFWFL requires local training at the UEs, global communication of intermediate parameters, and feedback of the aggregated results. \subsubsection{Local Training} Before the training commences, all UEs agree on a randomly generated initial parameter $\boldsymbol{w}_0$.
Then, every UE conducts $M$ steps of SGD iteration based on its own dataset and uploads the locally aggregated gradient to the AP. The AP (automatically) aggregates the UEs' gradients by means of analog over-the-air computing--which will be elucidated soon--and feeds back the resultant parameters to all the UEs. Upon receiving the globally aggregated gradient, every UE replaces the locally aggregated gradient by this global parameter, as per \eqref{equ:LocIntlz}, and proceeds to the next round of local computing in accordance with \eqref{equ:LocGraDscnt}. It is important to note that by replacing the local gradients with the global one, the UEs' model parameters $\{ \boldsymbol{w}^n_k \}_{n=1}^N$ are aligned at the beginning of each local computing stage. As such, if the model training converges, every UE will have its parameters approach the same value. \subsubsection{Global Communication} During the $k$-th round of global communication, UE $n$ gathers the stochastic gradients calculated in the current computing round as $\bar{\boldsymbol{g}}^n_{k} = \sum_{ i=1 }^M \nabla f_n(\boldsymbol{w}_{k,i}^n; \gamma_i)$, and constructs the following analog signal: \begin{align} \label{equ:AnagMod} x^n_k(t) = \langle \, \mathbf{s}(t), \, \bar{\boldsymbol{g}}^n_{k} \, \rangle \end{align} where $\langle \cdot, \cdot \rangle$ denotes the inner product between two vectors and $\mathbf{s}(t) = (s_1(t), s_2(t), ..., s_d(t))$, $0 < t < T$, is a set of orthonormal baseband waveforms that satisfies: \begin{align} &\int_0^T s^2_i(t) dt = 1,~~~ i = 1, 2, ..., d \\ &\int_0^T s_i(t) s_j(t) \, dt = 0, ~~~\text{if}~ i \neq j. \end{align} In essence, operation \eqref{equ:AnagMod} modulates the amplitude of $s_i(t)$ according to the $i$-th entry of $\bar{\boldsymbol{g}}^n_{k}$ and superposes the results into an analog waveform. Once the transmit waveforms $\{x^n_k(t)\}_{n=1}^N$ have been assembled, the UEs send them out concurrently into the spectrum.
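A discrete-time sketch of this modulation step may be instructive. Below, sampled waveforms play the role of $\mathbf{s}(t)$: the rows of a matrix with orthonormal rows stand in for the orthonormal basis, so the matched-filter inner products at the receiver recover every coordinate of the modulated gradient exactly (all variable names and sizes are our own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
d, T = 8, 64                 # gradient dimension, samples per waveform

# discrete stand-in for the orthonormal waveform set s_1(t), ..., s_d(t):
# rows of s are orthonormal, mimicking the integral conditions above
Q, _ = np.linalg.qr(rng.normal(size=(T, T)))
s = Q[:, :d].T               # shape (d, T), so s @ s.T = I_d

g_bar = rng.normal(size=d)   # a UE's aggregated local gradient
x = g_bar @ s                # analog signal x(t) = <s(t), g_bar>

# matched filtering (inner product with each basis waveform) recovers g_bar
g_hat = s @ x
assert np.allclose(g_hat, g_bar)
```

Because the basis is fixed and shared, the same $d$ waveforms serve every UE, which is the root of the scheme's scalability discussed next.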
We assume the UEs employ power control to compensate for the large-scale path loss, while the instantaneous channel fading is unknown. Notably, since the waveform basis is independent of the number of UEs, this architecture is highly scalable. In other words, \textit{all the UEs can participate in every round of local training and global communication regardless of how many UEs there are in the network.} \subsubsection{Gradient Aggregation} The analog signals propagate through the wireless medium and accumulate at the AP's RF front end. The received waveform can be expressed as follows:{\footnote{In this paper, we assume the waveforms of different UEs are synchronized. Note that the issue of signal misalignment can be addressed via \cite{ShaGunLie:21TWC}. }} \begin{align} y_k(t) = \sum_{ n=1 }^N h_{n, k} x^n_k(t) + \xi (t) \end{align} where $h_{n, k}$ is the channel fading experienced by UE $n$ and $\xi(t)$ stands for the interference. Without loss of generality, we assume the channel fading is independent and identically distributed (i.i.d.) across the agents and communication rounds, with a unit mean and variance $\sigma^2$. Furthermore, we consider $\xi(t)$ to follow a symmetric $\alpha$-stable distribution \cite{ClaPedRod:20}, which is widely used in characterizing interference's statistical property in wireless networks \cite{Mid:77,WinPin:09,YanPet:03}. The AP passes the analog signal $y_k(t)$ to a bank of matched filters, where each branch is tuned to one element of the waveform basis, and outputs the vector in \eqref{equ:AnlgGrdnt}, where $\boldsymbol{\xi}_k$ is a $d$-dimensional random vector with each entry being i.i.d. and following an $\alpha$-stable distribution. The AP then broadcasts $\tilde{ \boldsymbol{g} }_k$ back to all the UEs. Owing to the high transmit power of the AP, we assume the global parameters can be received without error by all the UEs. Then, the UEs move to step-1) and launch a new round of local computing.
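The end-to-end aggregation in \eqref{equ:AnlgGrdnt} can be mimicked numerically. The sketch below (an illustration under our own parameter choices, not part of the analysis) draws unit-mean Rayleigh fading and symmetric $\alpha$-stable interference via the Chambers-Mallows-Stuck method, then forms the noisy global gradient $\tilde{\boldsymbol{g}}_k$:

```python
import numpy as np

rng = np.random.default_rng(2)

def sym_alpha_stable(alpha, size, rng):
    """Symmetric alpha-stable samples via the Chambers-Mallows-Stuck method."""
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    return (np.sin(alpha * U) / np.cos(U) ** (1 / alpha)
            * (np.cos(U - alpha * U) / W) ** ((1 - alpha) / alpha))

N, d, alpha = 100, 8, 1.6
g_bar = rng.normal(size=(N, d))                          # per-UE gradients
h = rng.rayleigh(scale=np.sqrt(2 / np.pi), size=(N, 1))  # unit-mean fading

# matched-filter output: fading-weighted average plus alpha-stable noise
xi = 0.01 * sym_alpha_stable(alpha, d, rng)
g_tilde = (h * g_bar).mean(axis=0) + xi

# distortion relative to the noiseless average (shrinks as N grows,
# since the fading coefficients average toward their unit mean)
err = np.linalg.norm(g_tilde - g_bar.mean(axis=0))
```

Despite the heavy-tailed perturbation, the convergence analysis in Section~IV shows that this distorted $\tilde{\boldsymbol{g}}_k$ suffices for training.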
The most remarkable feature of this model training paradigm is that it does not require an edge server to conduct global aggregation and/or model improvement. Instead, the AP exploits the superposition property of wireless signals to achieve fast gradient aggregation through a bank of matched filters. On the UEs' side, they replace the locally aggregated gradient with the global one at the beginning of each local computing round to align the model parameters. As will be shown next, the training algorithm converges even though the global gradients are highly distorted. Apart from being server free, it is noteworthy that since the UEs do not need to compensate for the channel fading, they can transmit at a relatively constant power level to save on hardware cost. Additionally, the random perturbation from fading and interference provides inherent privacy protection to the UEs' gradient information \cite{ElgParIss:21}. \subsection{Analysis} In this part, we derive the convergence rate to quantify the training efficiency of SFWFL. \subsubsection{Preliminary Assumptions} To facilitate the analysis, we make the following assumptions. \begin{assumption} \textit{The objective functions $f_n: \mathbb{R}^d \rightarrow \mathbb{R}$ are $\mu$-strongly convex, i.e., for any $\boldsymbol{w}, \boldsymbol{v} \in \mathbb{R}^d$ it is satisfied: \begin{align} f_n( \boldsymbol{w} ) \geq f_n( \boldsymbol{v} ) + \langle \nabla f_n( \boldsymbol{v} ), \boldsymbol{w} - \boldsymbol{v} \rangle + \frac{\mu}{2} \Vert \boldsymbol{w} - \boldsymbol{v} \Vert^2. \end{align} } \end{assumption} \begin{assumption} \textit{The objective functions $f_n: \mathbb{R}^d \rightarrow \mathbb{R}$ are $\lambda$-smooth, i.e., for any $\boldsymbol{w}, \boldsymbol{v} \in \mathbb{R}^d$ it is satisfied: \begin{align} f_n( \boldsymbol{w} ) \leq f_n( \boldsymbol{v} ) + \langle \nabla f_n( \boldsymbol{v} ), \boldsymbol{w} - \boldsymbol{v} \rangle + \frac{\lambda}{2} \Vert \boldsymbol{w} - \boldsymbol{v} \Vert^2.
\end{align} } \end{assumption} \begin{assumption} \textit{The stochastic gradients are unbiased and have bounded second moments, i.e., there exists a constant $G$ such that the following holds: \begin{align} \mathbb{E}\left[ \Vert \nabla f_n(\boldsymbol{w}_{k,i}^n; \gamma_i) \Vert^2 \right] \leq G^2. \end{align} } \end{assumption} Because the interference follows an $\alpha$-stable distribution, which has finite moments only up to the order $\alpha$, the variance of the globally aggregated gradient in \eqref{equ:AnlgGrdnt} may be unbounded. As such, conventional approaches that rely on the existence of second moments cannot be directly applied. In order to establish a universally applicable convergence analysis, we opt for the $\alpha$-norm as an alternative. Based on this metric, we introduce two concepts, i.e., the \textit{signed power} and \textit{$\alpha$-positive definite matrix} \cite{WanGurZhu:21}, below. \begin{definition} \textit{For a vector $\boldsymbol{w} = (w_1, ..., w_d)^{\mathsf{T}} \in \mathbb{R}^d$, we define its signed power as follows \begin{align} \boldsymbol{w}^{\langle \alpha \rangle } = \left( \mathrm{sgn}(w_1) \vert w_1 \vert^\alpha, ..., \mathrm{sgn}(w_d) \vert w_d \vert^\alpha \right)^{\mathsf{T}} \end{align} where $\mathrm{sgn}(x) \in \{ -1, +1 \}$ takes the sign of the variable $x$. } \end{definition} \begin{definition} \textit{A symmetric matrix $\boldsymbol{Q}$ is said to be $\alpha$-positive definite if $\langle \boldsymbol{v}, \boldsymbol{Q} \boldsymbol{v}^{\langle \alpha - 1 \rangle} \rangle > 0$ for all $\boldsymbol{v} \in \mathbb{R}^d$ with $\Vert \boldsymbol{v} \Vert_\alpha > 1$. } \end{definition} Armed with the above definitions, we make another assumption as follows. \begin{assumption} \textit{For any given vector $\boldsymbol{w} \in \mathbb{R}^d$, the Hessian matrix of $f(\boldsymbol{w})$, i.e., $\nabla^2 f(\boldsymbol{w})$, is $\alpha$-positive definite.
} \end{assumption} Furthermore, since each element of $\boldsymbol{\xi}_k$ has a finite $\alpha$-th moment, we assume the $\alpha$-th moment of $\boldsymbol{\xi}_k$ is upper bounded by a constant $C$, i.e., $\mathbb{E}[ \Vert \boldsymbol{\xi}_k \Vert_\alpha^\alpha ] \leq C$. \subsubsection{Convergence Rate of SFWFL} We lay out two technical lemmas that we will use extensively in the derivation. \begin{lemma} \textit{Given $\alpha \in [1, 2]$, for any $\boldsymbol{w}, \boldsymbol{v} \in \mathbb{R}^d$, the following holds: \begin{align} \Vert \boldsymbol{w} + \boldsymbol{v} \Vert_\alpha^\alpha \leq \Vert \boldsymbol{w} \Vert_\alpha^\alpha + \alpha \langle \boldsymbol{w}^{\langle \alpha - 1 \rangle }, \boldsymbol{v} \rangle + 4 \Vert \boldsymbol{v} \Vert_\alpha^\alpha. \end{align} } \end{lemma} \begin{IEEEproof} Please refer to \cite{Kar:69}. \end{IEEEproof} \begin{lemma} \textit{Let $\boldsymbol{Q}$ be an $\alpha$-positive definite matrix, for $\alpha \in [1, 2]$, there exist $\kappa, L >0$, such that \begin{align} \Vert \boldsymbol{I} - k \boldsymbol{Q} \Vert_\alpha^\alpha \leq 1 - L k, \qquad \quad \forall k \in [0, \kappa). \end{align} } \end{lemma} \begin{IEEEproof} Please see Theorem~10 of \cite{WanGurZhu:21}. \end{IEEEproof} Since the UEs' model parameters are aligned at the beginning of each local computing round, i.e., $\boldsymbol{w}^1_{k,1} = \boldsymbol{w}^2_{k,1} = \cdots = \boldsymbol{w}^N_{k,1}$, we denote such a quantity as $\boldsymbol{w}_k$ and present the first theoretical finding below. \begin{theorem} \label{thm:ConvAnals} \textit{Under the employed wireless system, if the learning rate is set as $\eta_k = \theta/k$ where $\theta > \frac{ \alpha - 1 }{ M L }$, then Algorithm-1 converges as: \begin{align} \label{equ:ConvRt_OAML} &\mathbb{E}\big[ \Vert \boldsymbol{w}_k - \boldsymbol{w}^* \Vert_{\alpha}^\alpha \big] \nonumber\\ &\leq \!
\frac{ 4 \theta^\alpha \Big( C + d^{ 1 - \frac{1}{\alpha} } G^\alpha M^\alpha \big( \lambda^\alpha M^\alpha + \frac{ \sigma^\alpha }{ N^{\alpha/2} } \big) \Big) }{ \mu M L - \alpha + 1 } \cdot \frac{1}{ k^{\alpha - 1} }. \end{align} } \end{theorem} \begin{IEEEproof} See Appendix~\ref{Apndx:ConvAnals_proof}. \end{IEEEproof} We highlight a few important observations from this result. \remark{ \textit{If the interference follows a Gaussian distribution, i.e., $\alpha = 2$, the SFWFL converges in the order of $\mathcal{O}( \frac{1}{ k } )$, which is identical to that of algorithms run under federated edge learning systems \cite{SerCoh:20TSP}. As such, the proposed model training algorithm can attain the same efficacy as conventional edge learning without the requirement of a server. } } \remark{ \textit{The interference's tail index, $\alpha$, plays an essential role in the convergence rate. Specifically, the smaller the value of $\alpha$, the heavier the tail in the interference distribution, which leads to slower convergence of the learning algorithm. } } \remark{ \textit{Advanced signal processing techniques can be adopted for denoising the aggregated gradient, which reduces the channel fading variation, $\sigma$, and accelerates the model training process. The effect is reflected in the multiplier of the convergence rate. } } \remark{ \textit{An increase in the number of UEs, $N$, can mitigate the impact of channel fading and speed up the convergence. Therefore, scaling up the system is, in fact, beneficial to the model training under SFWFL. } } \begin{algorithm}[t!] \caption{ Zero-Wait SFWFL } \begin{algorithmic}[1] \label{Alg:ZW_SFWFL} \STATE \textbf{Parameters:} $M$ = number of steps per local computing, $D$ = communication latency represented in the number of computing rounds, $\eta_k$ = learning rate for the $k$-th round stochastic gradient descent.
\STATE \textbf{Initialize:} Each agent sets $\boldsymbol{w}^n_{1,1}= \boldsymbol{w}_0 \in \mathbb{R}^d$ where $\boldsymbol{w}_0$ is a randomly generated vector and $n \in \{ 1, \cdots, N \}$. \FOR { $k = 1, \cdots, K + KD$ } \FOR { each UE $n$ in parallel } \STATE Set the initial local parameter as follows: \begin{align} \label{equ:ZW_Intliz} \boldsymbol{w}_{k,1}^n = \! \begin{cases} \boldsymbol{w}_{k-1,1}^n - \eta_{k-1} \bar{\boldsymbol{g}}^n_{k-1}, ~~~ \mbox{if } \mbox{$k$ mod $D \neq 1$}, \\ \quad\\ \boldsymbol{w}_{k-1,1}^n - \eta_{k-1} \bar{\boldsymbol{g}}^n_{k-1} \nonumber\\ \qquad - \eta_{ k -D } \left( \tilde{ \boldsymbol{g} }_{k-D} - \bar{\boldsymbol{g}}^n_{k-D} \right), ~ \mbox{otherwise.} \end{cases} \end{align} where $\boldsymbol{w}_{k-1, M+1}^n$ stands for the parameters calculated in round ($k-1$), $\tilde{ \boldsymbol{g} }_{k-D}$ is the global gradient received from the AP, and $\bar{\boldsymbol{g}}^n_{k-D}$ is the corresponding locally aggregated gradient. \FOR { $i$ = 1 to $M$ } \STATE Sample $\gamma_i \in \mathcal{D}_k$ uniformly at random, and update the local parameter $\boldsymbol{w}^n_{k,i}$ as per \eqref{equ:LocGraDscnt}. \ENDFOR \IF {$k$ mod $D = 1$} \STATE Compute the aggregated local gradients at round $k$ as $\bar{\boldsymbol{g}}^n_{k} = \sum_{ i=1 }^M \nabla f_n(\boldsymbol{w}_{k,i}^n; \gamma_i)$, modulate $\bar{\boldsymbol{g}}^n_{k}$ onto a set of orthonormal waveforms and simultaneously send it out to the AP. \ENDIF \ENDFOR \STATE Upon receiving signals from the UEs, the AP passes the received waveform to a bank of matched filters and obtains $\tilde{\boldsymbol{g}}_k$ as \eqref{equ:AnlgGrdnt}. The AP then feeds back $\tilde{\boldsymbol{g}}_k$ to all UEs in a broadcast manner. \ENDFOR \STATE \textbf{Output:} $\{ \boldsymbol{w}^n_{K + KD} \}_{n=1}^N$ \end{algorithmic} \end{algorithm} \begin{figure*}[t!]
\centering {\includegraphics[width=1.98\columnwidth]{Figures/Com_Comp.pdf}} \caption{ Comparison among the ($a$) compute-and-wait, ($b$) zero wait, and ($c$) aggressively updating model training schemes. In ($a$), local computing and global updating are conducted sequentially. In ($b$), local computing is executed in parallel to global communication. In ($c$), locally trained results are sent out upon finishing each round of on-device computing. } \label{fig:FL_commun_comparison} \end{figure*} \section{Server Free Federated Model Training: Zero-Wait Version} This section improves the learning algorithm, allowing UEs to continue their local training without waiting for the global parameters. As a result, the system run time can be largely reduced. Since UEs no longer need to wait for the global results, we call the proposed scheme \textit{Zero Wait SFWFL}. More details are provided below. \subsection{Design} To simplify notation, let us assume every UE in the network spends the same amount of time on parameter uploading and downloading during each round of global communication. The communication time spans $D$ local computing rounds, i.e., $MD$ SGD iterations in total. In order to better illustrate the concepts, we slightly abuse the notation $k$ in this section by referring to it as the $k$-th \textit{computing} round. Then, as outlined in Algorithm~\ref{Alg:ZW_SFWFL}, the Zero-Wait SFWFL no longer freezes the local computation power during the communication. Specifically, in the first round of local computing, each UE $n$ executes $M$ SGD iterations and arrives at the following: \begin{align} \boldsymbol{w}^n_{1, M + 1} &= \boldsymbol{w}^n_{1, M} - \eta_1 \nabla f_n( \boldsymbol{w}^n_{1,M}; \gamma_{1,M} ) \nonumber\\ &= \cdots = \boldsymbol{w}_0 - \eta_1 \sum_{ i = 1 }^M \nabla f_n( \boldsymbol{w}^n_{1,i}; \gamma_{1,i} ) \nonumber\\ &= \boldsymbol{w}_0 - \eta_1 \, \bar{ \boldsymbol{g} }^n_1.
\end{align} Upon finishing the first computing round, the UEs simultaneously send their accumulated gradients $\{ \bar{\boldsymbol{g}}^n_1 \}_{ n=1 }^N$ to the AP via analog transmissions. Then, unlike SFWFL, the UEs do not wait for the return of the global aggregation but immediately proceed to the next round of local computation. By the time the globally aggregated gradients are received, UE $n$ has already performed $D$ extra rounds of local updates and its model parameter can be written as: \begin{align} \boldsymbol{w}^n_{1+D, 1} &= \boldsymbol{w}^n_{D, M} - \eta_D \nabla f_n( \boldsymbol{w}^n_{D,M}; \gamma_{D,M} ) \nonumber\\ &= \boldsymbol{w}^n_{D, 1} - \eta_D \, \bar{ \boldsymbol{g} }^n_D \nonumber\\ &= \boldsymbol{w}_0 - \eta_D \bar{ \boldsymbol{g} }^n_D - \eta_{D-1} \bar{ \boldsymbol{g} }^n_{D-1} - \cdots - \eta_1 \bar{ \boldsymbol{g} }^n_1. \end{align} As the aggregated gradient, $\tilde{\boldsymbol{g}}_1$, of the first round is now available, UE $n$ can substitute all the first round local gradients by the global one, which yields: \begin{align} \boldsymbol{w}^n_{1+D, 1} &= ( \boldsymbol{w}_0 - \eta_1 \tilde{\boldsymbol{g}}_1 ) - \eta_D \bar{ \boldsymbol{g} }^n_D - \cdots - \eta_2 \bar{ \boldsymbol{g} }^n_2 \nonumber\\ &= \boldsymbol{w}^n_{D, 1} - \eta_D \, \bar{ \boldsymbol{g} }^n_D - \eta_1 ( \tilde{\boldsymbol{g}}_1 - \bar{ \boldsymbol{g} }^n_1 ). \end{align} This operation serves as the cornerstone of Algorithm~2. By successively replacing the local gradients $\bar{ \boldsymbol{g} }^n_k$ by the globally averaged $\tilde{\boldsymbol{g}}_k$, where $k \geq 2$, UE $n$'s model parameters $\boldsymbol{w}^n_k$ approach that in \eqref{equ:LocIntlz}. As such, the local models converge to the global optimum.
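The substitution above is a purely algebraic identity, which the following NumPy check confirms on random data (a sanity check of ours, not part of the training procedure itself):

```python
import numpy as np

rng = np.random.default_rng(3)
d, D = 5, 3
w0 = rng.normal(size=d)
eta = [0.1 / (k + 1) for k in range(D)]   # eta_1, ..., eta_D
g_bar = rng.normal(size=(D, d))           # local gradients, rounds 1..D
g_tilde = rng.normal(size=d)              # round-1 global aggregate

# zero-wait: run D rounds locally without waiting for the aggregate
w = w0.copy()
for k in range(D):
    w -= eta[k] * g_bar[k]

# once g_tilde_1 arrives, apply the delayed correction term
w_corrected = w - eta[0] * (g_tilde - g_bar[0])

# equivalent to having used the global gradient in round 1 all along
w_direct = w0 - eta[0] * g_tilde - sum(eta[k] * g_bar[k] for k in range(1, D))
assert np.allclose(w_corrected, w_direct)
```

The correction thus retroactively swaps the stale round-1 local gradient for the global one, without discarding any of the $D$ rounds of computation performed in the meantime.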
A comparison between the sequential training procedure where computations are suspended during global communications and the zero wait learning scheme that pipelines the on-device computing and parameter updating is provided in Fig.~2($a$) and Fig.~2($b$). This figure shows that the zero wait operation moves the gradient averaging to a later stage. Since the UEs no longer need to wait for the global model, they can compute local updates continuously. In this sense, the devices' processing power is fully exploited, which significantly accelerates the training process. \subsection{Analysis} In this part, we formally demonstrate the convergence of Zero-Wait SFWFL based on the assumptions made in Section~IV-B-1). To facilitate the presentation, we denote the initialized model parameter of UE $n$ in computing round $k$ as $\boldsymbol{w}^n_k$, i.e., $\boldsymbol{w}^n_k = \boldsymbol{w}^n_{k,1}$. We further denote $\bar{\boldsymbol{w}}_k$ as the averaged model parameter, given by \begin{align} \bar{ \boldsymbol{w} }_k = \frac{ 1 }{ N } \sum_{ n=1 }^N \boldsymbol{w}^n_k. \end{align} Then, according to Algorithm~\ref{Alg:ZW_SFWFL}, the following holds \begin{align} \label{equ:Glbl_PrmAve} \bar{ \boldsymbol{w} }_{k+1} = \bar{ \boldsymbol{w} }_k - \frac{ \eta_k }{ N } \sum_{ n=1 }^N \bar{\boldsymbol{g}}^n_k - \eta_{ k - D } \! \underbrace{ \left( \tilde{\boldsymbol{g}}_{k-D} - \frac{ 1 }{ N } \sum_{ n=1 }^N \bar{ \boldsymbol{g} }^n_{k-D} \right) }_{ \text{ gradient correction } }. \end{align} This result indicates that the UEs' gradients are aligned from round $(k-D)$ backwards via replacing the locally accumulated gradients by the global one, which is reflected by the gradient correction in \eqref{equ:Glbl_PrmAve}. As such, parameters of the UEs only differ on the most recent $D$ local gradients, which can be characterized formally as follows.
\begin{lemma} \label{lma:PrmtUnivBnd} \textit{The difference between the model parameter of UE $n$ and that averaged across all UEs is uniformly bounded by the following: \begin{align} \label{equ:GrdVar_Bnd} \mathbb{E}\left[ \Vert \boldsymbol{w}^n_k - \bar{ \boldsymbol{w} }_k \Vert^2_2 \right] \leq 4 \eta^2_{ k - D } \, M^2 D^2 G^2. \end{align} } \end{lemma} \begin{IEEEproof} See Appendix~\ref{Apndx:PrmtUnivBnd_proof}. \end{IEEEproof} From \eqref{equ:GrdVar_Bnd}, we can see that if the system adopts a diminishing learning rate, the UEs' local parameters will converge to the same global average. In this regard, we can concentrate on the convergence performance of $\bar{ \boldsymbol{w} }_k$, which is given by the following theorem. \begin{theorem} \label{thm:ZW_SFWFL_ConvAnals} \textit{Under the employed wireless system, if the learning rate is set as $\eta_k = \theta/k$ where $\theta > \frac{ \alpha - 1 }{ M L }$, then Algorithm-2 converges as: \begin{align} \label{equ:ConvRt_ZW_SFWFL} &\mathbb{E}\big[ \Vert \bar{ \boldsymbol{w}}_k - \boldsymbol{w}^* \Vert_{\alpha}^\alpha \big] \nonumber\\ &\leq \! \frac{ 4 \theta^\alpha \Big( C + \frac{ \sigma^\alpha M^\alpha G^\alpha d^{ 1 - \frac{1}{\alpha} } }{ N^{\alpha/2} } + d^{ 1 - \frac{1}{\alpha} } 2^\alpha \lambda^\alpha ( 1 \!+\! D )^\alpha G^\alpha M^{2 \alpha} \Big) }{ \big( \, \mu M L - \alpha + 1 \, \big) \cdot k^{ \alpha - 1 } } . \end{align} } \end{theorem} \begin{IEEEproof} See Appendix~\ref{Apndx:ZW_SFWFL_ConvAnals_proof}. \end{IEEEproof} We draw the following observations from this result. \remark{ \textit{While an increase in the communication latency, $D$, will slow down the model training, Zero-Wait SFWFL converges in the order of $\mathcal{O}( \frac{1}{ k^{ \alpha - 1 } } )$, which is the same as those in conventional wireless FL systems. Additionally, since the communication is fully covered by local computing, the total run time can be reduced by a factor of $D$.
Such an improvement is particularly pronounced in mobile applications where communication delays are orders of magnitude higher than the on-device computing \cite{LanLeeZho:17}.} } \remark{ \textit{The proposed algorithm can be further accelerated by running the local SGD iterations in tandem with the momentum method \cite{YanCheQue:21JSTSP}. The momentum controlling factor should be adequately adjusted to achieve the best training performance. } } \subsection{Special Case} We can further reap the potential of the zero-wait learning method in Algorithm~\ref{Alg:ZW_SFWFL} for fast model training. Particularly, as depicted in Fig.~2($c$), the UEs collaboratively train the model on the basis of Zero-Wait SFWFL. Instead of uploading local gradients to the AP only when global parameters are received, the UEs send their accumulated gradients upon completing each round of local training. As such, the UEs can always replace a local gradient by the global average in the subsequent computing round, making the local parameters differ from the global one in only one round of computation. Consequently, the convergence rate can be obtained as follows. \begin{corollary} \textit{Under the setting in this section, if the learning rate is set as $\eta_k = \theta/k$ where $\theta > \frac{ \alpha - 1 }{ M L }$, then Algorithm-2 converges as: \begin{align} \label{equ:ConvRt_FPZW_SFWFL} &\mathbb{E}\big[ \Vert \bar{ \boldsymbol{w}}_k - \boldsymbol{w}^* \Vert_{\alpha}^\alpha \big] \nonumber\\ &\leq \! \frac{ 4 \theta^\alpha \Big( C + \frac{ \sigma^\alpha M^\alpha G^\alpha d^{ 1 - \frac{1}{\alpha} } }{ N^{\alpha/2} } + d^{ 1 - \frac{1}{\alpha} } 4^\alpha \lambda^\alpha G^\alpha M^{2 \alpha} \Big) }{ \big( \, \mu M L - \alpha + 1 \, \big) \cdot k^{ \alpha - 1 } } . \end{align} } \end{corollary} \begin{IEEEproof} The result follows by substituting $D=1$ in \eqref{equ:ConvRt_ZW_SFWFL}.
\end{IEEEproof} Following Corollary~1, we can see that the convergence rate does not depend on the communication latency $D$. This observation implies that by aggressively uploading local parameters in each computing round, Zero-Wait SFWFL can mitigate the straggler issue of FL, at the cost of additional communication expenditure. \section{ Simulation Results }\label{sec:NumResult} In this section, we conduct experimental evaluations of the proposed SFWFL algorithm. Particularly, we examine the performance of the proposed algorithm on two different tasks: ($i$) training a multi-layer perceptron (MLP) on the MNIST dataset, which contains hand-written digits \cite{LeCBotBen:98}, and ($ii$) learning a convolutional neural network (CNN) on the CIFAR-10 dataset \cite{KriHin:09}. The MLP consists of 2 hidden layers, each with 64 units and ReLU activations. We extract 60,000 data points from the MNIST dataset for training, where each UE is assigned an independent portion that contains 600 data samples. For the non-IID setting on the MNIST dataset, we use the 2-class configuration \cite{MaMMooRam:17AISTATS}, in which each UE is assigned images from at most 2 classes. The CIFAR-10 dataset consists of 60,000 colour images in 10 classes, with 6000 images per class. The CNN has two convolutional layers with a combination of max pooling, followed by two fully-connected layers, then a softmax output layer. We extract 50,000 data points from the CIFAR-10 dataset for training, where each agent is assigned an independent portion that contains 500 data samples. We allocate 10,000 data points for testing. Furthermore, we adopt Rayleigh fading to model the channel gain. Unless otherwise stated, the following parameters will be used: Tail index $\alpha = 1.6$, number of agents $N=100$, average channel gain $\mu = 1$. The experiments are implemented with PyTorch on a Tesla P100 GPU and averaged over 3 trials.
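For reproducibility, the 2-class non-IID assignment can be sketched as follows, using synthetic labels in place of the actual MNIST labels (the shard-based construction follows the spirit of \cite{MaMMooRam:17AISTATS}; the exact details of our implementation may differ):

```python
import numpy as np

rng = np.random.default_rng(4)
num_ues, num_classes = 100, 10

# toy labels standing in for the 60,000 MNIST training labels
labels = rng.integers(0, num_classes, size=60_000)

# 2-class non-IID split: sort indices by label, cut into 2*num_ues shards
# of 300 samples, and hand each UE two shards so that it sees (at most)
# two digit classes
order = np.argsort(labels, kind="stable")
shards = np.array_split(order, 2 * num_ues)
shard_ids = rng.permutation(len(shards))
ue_data = [np.concatenate([shards[i] for i in shard_ids[2 * n: 2 * n + 2]])
           for n in range(num_ues)]
```

Each UE thus holds 600 samples, matching the partition described above; a shard straddling a class boundary may occasionally give a UE slightly more than two classes.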
\subsection{Numerical Results} \begin{figure}[t!] \centering \subfigure[\label{fig:2a}]{\includegraphics[width=0.95\columnwidth]{Figures/new_with_server.eps}} ~ \subfigure[\label{fig:2b}]{\includegraphics[width=0.95\columnwidth]{Figures/new_cifar_with_server.eps}} \caption{ Comparison of convergence rates between SFWFL and a server-based FL: ($a$) training the MLP on the MNIST dataset and ($b$) training the CNN on the CIFAR-10 dataset. } \label{fig:ComprSFWFLwFL} \end{figure} \begin{figure}[t!] \centering \subfigure[\label{fig:3a}]{\includegraphics[width=0.95\columnwidth]{Figures/ServerFree_alpha.eps}} ~ \subfigure[\label{fig:3b}]{\includegraphics[width=0.95\columnwidth]{Figures/cifar_alpha.eps}} \caption{ Simulation results of the training loss under SFWFL: ($a$) training the MLP on the MNIST dataset and ($b$) training the CNN on the CIFAR-10 dataset. } \label{fig:AlphaEffc} \end{figure} Fig.~\ref{fig:ComprSFWFLwFL} compares the convergence performance between SFWFL and the conventional setting of FL based on an edge server. Specifically, a typical server-based FL has an architecture as in Fig.~\ref{fig:FL_Systems_comparison}($a$): In each communication round $k$, an edge server collects the gradients from a subset of UEs, denoted by $\mathcal{S}_k$, to improve the global model and feeds the update back to them. In this experiment, we set the total number of UEs to be $N=100$ and the tail index as $\alpha=2$, namely, the interference obeys a Gaussian distribution. We assume that the server-based FL employs digital communication (which involves encoding/decoding and modulation/demodulation processes) to transmit the parameters. Hence, the UEs' gradients can be received by the server without error in the conventional FL setting. We vary the size of $\mathcal{S}_k$ to reflect the constraint of communication resources of the system. From this figure, we observe that the convergence rates of SFWFL and server-based FL with full communication resources (i.e. 
$\mathcal{S}_k = 100$) closely match each other. As such, SFWFL attains the full power of server-based FL in the absence of a costly edge server. Another, perhaps more striking, message conveyed by Fig.~\ref{fig:ComprSFWFLwFL} is that, when there are insufficient communication resources, SFWFL that adopts analog transmissions (hence resulting in noisy gradients) can even outperform an FL that is based on digital communications (which provides error-free gradients) in terms of convergence rate. Since analog transmissions can be achieved by elementary communication components such as amplitude modulation and matched filtering, the conclusion from this observation seems to render futile all the efforts we have spent in developing better communication and signal processing technologies that enhance the quality of aggregated gradients. The following simulation result shows that this is not necessarily true. Fig.~\ref{fig:AlphaEffc} plots the training loss of SFWFL as a function of the communication rounds for a varying value of the tail index. The figure shows that ($a$) the training loss decays steadily along with the communication rounds, regardless of the heaviness of the tail in the interference distribution, and ($b$) the tail index $\alpha$ plays a critical role in the rate of convergence. Notably, an increase in the tail index leads to a significant speedup in the convergence rate, whereas the improvement is non-linear with respect to $\alpha$. These observations are in line with Remark~2. Therefore, interference is the dominant factor in the performance of SFWFL. In networks with mild interference, this algorithm enjoys a decent convergence rate. However, in the presence of strong interference, the convergence performance quickly deteriorates. \begin{figure}[t!] 
\centering \subfigure[\label{fig:4a}]{\includegraphics[width=0.95\columnwidth]{Figures/fading.eps}} ~ \subfigure[\label{fig:4b}]{\includegraphics[width=0.95\columnwidth]{Figures/new_cifar_fading.eps}} \caption{ Simulation results of the training loss under SFWFL: ($a$) training the MLP on the MNIST dataset and ($b$) training the CNN on the CIFAR-10 dataset. } \label{fig:FadingEffc} \end{figure} Because of the analog transmissions, the aggregated gradients in SFWFL are perturbed by random channel fading. We investigate such an effect in Fig.~\ref{fig:FadingEffc}. Particularly, the figure draws the convergence curves under different conditions of the channel fading. We observe that ($a$) a high variance in the channel fading increases the chance of encountering deep fades in the transmission, which impairs the model training process, and ($b$) unlike the effect of interference, variance in the channel fading has only a mild influence on the convergence rate. This observation coincides with the theoretical finding in Remark~3. Fig.~\ref{fig:PartNumEffc} evaluates the scaling effect of SFWFL, in which the curve of training loss versus communication rounds is depicted under different numbers of UEs in the network. We can see that the algorithm's convergence rate increases with respect to $N$, confirming the conclusion in Remark~4 that enlarging the number of UEs is beneficial for the system. The reason can be ascribed to two crucial facts: ($a$) as each UE owns an individual dataset, an increase in $N$ allows the aggregated gradient to ingest more data information in every round of global iteration, because all the UEs can concurrently access the radio channel and upload their locally trained parameters, and ($b$) more UEs participating in the analog transmission can reduce the impact of channel fading, as explained in Remark~4. 
Nevertheless, we shall also emphasize that such an effect is less significant than that of the tail index, because it only influences the multiplicative constant in the convergence rate. \begin{figure}[t!] \centering{} {\includegraphics[width=0.95\columnwidth]{Figures/ServerFree_scaling.eps}} \caption{ Simulation results of the training loss of the MLP on the MNIST dataset, under different numbers of UEs $N$. } \label{fig:PartNumEffc} \end{figure} Finally, we put the spotlight on the Zero-Wait SFWFL. Since the notion of global iteration refers to different time scales under the zero-wait and compute-and-wait versions of SFWFL, we define the speedup attained by restless computing as follows: \begin{align*} \text{Speedup} = \frac{K(M+DM+\tau_{\mathrm{L}})}{K(M+\tau_{\mathrm{G}})} \end{align*} where $\tau_{\mathrm{L}}$ and $\tau_{\mathrm{G}}$ represent the local and global aggregation times, respectively. Then, we summarize the accuracy and run time comparison under different scales of communication latency in Table~\ref{table:Comparison_ZW_CaW}. The results amply demonstrate that upon reaching the same accuracy, Zero-Wait SFWFL attains a substantial speedup in the high-latency regime. Such an improvement is mainly attributed to the facts that ($a$) Zero-Wait SFWFL maintains the same communication frequency as conventional SFWFL and ($b$) the local computation is pipelined with the global communication. \begin{table*}[t!] 
\centering \begin{tabular}{lcccccccc} \toprule \multirow{2}{*}{\bf{Algorithm}} & \multicolumn{2}{c}{\bf{Compute-and-Wait}} & \multicolumn{2}{c}{\bf{Zero Wait}} & \multicolumn{2}{c}{\bf{Zero Wait}} & \multicolumn{2}{c}{\bf{Zero Wait}} \\ & \multicolumn{2}{c}{$M=5$} & \multicolumn{2}{c}{$M=5,D=1$} & \multicolumn{2}{c}{$M=5,D=2$} & \multicolumn{2}{c}{$M=5,D=4$} \\ \cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7} \cmidrule(lr){8-9} & Accuracy & Speedup & Accuracy & Speedup & Accuracy & Speedup & Accuracy & Speedup \\ \midrule \addlinespace[0.12cm] MNIST IID & 85.9 & 1$\times$ & 85.6 & 1.9$\times$ & 85.1 & 2.9$\times$ & 85.0 & 4.7$\times$ \\ \addlinespace[0.12cm] MNIST non-IID & 75.7 & 1$\times$ & 75.5 & 1.8$\times$ & 74.7 & 2.8$\times$ & 75.4 & 4.6$\times$ \\ \addlinespace[0.12cm] CIFAR-10 IID & 76.1 & 1$\times$ & 76.2 & 2.0$\times$ & 73.9 & 2.9$\times$ & 74.4 & 4.8$\times$ \\ \bottomrule \addlinespace[0.08cm] \end{tabular} \caption{Performance comparison of Zero Wait and Compute-and-Wait} \label{table:Comparison_ZW_CaW} \end{table*} \subsection{Discussions} The main takeaways from this section are outlined as follows. \subsubsection{Federated edge learning is readily achievable, do not procrastinate}Speaking of FL, we often identify it as one of the \textit{future} technologies. We assume FL will only occur in wireless systems beyond 5G or near 6G, where edge computing resources may be abundantly available. The plain fact is that end-user devices, e.g., smartphones, tablets, or laptops, already have substantial processing power, while most of the network edge elements, e.g., WiFi hot spots, APs, or base stations, can only perform elementary signal processing functions. That said, this shall not discourage us from realizing a collaboratively intelligent system on the present wireless networks. The SFWFL developed in this paper demonstrates a new way to build large-scale FL on the currently available infrastructure. 
And surprisingly, the scheme attains a convergence performance comparable to an FL system availed with an edge server. \subsubsection{No free lunch is (still) the first principle}Although SFWFL removes the edge server from the overall system and can be implemented with low-cost hardware, its functionality is accomplished at the expense of ($a$) additional occupation of the UEs' local memory, as they need to store both the local model parameter and the accumulated gradient so as to replace the latter with the globally averaged one and remedy the former. Such an expenditure in storage becomes more critical in the Zero-Wait version of SFWFL, since the UEs need to cache multiple accumulated gradients along the local computing process; and ($b$) the potential of dreadful degradation in the convergence rate, as the tail index of the interference distribution directly affects the exponent of the convergence rate. The model training process may be severely slowed down in the presence of strong electromagnetic interference. \subsubsection{Elephant in the room}To boost communication efficiency in FL, a plethora of techniques have been developed, ranging from compressing the local parameters and pruning the computing architecture to developing better UE scheduling policies. The common goal of these methods is to reduce the parameter transmission time such that UEs do not need to wait for too long before they can receive the improved global model and perform a new round of local training. The proposed Zero-Wait SFWFL points out that the on-device computing can be executed in parallel with the parameter updating. Therefore, the bottleneck of wireless FL lies not in the communication but in the algorithm design. We expect that disclosing this fact can promote further research pursuits to develop better algorithms that enhance the FL system. 
\section{ Conclusion }\label{sec:Conclusion} In this paper, we established the SFWFL architecture, which exploits the superposition property of analog waveforms to achieve FL training in a wireless network without using an edge server. Unlike the fully decentralized version of FL, we do not abandon the star connection. Instead, we lean on such a centralized structure for fast and scalable learning in a network with massively distributed UEs. The proposed SFWFL not only boasts a very high communication efficiency, but also can be implemented with low-cost hardware and has a built-in privacy enhancement. We also improved the training algorithm to parallelize the UEs' local computing with the global parameter transmissions. In order to evaluate the developed framework, we derived the convergence rates for both SFWFL and its improved version, coined Zero-Wait SFWFL. The analytical results revealed that SFWFL can attain a convergence rate similar to, or even better than, that of a server-based FL if the interference has a Gaussian distribution. They also showed that the convergence performance of SFWFL is sensitive to the heavy-tailedness of the interference distribution, as the convergence rate deteriorates quickly when the interference tail index decreases. Yet, running local computation in concurrence with global communications is always beneficial to reducing the system run time, and the gain is particularly pronounced under high communication latency. These theoretical findings have been validated through extensive simulations. The architecture, algorithm, and analysis developed in this paper have set up a new distributed learning paradigm in which much future research can nest and grow. For instance, one can explore the effects of adopting multiple antennas at the AP, which can be used to manoeuvre power boosting and/or interference cancellation \cite{YanGerQue:17}, on the performance of SFWFL. 
Investigating the impacts of UEs' mobility on the system's performance is also a concrete direction \cite{FenYanHu:21TWC}. Another future extension of the present study is to reduce the sensitivity of SFWFL to heavy-tailed interference via, e.g., the gradient clipping schemes \cite{GorDanGas:20}. \begin{appendix} \subsection{Proof of Theorem~\ref{thm:ConvAnals}} \label{Apndx:ConvAnals_proof} For ease of exposition, we denote $\Delta_k = \boldsymbol{w}_k - \boldsymbol{w}^*$. According to \eqref{equ:ZW_Intliz}, we have \begin{align} \boldsymbol{w}_{k+1} = \boldsymbol{w}_k - \eta_k \Big( \frac{1}{N} \sum_{ n = 1 }^N h_{n,k} \bar{ \boldsymbol{g} }^n_k + \boldsymbol{\xi}_k \Big). \end{align} Then, using Lemma~1, we can expand the expectation on the $\alpha$-moment of $\Delta_{k+1}$ via the following: \begin{align} \label{equ:Delta_Bound} &\mathbb{E} \Big[ \big\Vert \Delta_{k+1} \big\Vert_\alpha^\alpha \Big] = \mathbb{E} \Big[ \Big\Vert \Delta_{k} - \eta_k \big( \frac{1}{N} \sum_{ n = 1 }^N h_{n,k} \bar{ \boldsymbol{g} }^n_k + \boldsymbol{\xi}_k \big) \Big\Vert_\alpha^\alpha \Big] \nonumber\\ &\leq \underbrace{ \mathbb{E} \Big[ \big\Vert \Delta_k - \eta_k M \nabla f( \boldsymbol{w}_k ) \big\Vert_\alpha^\alpha \Big] }_{Q_1} \nonumber\\ &+ 4 \eta_k^\alpha \underbrace{ \mathbb{E} \Big[ \big\Vert M \nabla f( \boldsymbol{w}_k ) - \frac{1}{N} \sum_{ n = 1 }^N \bar{ \boldsymbol{g} }^n_k \big\Vert_\alpha^\alpha \Big] }_{Q_2} \nonumber\\ &+ 4 \eta_k^\alpha \underbrace{ \mathbb{E} \Big[ \big\Vert \frac{1}{N} \sum_{ n = 1 }^N \big( h_{n,k} - 1 \big) \bar{ \boldsymbol{g} }^n_k \big\Vert_\alpha^\alpha \Big] }_{Q_3} +\, 4 C \eta_k^\alpha. 
\end{align} By leveraging Lemma~2, we bound $Q_1$ as follows: \begin{align} \label{equ:Q1_bound} Q_1 &\stackrel{(a)}{=} \mathbb{E} \Big[ \big\Vert \Delta_k - \eta_k M \big(\, \nabla f ( \boldsymbol{w}_k ) - \nabla f ( \boldsymbol{w}^* ) \,\big) \big\Vert_\alpha^\alpha \Big] \nonumber\\ &= \mathbb{E} \Big[ \big\Vert \Delta_k - \eta_k M \nabla^2 f ( \boldsymbol{w}^{\sharp}_k ) \Delta_k \big\Vert_\alpha^\alpha \Big] \nonumber\\ & \leq \Big\Vert \boldsymbol{I}_d - \eta_k M \nabla^2 f ( \boldsymbol{w}^{\sharp}_k ) \Big\Vert_\alpha^\alpha \times \mathbb{E} \Big[ \big\Vert \Delta_k \big\Vert_\alpha^\alpha \Big] \nonumber\\ & \stackrel{(b)}{=} ( 1 - \eta_k M L ) \times \mathbb{E} \Big[ \big\Vert \Delta_k \big\Vert_\alpha^\alpha \Big] \end{align} where ($a$) holds because $\nabla f ( \boldsymbol{w}^* ) = 0$ and ($b$) follows from the bound on the Hessian of $f$. Next, we establish the following bound for $Q_2$: \begin{align} \label{equ:Q2_bound} & Q_2 = \mathbb{E} \Big[ \big\Vert \frac{1}{N} \sum_{n=1}^N \sum_{ i=1 }^M \big( \nabla f_n ( \boldsymbol{w}^n_{k,i}; \gamma_i ) - \nabla f_n ( \boldsymbol{w}_k ) \big) \big\Vert_\alpha^\alpha \Big] \nonumber\\ &\stackrel{(a)}{ \leq } \! d^{ 1 - \frac{1}{\alpha} } \! \cdot \! \mathbb{E} \Big[ \Big( \big\Vert \frac{1}{N} \! \sum_{n=1}^N \sum_{ i=1 }^M \! \big( \nabla f_n ( \boldsymbol{w}^n_{k,i}; \gamma_i ) -\! \nabla f_n ( \boldsymbol{w}_k ) \big) \big\Vert_2^2 \Big)^{ \!\! \frac{ \alpha }{ 2 } } \Big] \nonumber\\ &\stackrel{(b)}{ \leq } \! d^{ 1 - \frac{1}{\alpha} } \! \cdot \! \Big( \frac{1}{N} \! \sum_{n=1}^N \mathbb{E} \Big[ \big\Vert \sum_{ i=1 }^M \! \big( \nabla f_n ( \boldsymbol{w}^n_{k,i}; \gamma_i ) -\! \nabla f_n ( \boldsymbol{w}_k ) \big) \big\Vert_2^2 \Big] \Big)^{ \! \frac{ \alpha }{ 2 } } \nonumber\\ &\stackrel{(c)}{ \leq } \! d^{ 1 - \frac{1}{\alpha} } \! \cdot \! \Big( \frac{ \lambda^2 M }{N} \! \sum_{n=1}^N \sum_{ i=1 }^M \mathbb{E} \Big[ \big\Vert \boldsymbol{w}^n_{k,i} - \boldsymbol{w}_k \big\Vert_2^2 \Big] \Big)^{ \! 
\frac{ \alpha }{ 2 } } \nonumber\\ &= d^{ 1 - \frac{1}{\alpha} } \! \cdot \! \Big( \frac{ \lambda^2 M }{N} \! \sum_{n=1}^N \sum_{ i=1 }^M \mathbb{E} \Big[ \big\Vert \eta_k \sum_{ j = 1 }^i \nabla f_{n} ( \boldsymbol{w}^n_{k,j}; \gamma_j ) \big\Vert_2^2 \Big] \Big)^{ \! \frac{ \alpha }{ 2 } } \nonumber\\ & \leq d^{ 1 - \frac{1}{\alpha} } \lambda^\alpha M^{ 2 \alpha } G^\alpha \end{align} in which ($a$) and ($b$) follow from H\"{o}lder's inequality and Jensen's inequality, respectively, and ($c$) is owing to the smoothness property of $f_n(\cdot)$ as per Assumption~2. In a similar vein, $Q_3$ can be bounded as: \begin{align} \label{equ:Q3_bound} & Q_3 \leq \frac{ d^{ 1 - \frac{1}{\alpha} } }{ N^{ \alpha } } \cdot \mathbb{E} \Big[ \Big( \big\Vert \sum_{n=1}^N \big( h_{ n,k } - 1 \big) \bar{ \boldsymbol{g} }^n_k \big\Vert_2^2 \Big)^{ \!\! \frac{ \alpha }{ 2 } } \Big] \nonumber\\ & \leq \frac{ \sigma^\alpha d^{ 1 - \frac{1}{\alpha} } }{ N^{ \alpha } } \cdot \Big( \sum_{n=1}^N \mathbb{E} \Big[ \big\Vert \sum_{ i = 1 }^M \nabla f_{n} ( \boldsymbol{w}^n_{k,i}; \gamma_i ) \big\Vert_2^2 \Big] \Big)^{ \! \frac{ \alpha }{ 2 } } \nonumber\\ & \leq \frac{ \sigma^\alpha M^\alpha G^\alpha d^{ 1 - \frac{1}{\alpha} } }{ N^{ \alpha / 2 } }. \end{align} Finally, we substitute \eqref{equ:Q1_bound}, \eqref{equ:Q2_bound}, and \eqref{equ:Q3_bound} into \eqref{equ:Delta_Bound}, which results in: \begin{align} \label{equ:DeltatFnalBnd} &\mathbb{E} \Big[ \big\Vert \Delta_{k+1} \big\Vert_\alpha^\alpha \Big] \leq \left( 1 - \eta_k M L \right) \times \mathbb{E} \Big[ \big\Vert \Delta_k \big\Vert_\alpha^\alpha \Big] \nonumber\\ &\qquad \qquad + 4 \eta_k^\alpha \Big( C + d^{ 1 - \frac{1}{\alpha} } \lambda^\alpha G^\alpha M^{ 2 \alpha } + \frac{ \sigma^\alpha G^\alpha M^\alpha d^{ 1 - \frac{1}{\alpha} } }{ N^{\alpha/2} } \Big). \end{align} The proof is completed by invoking Lemma~3 in \cite{YanCheQue:21JSTSP} to the above inequality. 
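The recursion \eqref{equ:DeltatFnalBnd} with $\eta_k = \theta/k$ decays at rate $\mathcal{O}(k^{1-\alpha})$, which is the behavior claimed in Theorem~\ref{thm:ConvAnals}. As a quick numerical sanity check, one can iterate the bound as an equality; all constants below are purely illustrative, not taken from the experiments.

```python
def scaled_tail(alpha=1.6, theta=2.0, M=5, L=0.5, B=1.0, K=100_000):
    """Iterate a_{k+1} = (1 - theta*M*L/k) * a_k + 4*B*(theta/k)**alpha and
    return K**(alpha-1) * a_K, which should approach the constant
    4*B*theta**alpha / (theta*M*L - alpha + 1) when theta*M*L > alpha - 1."""
    a = 1.0
    for k in range(1, K):
        step = theta / k
        # Clamp the first few rounds where step*M*L exceeds 1.
        factor = max(0.0, 1.0 - step * M * L)
        a = factor * a + 4.0 * B * step ** alpha
    return K ** (alpha - 1.0) * a
```

With the constants above, the rescaled iterate settles near $4 B \theta^\alpha / (\theta M L - \alpha + 1)$, mirroring the multiplier structure of the convergence bound.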
\subsection{Proof of Lemma~\ref{lma:PrmtUnivBnd} } \label{Apndx:PrmtUnivBnd_proof} Following the update process of local parameters in Algorithm~2, we have \begin{align} \boldsymbol{w}^n_k = \bar{ \boldsymbol{w} }_{ k-D } &- \eta_{ k - D } \, \bar{ \boldsymbol{g} }^n_{ k - D } \nonumber\\ &- \eta_{ k - D + 1 } \, \bar{ \boldsymbol{g} }^n_{ k - D + 1 } - \cdots - \eta_{ k - 1 } \, \bar{ \boldsymbol{g} }^n_{ k - 1 }. \end{align} For any two UEs $m$ and $n$, the following holds: \begin{align} &\mathbb{E} \big[ \Vert \boldsymbol{w}^n_k - \boldsymbol{w}^m_k \Vert^2_2 \big] \nonumber\\ &= \mathbb{E} \big[ \Vert \eta_{ k - D } ( \bar{ \boldsymbol{g} }^n_{ k - D } - \bar{ \boldsymbol{g} }^m_{ k - D }) + \cdots + \eta_{ k - 1 } ( \bar{ \boldsymbol{g} }^n_{ k - 1 } - \bar{ \boldsymbol{g} }^m_{ k - 1 }) \Vert^2_2 \big] \nonumber\\ &\stackrel{(a)}{ \leq } 4 \eta^2_{ k - D } M^2 D^2 G^2 \end{align} where ($a$) follows from the fact that the local parameters of UE $m$ and UE $n$ differ by at most $DM$ SGD iterations. The proof then follows by using Jensen's inequality in the following way: \begin{align} & \mathbb{E} \big[ \Vert \boldsymbol{w}^n_k - \bar{\boldsymbol{w}}_k \Vert^2_2 \big] = \mathbb{E} \big[ \Vert \boldsymbol{w}^n_k - \frac{1}{N} \sum_{ m=1 }^N \boldsymbol{w}^m_k \Vert^2_2 \big] \nonumber\\ &= \mathbb{E} \big[ \Vert \frac{1}{N} \sum_{ m=1 }^N \big( \boldsymbol{w}^n_k - \boldsymbol{w}^m_k \big) \Vert^2_2 \big] \nonumber\\ & \leq \frac{1}{N} \sum_{ m=1 }^N \mathbb{E} \big[ \Vert \big( \boldsymbol{w}^n_k - \boldsymbol{w}^m_k \big) \Vert^2_2 \big] \leq 4 \eta^2_{ k - D } M^2 D^2 G^2. \end{align} \subsection{Proof of Theorem~\ref{thm:ZW_SFWFL_ConvAnals}} \label{Apndx:ZW_SFWFL_ConvAnals_proof} Similar to the proof of Theorem~1, we denote by $\bar{\Delta}_k = \bar{ \boldsymbol{w}}_k - \boldsymbol{w}^*$. 
The model training procedure in Algorithm~2 stipulates the following relationship: \begin{align} \bar{\Delta}_{ k + 1 } = \bar{\Delta}_k - \frac{ \eta_k }{ N } \sum_{ n=1 }^N \bar{ \boldsymbol{g} }^n_k + \eta_{ k - D } \Big( \frac{ 1 }{ N } \sum_{ n = 1 }^N \bar{ \boldsymbol{g} }^n_{ k - D } - \tilde{ \boldsymbol{g} }_{ k - D } \Big). \end{align} Taking similar steps as in \eqref{equ:Delta_Bound}, we arrive at the following: \begin{align} \label{equ:Delta_Bar_Bound} &\mathbb{E} \Big[ \big\Vert \bar{ \Delta }_{k+1} \big\Vert_\alpha^\alpha \Big] \leq \underbrace{ \mathbb{E} \Big[ \big\Vert \bar{ \Delta }_k - \eta_k M \nabla f( \bar{ \boldsymbol{w} }_k ) \big\Vert_\alpha^\alpha \Big] }_{Q_4} \nonumber\\ &+ 4 \eta_{ k - D }^\alpha \underbrace{ \mathbb{E} \Big[ \big\Vert \frac{1}{N} \sum_{ n = 1 }^N \big( h_{n,k - D} - 1 \big) \bar{ \boldsymbol{g} }^n_{ k - D } \big\Vert_\alpha^\alpha \Big] }_{Q_5} \nonumber\\ &+ 4 \eta_k^\alpha \underbrace{ \mathbb{E} \Big[ \big\Vert \frac{1}{N} \sum_{ n = 1 }^N \big(\, \bar{ \boldsymbol{g} }^n_k - M \nabla f( \bar{ \boldsymbol{w} }_k ) \big) \big\Vert_\alpha^\alpha \Big] }_{Q_6} +\, 4 C \eta_{ k - D }^\alpha. \end{align} Using the results in Theorem~1, we have \begin{align} Q_4 \leq ( 1 - \eta_k M L ) \times \mathbb{E} \Big[ \big\Vert \bar{\Delta}_k \big\Vert_\alpha^\alpha \Big] \end{align} and \begin{align} Q_5 \leq \frac{ \sigma^\alpha M^\alpha G^\alpha d^{ 1 - \frac{1}{\alpha} } }{ N^{ \alpha / 2 } }. \end{align} The quantity $Q_6$ can be bounded as follows: \begin{align} & Q_6 = \mathbb{E} \Big[ \big\Vert \frac{1}{N} \sum_{n=1}^N \sum_{ i=1 }^M \big( \nabla f_n ( \boldsymbol{w}^n_{k,i}; \gamma_i ) - \nabla f_n ( \bar{ \boldsymbol{w} }_k ) \big) \big\Vert_\alpha^\alpha \Big] \nonumber\\ & \leq d^{ 1 - \frac{1}{\alpha} } \! \cdot \! \Big( \frac{1}{N} \! \sum_{n=1}^N \mathbb{E} \Big[ \big\Vert \sum_{ i=1 }^M \! \big( \nabla f_n ( \boldsymbol{w}^n_{k,i}; \gamma_i ) -\! \nabla f_n ( \bar{ \boldsymbol{w} }_k ) \big) \big\Vert_2^2 \Big] \Big)^{ \! 
\frac{ \alpha }{ 2 } } \nonumber\\ &\stackrel{(a)}{ \leq } \! d^{ 1 - \frac{1}{\alpha} } \! \cdot \! \Big( \frac{ \lambda^2 M }{N} \! \sum_{n=1}^N \sum_{ i=1 }^M \mathbb{E} \Big[ \big\Vert \boldsymbol{w}^n_{k,i} - \bar{ \boldsymbol{w} }_k \big\Vert_2^2 \Big] \Big)^{ \! \frac{ \alpha }{ 2 } } \nonumber\\ & \leq d^{ 1 - \frac{1}{\alpha} } \lambda^\alpha 2^\alpha ( 1 + D )^\alpha G^\alpha M^{ 2 \alpha } \end{align} where ($a$) results from Lemma~3. Putting the above bounds into \eqref{equ:Delta_Bar_Bound}, we obtain the following: \begin{align} \label{equ:BarDeltatFnalBnd} &\mathbb{E} \Big[ \big\Vert \bar{ \Delta }_{k+1} \big\Vert_\alpha^\alpha \Big] \leq \left( 1 - \eta_k M L \right) \times \mathbb{E} \Big[ \big\Vert \bar{ \Delta }_k \big\Vert_\alpha^\alpha \Big] + 4 \eta_{k-D}^\alpha C \nonumber\\ & + 4 \eta_{ k }^\alpha d^{ 1 - \frac{1}{\alpha} } \lambda^\alpha 2^\alpha ( 1 \!+\! D )^\alpha G^\alpha \! M^{ 2 \alpha } \!+\! 4 \eta_{k-D}^\alpha \frac{ \sigma^\alpha G^\alpha \! M^\alpha d^{ 1 - \frac{1}{\alpha} } }{ N^{\alpha/2} }. \end{align} By assigning $\eta_k = \theta/k$, we have $\eta_{ k - D } \approx \eta_k$ for $k \gg 1$. Then, Theorem~2 follows by applying Lemma~3 of \cite{YanCheQue:21JSTSP} to the above. \end{appendix} \bibliographystyle{IEEEtran} \bibliography{bib/StringDefinitions,bib/IEEEabrv,bib/howard_SFWFL} \end{document}
TITLE: Joint probability of dependent continuous random variables QUESTION [1 upvotes]: I was reading through wikipedia about joint probability. I found out that for all random variables $$ f_{Y\mid X = x}(y) = \frac{f_{X,Y}(x,y)}{f_X(x)} $$ However, nowhere was I able to find how to algebraically evaluate $ f_{Y\mid X = x}(y) $ or $ f_{X,Y}(x,y) $ except in terms of each other. I tried deducing $ f_{X,Y}(x,y) $ myself where $ Y $ is a linear combination of the random variables $X$ and $Z$. This is where I got. $$ Y = X + Z $$ $$ f_{Y\mid X = x}(y) = \frac{f_{X,Y}(x,y)}{f_X(x)} $$ $$ f_{X,Y}(x,y) = P(X = x \cap Y = y) $$ $$ f_{X,Y}(x,y) = P(X = x \cap X + Z = y) $$ $$ f_{X,Y}(x,y) = P(X = x \cap Z + x = y) $$ $$ f_{X,Y}(x,y) = P(X = x \cap Z = y - x) $$ $$ f_{X,Y}(x,y) = f_{X,Z}(x,y - x) $$ $$ f_{X,Y}(x,y) = f_X(x) f_Z(y - x) $$ $$ f_{Y\mid X = x}(y) = f_Z(y - x) $$ It seems to make sense to me, although I'm most uncertain about the substitution of $X$ for $x$. Is this allowed between the arguments of the intersection, or is there something that makes it different because $X = x$ and $Y = y$ are events rather than equations? Thanks for your time. REPLY [1 votes]: Probability densities are not probability measures, so you cannot equate them. $$~f_{Y,X}(y,x)\not=P(Y=y\cap X=x)~$$ However, the intuition is reasonably correct and you can manipulate them in an analogous manner, which is not always quite straightforward: we use a change-of-variables transformation based on the chain rule of calculus. 
When $z=g(y,x)$ this is: $$\begin{align}f_{Y,X}(y,x) = f_{Z,X}(g(y,x),x)~{\Big\lvert\dfrac{\partial g(y,x)}{\partial y}\Big\rvert} \end{align}$$ In this case $g(y,x)=y-x$ so it is simply: $$f_{Y,X}(y,x) ~=~ f_{Z,X}(y-x,x)$$ However when, for example, $Y=2Z+X$ , then we use $g(y,x)=\tfrac {y-x}2$ so $$f_{Y,X}(y,x) ~=~ \tfrac 12 f_{Z,X}(\tfrac {y-x}2,x)$$ [This should make sense when you realise that if continuous random variable $Y$ is spread twice as far as $Z$ (for any $X$), then its density will be halved.]
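To see both claims numerically, here is a small Monte Carlo check: it verifies $f_{Y\mid X=x}(y) = f_Z(y-x)$ for $Y = X + Z$ by conditioning on samples with $X$ near $1$, and verifies the Jacobian factor $\tfrac12$ for $Y = 2Z + X$ through the implied marginal $Y \sim N(0,5)$. The Gaussian choices for $X$ and $Z$ and all constants are illustrative assumptions.

```python
import math
import random

def normal_pdf(t, var=1.0):
    return math.exp(-t * t / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def density_near(samples, y, h):
    """Crude density estimate: fraction of samples per unit length in [y-h, y+h]."""
    hits = sum(1 for s in samples if y - h <= s <= y + h)
    return hits / (2.0 * h * len(samples))

rng = random.Random(1)
n = 200_000
xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
zs = [rng.gauss(0.0, 1.0) for _ in range(n)]

# Claim from the question: for Y = X + Z, f_{Y|X=x}(y) = f_Z(y - x).
# Check at x = 1, y = 1 by keeping only samples with X near 1.
ys = [x + z for x, z in zip(xs, zs)]
near_x1 = [y for x, y in zip(xs, ys) if abs(x - 1.0) < 0.05]
lhs = density_near(near_x1, 1.0, h=0.2)
rhs = normal_pdf(1.0 - 1.0)   # f_Z(y - x) = f_Z(0)

# Jacobian claim: for Y = 2Z + X, f_{Y,X}(y,x) = (1/2) f_Z((y-x)/2) f_X(x),
# which integrates to Y ~ N(0, 5); check the density of Y at 0.
ys2 = [2.0 * z + x for x, z in zip(xs, zs)]
marg = density_near(ys2, 0.0, h=0.1)
# lhs ≈ rhs and marg ≈ density of N(0,5) at 0, up to Monte Carlo error.
```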
Journaling: My family matters most to me. I identify myself as mother and wife. I am happiest when I am with those I love. I enjoy thunderstorms- especially late at night. They help me fall asleep. I would live in my pajamas if I could. I adore my nieces and nephews; they always make me laugh. I struggle with my faith and have yet to find a church that makes me feel as if I belong. I drink too much coffee. I look forward to Thursday nights- family tv night. I love to create. Someday, I want to be able to write again. (2010) Credits: Katie Pertiet- • Drop Shadow Styles • Classic Cardstock Cleansing • Collection Manuscrite papers • Carded Stacked Frame • Chipboard Alpha. Black • Lil Bit Tags • Long Strips Alpha (Marker) • Tied Fasteners Anna Aspnes- • Torn and Tattered Scallop • Foto Blendz Pattie Knox- • Shimmer Bits • Have a Heart • Pin-Its • Absolute Acrylic Clocks • Ripped Flowers Mindy Terasawa- • Pear Crisp Paper Emily Powers- • Word Bits • Puddle Jumper Kit
\begin{document} \begin{abstract} In this paper we study a key example of a Hermitian symmetric space and a natural associated double flag variety, namely for the real symplectic group $G$ and the symmetric subgroup $L$, the Levi part of the Siegel parabolic $P_S$. We give a detailed treatment of the case of the maximal parabolic subgroups $Q$ of $L$ corresponding to Grassmannians and the product variety of $G/P_S$ and $L/Q$; in particular we classify the $L$-orbits here, and find natural explicit integral transforms between degenerate principal series of $L$ and $G$. \end{abstract} \maketitle \section*{Introduction} The geometry of flag varieties over the complex numbers, and in particular double flag varieties, have been much studied in recent years (see, e.g., \cite{Fresse.Nishiyama.2016}, \cite{Henderson.Trapa.2012}, \cite{Travkin.2009}, \cite{FGT.2009} etc.). In this paper we focus on a particular case of a \emph{real} double flag variety with the purpose of understanding in detail (1) the orbit structure under the natural action of the smaller reductive group, and (2) the construction of natural integral transforms between degenerate principal series representations, equivariant for the same group. Even though aspects of (1) are known from general theory (e.g., \cite{Kobayashi.Matsuki.2014}, \cite{Kobayashi.T.Oshima.2013} and references therein), the cases we treat here provide new and explicit information; and for (2) we also find new phenomena, using the theory of prehomogeneous vector spaces and relative invariants. In particular the Hermitian case we study has properties complementary to other well-known cases of (2). For this, we refer the readers to \cite{Kobayashi.Speh.2015}, \cite{Moellers.Orsted.Oshima.2016}, \cite{Kobayashi.Orsted.Pevzner.2011}, \cite{CKOP.2011}, \cite{Genkai.2009}, \cite{Said.Koufany.Genkai.2014} among others. 
Thus in this paper we study a key example of a Hermitian symmetric space and a natural associated double flag variety, namely for the real symplectic group $G$ and the symmetric subgroup $L$, the Levi part of the Siegel parabolic $P_S$. We give a detailed treatment of the case of the maximal parabolic subgroup $Q$ of $L$ corresponding to Grassmannians and the product variety of $G/P_S$ and $L/Q$; in particular we classify the open $L$-orbits here, and find natural explicit integral transforms between degenerate principal series of $L$ and $G$. We realize these representations in their natural Hilbert spaces and determine when the integral transforms are bounded operators. As an application we also obtain information about the occurrence of finite-dimensional representations of $L$ in both of these generalized principal series representations of $G$ resp. $L$. It follows from general principles that our integral transforms, depending on two complex parameters in certain half-spaces, may be meromorphically continued to the whole parameter space; and that the residues will provide kernel operators (of Schwartz kernel type, possibly even differential operators), also intertwining (i.e., $L$-equivariant). For general background on integral operators depending meromorphically on parameters, and for equivariant integral operators -- introduced by T.~Kobayashi as symmetry-breaking operators -- as we study here, see \cite{Kobayashi.Speh.2015}, \cite{Moellers.Orsted.Oshima.2016} and \cite{Kashiwara.Kawai.1979}. However, we shall not pursue this aspect here; we leave it as a subject for future work. It will be clear that the structure of our example is such that other Hermitian groups, in particular of tube type, will be amenable to a similar analysis; thus we content ourselves here with giving all details for the symplectic group only. \bigskip Let us fix notations and explain the content of this paper more explicitly. So let $ G = \Sp_{2n}(\R) $ be a real symplectic group. 
We denote a symplectic vector space of dimension $ 2 n $ by $ V = \R^{2n} $ with a natural symplectic form defined by $ \langle u, v \rangle = \transpose{u} J_n v $, where $ J_n = \begin{pmatrix} 0 & - 1_n \\ 1_n & 0 \end{pmatrix} $. Thus, our $ G $ is identified with $ \Sp(V) $. Let $ V^+ = \spanr{ e_1, e_2, \dots, e_n } $ be the subspace spanned by the first $ n $ fundamental basis vectors, which is a Lagrangian subspace of $ V $. Similarly, we put $ V^- = \spanr{ e_{n + 1}, e_{n + 2}, \dots , e_{2 n} } $, a complementary Lagrangian subspace to $ V^+ $, and we have a complete polarization $ V = V^+ \oplus V^- $. The Lagrangians $ V^+ $ and $ V^- $ are dual to each other by the symplectic form, so that we can and often do identify $ V^- = (V^+)^{\ast} $. Let $ P_S = \Stab_G(V^+) = \{ g \in G \mid g V^+ = V^+ \} $ be the stabilizer of the Lagrangian subspace $ V^+ $. Then $ P_S $ is a maximal \psg of $ G $ with Levi decomposition $ P_S = L \ltimes N $, where $ L = \Stab_G(V^+) \cap \Stab_G(V^-) $, the stabilizer of the polarization, and $ N $ is the unipotent radical of $ P_S $. We call $ P_S $ a Siegel \psg[]. Since $ G = \Sp(V) $ acts on Lagrangian subspaces transitively, $ \Lambda := G/P_S $ is the collection of all Lagrangian subspaces in $ V $. We call this space a Lagrangian flag variety and also denote it by $ \LGrass(\R^{2n}) $. The Levi subgroup $ L $ of $ P_S $ is explicitly given by \begin{equation*} L = \Bigl\{ \begin{pmatrix} a & 0 \\ 0 & \transpose{a}^{-1} \end{pmatrix} \Bigm| a \in \GL_n(\R) \Bigr\} \simeq \GL_n(\R) , \end{equation*} and we consider it to be $ \GL(V^+) $ which acts on $ V^- = (V^+)^{\ast} $ in the contragredient manner. The unipotent radical $ N $ of $ P_S $ is realized in the matrix form as \begin{equation*} N = \Bigl\{ \begin{pmatrix} 1 & z \\ 0 & 1 \end{pmatrix} \Bigm| z \in \Sym_n(\R) \Bigr\} \simeq \Sym_n(\R) \end{equation*} via the exponential map. 
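As a purely illustrative numerical sanity check (not part of the proofs), one can verify with a few random matrices that the block forms above satisfy the symplectic condition $ \transpose{g} J_n g = J_n $, and that $ \begin{pmatrix} a & b \\ 0 & \transpose{a}^{-1} \end{pmatrix} $ is symplectic precisely when $ a^{-1} b $ is symmetric; the matrix sizes and random choices below are arbitrary.

```python
import numpy as np

def J(n):
    """The matrix J_n = [[0, -I_n], [I_n, 0]] defining the symplectic form."""
    I, Z = np.eye(n), np.zeros((n, n))
    return np.block([[Z, -I], [I, Z]])

def is_symplectic(g, n):
    return np.allclose(g.T @ J(n) @ g, J(n))

rng = np.random.default_rng(0)
n = 4
a = rng.standard_normal((n, n)) + n * np.eye(n)   # generic invertible a
s = rng.standard_normal((n, n))                   # generic (non-symmetric) matrix
z = (s + s.T) / 2                                 # symmetric matrix
Zb = np.zeros((n, n))

# Levi part L: block-diagonal diag(a, a^{-T}).
g_levi = np.block([[a, Zb], [Zb, np.linalg.inv(a).T]])

# Unipotent radical N: [[I, z], [0, I]] with z symmetric.
g_unip = np.block([[np.eye(n), z], [Zb, np.eye(n)]])

# Siegel parabolic: [[a, b], [0, a^{-T}]] with a^{-1} b = z symmetric.
g_good = np.block([[a, a @ z], [Zb, np.linalg.inv(a).T]])

# Same block shape but a^{-1} b = s non-symmetric: not symplectic.
g_bad = np.block([[a, a @ s], [Zb, np.linalg.inv(a).T]])
```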
Note that $ \begin{pmatrix} a & b \\ 0 & \transpose{a}^{-1} \end{pmatrix} \in P_S $ if and only if $ a \transpose{b} \in \Sym_n(\R) $, which is in turn equivalent to $ a^{-1} b \in \Sym_n(\R) $. Take a maximal \psg $ Q $ in $ L = \GL(V^+) $ which stabilizes a $ d $-dimensional subspace $ U \subset V^+ $. Then $ \Xi_d := L/Q = \Grass_d(V^+) = \Grass_d(\R^n) $ is the Grassmannian of $ d $-dimensional subspaces. Note that, in the standard realization, \begin{equation*} Q = P_{(d, n - d)}^{\GL} = \Bigl\{ \begin{pmatrix} \alpha & \xi \\ 0 & \beta \end{pmatrix} \Bigm| \alpha \in \GL_d(\R), \beta \in \GL_{n -d}(\R), \xi \in \Mat_{d, n -d}(\R) \Bigr\} . \end{equation*} Now, our main concern is the double flag variety $ X = \Lambda \times \Xi_d = G/P_S \times L/Q $ on which $ L = \GL_n(\R) $ acts diagonally. We are strongly interested in the orbit structure of $ X $ under the action of $ L $ and its applications to representation theory. \begin{goal} We will consider the following problems. \begin{thmenumerate} \item To prove that there are finitely many $ L $-orbits on the double flag variety $ X = \Lambda \times \Xi_d $. We will give a complete classification of the open orbits, and a recursive strategy to determine the whole structure of $ L $-orbits on $ X $. See Theorems~\ref{theorem:finiteness-L-orbits} and \ref{theorem:classification-open-orbits-dfv}. \item To construct relative invariants on each open orbit. We will use them to define integral transforms between degenerate principal series representations of $ L $ and those of $ G $. For this, see \S~\ref{section:intertwiners}, especially Theorems~\ref{thm:conv-integral-operator-P} and \ref{theorem:intertwiner-Q-to-P}. \end{thmenumerate} \end{goal} Here we will make a short remark on the double flag varieties over the \emph{complex} number field (or, more correctly, over an algebraically closed field of characteristic zero). 
Let us complexify everything which appears in the setting above, so that $ \GC = \Sp_{2n}(\C) $ and $ \LC \simeq \GL_n(\C) $. The complexifications of the parabolics are $ \PSC $, the stabilizer of a Lagrangian subspace in the symplectic vector space $ \C^{2n} $, and $ \QC $, the stabilizer of a $ d $-dimensional vector space in $ \C^n $. Then it is known that the double flag variety $ \XC = \GC/\PSC \times \LC/ \QC $ has finitely many $ \LC $-orbits, i.e., $ \# \QC \backslash \GC / \PSC < \infty $. In this case, one can replace the maximal parabolic $ \QC $ by a Borel subgroup $ \BLC $ of $ \LC $, and still there are finitely many $ \LC $-orbits in $ \GC/\PSC \times \LC/\BLC $ (see \cite{NO.2011} and \cite[Table~2]{HNOO.2013}). Even if there are only finitely many orbits of a complex algebraic group, say $ \LC $, acting on a smooth algebraic variety, there is no guarantee of the finiteness of orbits of real forms in general \footnote{ It is known that there is a canonical bijection $ L(\R)\backslash (L/H)(\R) = \ker(H^1(C; H) \to H^1(C; L)) $, where $ C = \Gal(\C/\R) $ and $ H^1(C; H) $ denotes the first Galois cohomology group. See \cite[Eq.~(II.5.6)]{Borel.Ji.2006}.}. So it seems impossible to deduce our problem over the reals from the results over $ \C $. On the other hand, in the case of the complex full flag varieties, there exists a famous bijection between $ \KC $-orbits and $ \GR $-orbits, called the Matsuki correspondence \cite{Matsuki.1988}. Both families of orbits are finite in number. In the case of double flag varieties, no such correspondence is known. It might be interesting to pursue one. Toshiyuki Kobayashi informed us that the finiteness of orbits $ \# X/ L < \infty $ also follows from general results on visible actions \cite{Kobayashi.2005}. We thank him for kindly pointing this out. \bigskip \noindent \textbf{Acknowledgement.} K.~N.\ thanks Aarhus University for its warm hospitality during the visits in August 2015 and 2016. 
Most of this work has been done in those periods. \section{Elementary properties of $ G = \Sp_{2n}(\R) $} In this section, we collect well-known basic facts on the symplectic group in order to fix notation. We define \begin{equation*} G = \Sp_{2n}(\R) = \{ g \in \GL_{2n}(\R) \mid \transpose{g} J_n g = J_n \} \quad \text{ where } J_n = \mattwo{0}{-1_n}{1_n}{0} . \end{equation*} The following lemmas are quite elementary and well known; we present them only to fix notation. \begin{lemma}\label{lemma:g-belongs-to-Sp} If we write $ g = \mattwo{a}{b}{c}{d} \in \GL_{2n}(\R) $, then $ g $ belongs to $ G $ if and only if $ \transpose{a} c , \; \transpose{b} d \in \Sym_n(\R) $ and $ \transpose{a} d - \transpose{c} b = 1 $. \end{lemma} \begin{proof} We rewrite $ \transpose{g} J g = J $ in block coordinates, and get \begin{align*} \transpose{c} a - \transpose{a} c &= 0 & \transpose{c} b - \transpose{a} d &= - 1 \\ \transpose{d} a - \transpose{b} c &= 1 & \transpose{d} b - \transpose{b} d &= 0 \end{align*} which shows the lemma. \end{proof} \begin{lemma}\label{lemma:inverse-symplectic-matrix} If we write $ g = \mattwo{a}{b}{c}{d} \in G $, then $ g^{-1} = \mattwo{\transpose{d}}{-\transpose{b}}{-\transpose{c}}{\transpose{a}} $. \end{lemma} \begin{proof} Since $ \transpose{g} J g = J $, we get $ g^{-1} = J^{-1} \transpose{g} J = - J \transpose{g} J = \mattwo{\transpose{d}}{-\transpose{b}}{-\transpose{c}}{\transpose{a}} $. \end{proof} \begin{lemma}\label{lemma:conjugation-of-psg-elements} If we write $ g = \mattwo{a}{b}{c}{d} \in G $ and $ p = \mattwo{x}{z}{0}{y} \in P_S $, then \begin{equation} g^{-1} p g = \begin{pmatrix} \transpose{d} x a + \transpose{d} z c - \transpose{b} y c & \transpose{d} x b + \transpose{d} z d - \transpose{b} y d \\ - \transpose{c} x a - \transpose{c} z c + \transpose{a} y c & - \transpose{c} x b - \transpose{c} z d + \transpose{a} y d \end{pmatrix} . \end{equation} Note that, in fact, $ y = \transpose{x}^{-1} $. 
\end{lemma} \begin{proof} A direct calculation, using Lemma~\ref{lemma:inverse-symplectic-matrix}. \end{proof} A maximal compact subgroup $ K $ of $ G $ is given by $ K = \Sp_{2n}(\R) \cap \OO(2n) $. \begin{lemma}\label{lemma:max-compact-subgroup} An element $ g = \mattwo{a}{b}{c}{d} \in G $ belongs to $ K $ if and only if \begin{align*} & b = - c , \; d = a , \\ & \transpose{a} b \in \Sym_n(\R) , \;\; \text{ and } \;\; \transpose{a} a + \transpose{b} b = 1_n \end{align*} hold. Consequently, \begin{equation} K = \left\{ \begin{pmatrix} a & b \\ -b & a \end{pmatrix} \Bigm| a + i b \in \U(n) \right\}. \end{equation} \end{lemma} \begin{proof} $ g \in G $ belongs to $ \OO(2n) $ if and only if $ \transpose{g} = g^{-1} $. From Lemma~\ref{lemma:inverse-symplectic-matrix}, we get $ a = d $ and $ c = - b $. From Lemma~\ref{lemma:g-belongs-to-Sp}, we get the remaining two equalities. Note that if $ a + i b \in \U(n) $, then \begin{align*} (a + i b)^{\ast} \, (a + i b) &= (\transpose{a} - i \transpose{b}) \, (a + i b ) \\ &= (\transpose{a} a + \transpose{b} b ) + i ( \transpose{a} b - \transpose{b} a ) = 1_n . \end{align*} This last formula is equivalent to the above two equalities. \end{proof} \section{$ L $-orbits on the Lagrangian flag variety $ \Lambda $} \label{section:L-orbits-on-Lagrangian-Grassmannian} Now, let us begin with the investigation of $ L $-orbits on $ \Lambda = G/ P_S $; the results here should be well known. Let us denote the Weyl group of $ P_S $ by $ W_{P_S} $, which is isomorphic to $ S_n $, the symmetric group on $ n $ letters. In fact, it coincides with the Weyl group of $ L $. By the Bruhat decomposition, we have \begin{equation} G/P_S = \bigcup_{w \in W_G} P_S w P_S/P_S = \bigsqcup_{w \in W_{P_S} \backslash W_G/ W_{P_S}} P_S w P_S/P_S, \end{equation} where in the second sum $ w $ runs over representatives of the double cosets. 
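Before analyzing the double cosets, we record a numerical sanity check of the elementary lemmas above (a minimal Python sketch, numpy assumed, helper names ours): it verifies the block conditions of Lemma~\ref{lemma:g-belongs-to-Sp}, the inverse formula of Lemma~\ref{lemma:inverse-symplectic-matrix}, and the unitary description of $ K $ in Lemma~\ref{lemma:max-compact-subgroup}.

```python
import numpy as np

n = 3
J = np.block([[np.zeros((n, n)), -np.eye(n)],
              [np.eye(n), np.zeros((n, n))]])

# An element of K = Sp_{2n}(R) ∩ O(2n): by the lemma on the maximal compact
# subgroup it has the form [[a, b], [-b, a]] with a + ib unitary.
rng = np.random.default_rng(1)
m = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
q, _ = np.linalg.qr(m)                       # q is unitary
a, b = q.real, q.imag
c, d = -b, a
g = np.block([[a, b], [c, d]])

assert np.allclose(g.T @ J @ g, J)           # g is symplectic
assert np.allclose(g.T @ g, np.eye(2 * n))   # g is orthogonal

# Membership criterion: a^T c and b^T d symmetric, a^T d - c^T b = 1.
assert np.allclose(a.T @ c, (a.T @ c).T)
assert np.allclose(b.T @ d, (b.T @ d).T)
assert np.allclose(a.T @ d - c.T @ b, np.eye(n))

# Inverse formula: g^{-1} = [[d^T, -b^T], [-c^T, a^T]].
g_inv = np.block([[d.T, -b.T], [-c.T, a.T]])
assert np.allclose(g_inv @ g, np.eye(2 * n))
```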
The double coset space $ W_{P_S} \backslash W_G/ W_{P_S} \simeq S_n \backslash (S_n \ltimes (\Z/ 2 \Z)^n) / S_n $ has a complete system of representatives of the form \begin{equation*} \{ w_k = (1, \dots, 1, -1, \dots, -1) = (1^k, (-1)^{n - k}) \mid 0 \leq k \leq n \} \subset (\Z/ 2 \Z)^n . \end{equation*} We realize $ w_k $ in $ G $ as \begin{equation} w_k = \left( \begin{array}{cc|cc} & & {-1_{n -k}} & \\ & {1_k} & & \\ \hline {1_{n -k}} & & & \\ & & & {1_k} \end{array} \right) . \end{equation} \begin{lemma}\label{lemma:affine-open-in-kth-Bruhat-cell} For $ 0 \leq k \leq n $, we temporarily write $ w = w_k $. Then $ P_S w P_S/ P_S = w (w^{-1} P_S w) P_S/P_S \simeq w^{-1} P_S w /(w^{-1} P_S w \cap P_S ) $ contains $ N_w $ given below as an open dense subset. \begin{equation} \label{eq:open-affine-space-Nw} N_w = \left\{ \begin{pmatrix} 1_n & 0 \\ \eta & 1_n \end{pmatrix} \Bigm| \eta = \begin{pmatrix} \zeta & \xi \\ \transpose{\xi} & 0_k \end{pmatrix}, \; \zeta \in \Sym_{n - k}(\R), \, \xi \in \Mat_{n - k, k}(\R) \right\} \end{equation} \end{lemma} \begin{proof} Take $ \mattwo{x}{z}{0}{y} \in P_S $ and write $ w = \mattwo{a}{b}{c}{d} $, where $ a = d = \mattwo{0}{0}{0}{1_k} , \; c = - b = \mattwo{1_{n - k}}{0}{0}{0} $. 
Then, using the formula in Lemma~\ref{lemma:conjugation-of-psg-elements}, we calculate \begin{align} w^{-1} \mattwo{x}{z}{0}{y} w &= \begin{pmatrix} \transpose{d} x a + \transpose{d} z c - \transpose{b} y c & \transpose{d} x b + \transpose{d} z d - \transpose{b} y d \\ - \transpose{c} x a - \transpose{c} z c + \transpose{a} y c & - \transpose{c} x b - \transpose{c} z d + \transpose{a} y d \end{pmatrix} \\ &= \begin{pmatrix} x_{22} + z_{21} + y_{11} & - x_{21} + z_{22} + y_{12} \\ - x_{12} - z_{11} + y_{21} & x_{11} - z_{12} + y_{22} \end{pmatrix} \\ &= \begin{pmatrix} \begin{array}{cc|cc} y_{11} & 0 & 0 & y_{12} \\ z_{21} & x_{22} & -x_{21} & z_{22} \\ \hline -z_{11} & -x_{12} & x_{11} & -z_{12} \\ y_{21} & 0 & 0 & y_{22} \end{array} \end{pmatrix} \label{eq:conjugation-wpw} \end{align} Let us rewrite the last formula in the form \begin{equation*} \mattwo{1}{0}{\eta}{1} \mattwo{\alpha}{\beta}{0}{\delta} = \mattwo{\alpha}{\beta}{\eta \alpha}{\eta \beta + \delta} , \end{equation*} so that we get \begin{align} \eta &= \begin{pmatrix} -z_{11} & -x_{12} \\ y_{21} & 0 \end{pmatrix} \begin{pmatrix} y_{11} & 0 \\ z_{21} & x_{22} \end{pmatrix}^{-1} \notag \\ &= \begin{pmatrix} -z_{11} & -x_{12} \\ y_{21} & 0 \end{pmatrix} \begin{pmatrix} y_{11}^{-1} & 0 \\ - x_{22}^{-1} z_{21} y_{11}^{-1} & x_{22}^{-1} \end{pmatrix} \notag \\ &= \begin{pmatrix} -z_{11} y_{11}^{-1} + x_{12} x_{22}^{-1} z_{21} y_{11}^{-1} & - x_{12} x_{22}^{-1} \\ y_{21} y_{11}^{-1} & 0 \end{pmatrix}, \label{eq:last-formula-eta} \end{align} provided that $ y_{11}^{-1} $ and $ x_{22}^{-1} $ exist (an open condition). Note that we can take $ z_{11} $ and $ z_{21} $ arbitrarily, and also that, if we put $ x_{21} = 0 $ and $ y_{12} = 0 $, we can take $ x_{12} $ (which determines $ y_{21} $) arbitrarily. This shows that the last formula \eqref{eq:last-formula-eta} exhausts all $ \eta $ of the form in \eqref{eq:open-affine-space-Nw}. 
\end{proof} \begin{remark} The formula \eqref{eq:last-formula-eta} actually gives a symmetric matrix. One can check this directly, using $ y = \transpose{x}^{-1} $. See also Lemma~\ref{lemma:L-action-on-affine-open-set-Nw} below. \end{remark} Let us consider the action of $ L = \GL_n(\R) $ on the $ k $-th Bruhat cell $ P_S w_k P_S/P_S $. It is just the left multiplication. However, if we identify the cell with $ w^{-1} P_S w/ (w^{-1} P_S w) \cap P_S $ as in Lemma~\ref{lemma:affine-open-in-kth-Bruhat-cell}, the action of $ a \in L $ is given by left multiplication by $ w^{-1} a w $. This conjugation is explicitly given as \begin{multline} \label{eq:waw-expressed-by-h} w^{-1} a w = \begin{pmatrix} \begin{array}{cc|cc} h'_1 & 0 & 0 & h'_2 \\ 0 & h_4 & -h_3 & 0 \\ \hline 0 & -h_2 & h_1 & 0 \\ h'_3 & 0 & 0 & h'_4 \end{array} \end{pmatrix} \\ \text{ where } a = \mattwo{h}{0}{0}{\transpose{h}^{-1}}, \; h = \mattwo{h_1}{h_2}{h_3}{h_4}, \; \transpose{h}^{-1} = h' = \mattwo{h'_1}{h'_2}{h'_3}{h'_4} , \end{multline} which can be read off from Equation~\eqref{eq:conjugation-wpw}. \begin{lemma}\label{lemma:L-orbits-on-kth-Bruhat-cell-finiteness-representatives} There are exactly $ \binom{n - k + 2}{2} $ $ L $-orbits on the Bruhat cell $ P_S w_k P_S/P_S \; (0 \leq k \leq n) $. A complete set of representatives of the $ L $-orbits is given by \begin{equation*} \begin{split} \Bigl\{ \mattwo{1}{z}{0}{1} w_k P_S/P_S \Bigm| z = \mattwo{I_{r,s}}{0}{0}{0} \in \Sym_n(\R), \, 0 \leq r+ s \leq n - k \Bigr\} , \\ \quad \text{ where $ I_{r,s} = \diag(1_r, - 1_s) $.} \end{split} \end{equation*} \end{lemma} \begin{proof} For brevity, we write $ w = w_k $. First, we observe that by left multiplication by $ L $ we can clearly choose orbit representatives from the set \begin{equation*} \{ \mattwo{1}{z}{0}{1} w P_S/P_S \mid z \in \Sym_n(\R) \} . 
\end{equation*} Then, by the calculations in the proof of Lemma~\ref{lemma:affine-open-in-kth-Bruhat-cell} and Equation~\eqref{eq:last-formula-eta}, the problem reduces to the subset \begin{equation} \biggl\{ \mattwo{1_n}{0}{\eta}{1_n} \biggm| \eta = \mattwo{\zeta}{0}{0}{0}, \; \zeta \in \Sym_{n - k}(\R) \biggr\} \subset w^{-1} P_S w / (w^{-1} P_S w) \cap P_S. \end{equation} Now let us consider the action of $ L $ on this set. Take $ a = \mattwo{h}{0}{0}{\transpose{h}^{-1}} \in L $, where $ h = \diag(h_1, 1_k) \; (h_1 \in \GL_{n - k}(\R)) $. Then, the action of $ a $ is left multiplication by $ w^{-1} a w $ as explained above (see Equation~\eqref{eq:waw-expressed-by-h}). As a consequence, this yields \begin{equation*} (w^{-1} a w) \mattwo{1}{0}{\eta}{1} P_S/ P_S = \begin{pmatrix} \begin{array}{cc|cc} h_1' & 0 & & \\ 0 & 1 & & \\ \hline h_1 \zeta & 0 & h_1 & 0 \\ 0 & 0 & 0 & 1 \end{array} \end{pmatrix} P_S/P_S = \begin{pmatrix} \begin{array}{cc|cc} 1 & 0 & & \\ 0 & 1 & & \\ \hline h_1 \zeta \transpose{h_1} & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array} \end{pmatrix} P_S/P_S. \end{equation*} Now it is well known that for a suitable choice of $ h_1 \in \GL_{n - k}(\R) $, we get \begin{equation*} h_1 \zeta \transpose{h_1} = \mattwo{I_{r,s}}{0}{0}{0} \end{equation*} for a certain signature $ (r, s) $ with $ r + s \leq n - k $. \end{proof} Let us explicitly describe the $ L $-action on $ N_w \subset (w^{-1} P_S w) / (w^{-1} P_S w) \cap P_S $. 
\begin{lemma}\label{lemma:L-action-on-affine-open-set-Nw} The action of $ w^{-1} a w $ in Equation~\eqref{eq:waw-expressed-by-h} on \begin{equation*} \mattwo{1_n}{0_n}{\eta}{1_n} \in N_w , \quad \eta = \mattwo{\zeta}{\xi}{\transpose{\xi}}{0_k} \quad (\zeta \in \Sym_{n - k}(\R), \; \xi \in \Mat_{n - k, k}(\R)) , \end{equation*} is given by \begin{equation*} \eta = \mattwo{\zeta}{\xi}{\transpose{\xi}}{0} \mapsto a \wdot \eta = \mattwo{A}{B}{\transpose{B}}{0} \quad \text{ where } \quad \Biggl\{ \begin{array}{l} A = (h_1 + B h_3) \zeta \transpose{(h_1 + B h_3)}, \\[1.2ex] B = (- h_2 + h_1 \xi) (h_4 - h_3 \xi)^{-1} . \end{array} \end{equation*} Thus the action on the $ \xi $-part is linear fractional, while the action on the $ \zeta $-part is a mixture of the unimodular and linear fractional actions. \end{lemma} \begin{proof} Take $ a \in L $ as in Equation~\eqref{eq:waw-expressed-by-h}; we use the formula for $ w^{-1} a w $ given there. \begin{align*} w^{-1} a w & \, \begin{pmatrix} \begin{array}{cc|cc} 1_{n - k} & 0 & & \\ 0 & 1_k & & \\ \hline \zeta & \xi & 1_{n - k} & 0 \\ \transpose{\xi} & 0 & 0 & 1_k \end{array} \end{pmatrix} = \begin{pmatrix} \begin{array}{cc|cc} h'_1 & 0 & 0 & h'_2 \\ 0 & h_4 & -h_3 & 0 \\ \hline 0 & -h_2 & h_1 & 0 \\ h'_3 & 0 & 0 & h'_4 \end{array} \end{pmatrix} \begin{pmatrix} \begin{array}{cc|cc} 1_{n - k} & 0 & & \\ 0 & 1_k & & \\ \hline \zeta & \xi & 1_{n - k} & 0 \\ \transpose{\xi} & 0 & 0 & 1_k \end{array} \end{pmatrix} \\ &= \begin{pmatrix} \begin{array}{cc|cc} h'_1 + h'_2 \transpose{\xi} & 0 & 0 & h'_2 \\ -h_3 \zeta & h_4 - h_3 \xi & -h_3 & 0 \\ \hline h_1 \zeta & -h_2 + h_1 \xi & h_1 & 0 \\ h'_3 + h'_4 \transpose{\xi} & 0 & 0 & h'_4 \end{array} \end{pmatrix} \\ &=: \mattwovbar{1}{0}{\eta}{1} \mattwovbar{\alpha}{\beta}{0}{\delta} = \mattwovbar{\alpha}{\beta}{\eta \alpha}{\eta \beta + \delta} . 
\end{align*} From this, we calculate \begin{align*} \eta &= \begin{pmatrix} \begin{array}{cc} h_1 \zeta & -h_2 + h_1 \xi \\ h'_3 + h'_4 \transpose{\xi} & 0 \end{array} \end{pmatrix} \begin{pmatrix} \begin{array}{cc} h'_1 + h'_2 \transpose{\xi} & 0 \\ -h_3 \zeta & h_4 - h_3 \xi \end{array} \end{pmatrix}^{-1} \\ &= \begin{pmatrix} \begin{array}{cc} h_1 \zeta & -h_2 + h_1 \xi \\ h'_3 + h'_4 \transpose{\xi} & 0 \end{array} \end{pmatrix} \begin{pmatrix} \begin{array}{cc} (h'_1 + h'_2 \transpose{\xi})^{-1} & 0 \\ (h_4 - h_3 \xi)^{-1} h_3 \zeta (h'_1 + h'_2 \transpose{\xi})^{-1} & (h_4 - h_3 \xi)^{-1} \end{array} \end{pmatrix} \\ &=: \mattwo{A}{B}{C}{0}, \end{align*} where \begin{align*} A &= h_1 \zeta (h'_1 + h'_2 \transpose{\xi})^{-1} + (-h_2 + h_1 \xi) (h_4 - h_3 \xi)^{-1} h_3 \zeta (h'_1 + h'_2 \transpose{\xi})^{-1} , \\ B &= (-h_2 + h_1 \xi) (h_4 - h_3 \xi)^{-1} , \\ C &= ( h'_3 + h'_4 \transpose{\xi}) (h'_1 + h'_2 \transpose{\xi})^{-1} . \end{align*} We now rewrite these formulas in a neater form. First, we claim that $ B = \transpose{C} $ holds. To check this, we compare $ h \mattwo{1}{-\xi}{0}{1} $ and $ \transpose{h}^{-1} \mattwo{1}{0}{\transpose{\xi}}{1} $. 
Using the notation $ h' = \transpose{h}^{-1} $, we calculate both as \begin{align} \label{eq:h1-x01} & h \mattwo{1}{-\xi}{0}{1} = \mattwo{h_1}{h_2}{h_3}{h_4} \mattwo{1}{-\xi}{0}{1} = \begin{pmatrix} \begin{array}{cc} h_1 & - h_1 \xi + h_2 \\ h_3 & - h_3 \xi + h_4 \end{array} \end{pmatrix} \\ & \transpose{h}^{-1} \mattwo{1}{0}{\transpose{\xi}}{1} = h' \mattwo{1}{0}{\transpose{\xi}}{1} = \mattwo{h'_1}{h'_2}{h'_3}{h'_4} \mattwo{1}{0}{\transpose{\xi}}{1} = \begin{pmatrix} \begin{array}{cc} h'_1 + h'_2 \transpose{\xi} & h'_2 \\ h'_3 + h'_4 \transpose{\xi} & h'_4 \end{array} \end{pmatrix} \notag \\ \intertext{ $ \therefore \;\; $ taking transpose, } \label{eq:1x01h-1} & \mattwo{1}{\xi}{0}{1} h^{-1} = \begin{pmatrix} \begin{array}{cc} \transpose{h'_1} + \xi \transpose{h'_2} & \transpose{h'_3} + \xi \transpose{h'_4} \\ \transpose{h'_2} & \transpose{h'_4} \end{array} \end{pmatrix} \end{align} Since \eqref{eq:h1-x01} and \eqref{eq:1x01h-1} are mutually inverse, we get \begin{align} \label{eq:hxixih-11} & (\transpose{h'_1} + \xi \transpose{h'_2}) h_1 + (\transpose{h'_3} + \xi \transpose{h'_4}) h_3 = 1_{n - k} , \\ \label{eq:hxixih-12} & (\transpose{h'_1} + \xi \transpose{h'_2}) (- h_1 \xi + h_2) + (\transpose{h'_3} + \xi \transpose{h'_4}) (- h_3 \xi + h_4) = 0 , \\ \label{eq:hxixih-21} & \transpose{h'_2} h_1 + \transpose{h'_4} h_3 = 0 , \\ \label{eq:hxixih-22} & \transpose{h'_2} (- h_1 \xi + h_2) + \transpose{h'_4} (- h_3 \xi + h_4) = 1_k , \\ \intertext{and taking transpose of Equation \eqref{eq:hxixih-11}, } \label{eq:hxixih-11-transposed} & \transpose{h_1} (h'_1 + h'_2 \transpose{\xi}) + \transpose{h_3} (h'_3 + h'_4 \transpose{\xi}) = 1_{n - k} . 
\end{align} Now, we calculate \begin{multline*} \transpose{C} = \transpose{\Bigl[ (h'_3 + h'_4 \transpose{\xi}) (h'_1 + h'_2 \transpose{\xi})^{-1} \Bigr]} \\ = (\transpose{h'_1} + \xi \transpose{h'_2})^{-1} (\transpose{h'_3} + \xi \transpose{h'_4}) = - (- h_1 \xi + h_2) (- h_3 \xi + h_4)^{-1} = B , \end{multline*} where in the last equality we use Equation \eqref{eq:hxixih-12}. This also proves the formula for the linear fractional action on $ \xi $. Secondly, we check that $ A $ is symmetric. \begin{align*} A &= h_1 \zeta (h'_1 + h'_2 \transpose{\xi})^{-1} + B h_3 \zeta (h'_1 + h'_2 \transpose{\xi})^{-1} = (h_1 + B h_3) \, \zeta \, (h'_1 + h'_2 \transpose{\xi})^{-1} \\ &= (h_1 + B h_3) \, \zeta \Bigl[ \transpose{h_1} + \transpose{h_3} (h'_3 + h'_4 \transpose{\xi}) (h'_1 + h'_2 \transpose{\xi})^{-1} \Bigr] \quad (\text{by Eq.~\eqref{eq:hxixih-11-transposed}}) \\ &= ( h_1 + B h_3 ) \zeta ( \transpose{h_1} + \transpose{h_3} C ) \\ &= (h_1 + B h_3) \zeta \transpose{(h_1 + B h_3)} . \qquad (\because \;\; C = \transpose{B}) \end{align*} This proves that $ A $ is symmetric and, at the same time, establishes the formula for the action on $ \zeta $ in the lemma. \end{proof} \begin{lemma}\label{lemma:stabilizer-L-zeta} For a representative \begin{equation*} p_{(k; r, s)} := \begin{pmatrix} \begin{array}{cc|cc} 1_{n - k} & 0 & & \\ 0 & 1_k & & \\ \hline \zeta & 0 & 1_{n - k} & 0 \\ 0 & 0 & 0 & 1_k \end{array} \end{pmatrix}, \qquad \zeta = \mattwo{I_{r,s}}{0}{0}{0} \in \Sym_{n - k}(\R) \end{equation*} of $ L $-orbits in the $ k $-th Bruhat cell $ w^{-1} P_S w/ (w^{-1} P_S w) \cap P_S $ (see Lemma~\ref{lemma:L-orbits-on-kth-Bruhat-cell-finiteness-representatives}), the stabilizer is given by \begin{equation}\label{eq:stabilizer-pkrs} \Stab_L(p_{(k; r,s)}) = \Bigl\{ a = \mattwo{h}{0}{0}{\transpose{h}^{-1}} \Bigm| h = \mattwo{h_1}{0}{h_3}{h_4} \in \GL_n(\R), \; h_1 \zeta \transpose{h_1} = \zeta \Bigr\} . 
\end{equation} Thus an orbit $ \calorbit_{(k; r,s)} $ through $ p_{(k; r, s)} $ is isomorphic to $ \GL_n(\R) / H_{(k; r, s)} $, where $ H_{(k; r,s)} $ is the group of matrices $ h $ given in Equation~\eqref{eq:stabilizer-pkrs}. \end{lemma} \begin{proof} We put $ \xi = 0 $ in Lemma~\ref{lemma:L-action-on-affine-open-set-Nw}, and assume that $ a \wdot \eta = \eta $. If $ h_4 $ is invertible, the condition $ B = - h_2 h_4^{-1} = 0 $ gives $ h_2 = 0 $, and then $ A = h_1 \zeta \transpose{h_1} = \zeta $. Since the stabilizer is a closed subgroup, we must have $ h_2 = 0 $ in any case (in fact, $ h_4 $ is automatically invertible). \end{proof} For later reference, we reinterpret the above lemma in the Lagrangian realization. Recall that $ G/P_S $ is isomorphic to the set of Lagrangian subspaces in $ V $ denoted as $ \Lambda $. The isomorphism is explicitly given by $ G/P_S \ni g P_S \mapsto g \cdot V^+ \in \Lambda $, where we identify $ V^+ $ with the space $ \spanr{\eb_1, \dots, \eb_n} $ spanned by the first $ n $ fundamental vectors in $ V = \R^{2n} $. For $ v = \sum_{i = 1}^n c_i \eb_i \in V^+ $, we denote $ v^{(n -k)} = \sum_{i = 1}^{n - k} c_i \eb_i $ and $ v_{(k)} = \sum_{j = 1}^k c_{n -k + j} \eb_{n - k + j} $ so that $ v = v^{(n - k)} + v_{(k)} $. \begin{lemma}\label{lemma:L-orbits-on-Lagrangian-Grassmannian} With the notation introduced above, the $ L $-orbits on the Lagrangian Grassmannian $ \Lambda \simeq G/P_S $ have representatives of the following form. \begin{equation*} V^{(k; r,s)} = \Biggl\{ u = \begin{pmatrix} \begin{array}{c} - \zeta v^{(n -k)} \\ v_{(k)} \\ \hline v^{(n - k)} \\ 0 \end{array} \end{pmatrix} \Bigg| v \in V^+ \Biggr\} , \quad \text{where $ \zeta = \mattwo{I_{r,s}}{0}{0}{0} $.} \end{equation*} Here $ 0 \leq k \leq n $, and $ r, s \geq 0 $ denote the signature, which satisfies $ 0 \leq r + s \leq n - k $. 
\end{lemma} \begin{proof} By Lemma~\ref{lemma:stabilizer-L-zeta}, we know the representatives of $ L $-orbits in the $ k $-th Bruhat cell $ w^{-1} P_S w/ (w^{-1} P_S w) \cap P_S \;\; (w = w_k) $. They are denoted as $ p_{(k; r,s)} $. The corresponding Lagrangian subspace is obtained by $ w_k \, p_{(k; r,s)} \cdot V^+ $. If we take $ v \in V^+ $ and write it as $ v = v^{(n -k)} + v_{(k)} $ as just before the lemma, then we obtain \begin{align*} w_k \, p_{(k; r,s)} v &= \begin{pmatrix} \begin{array}{cc|cc} & & -1_{n - k} & \\ & 1_k & & \\ \hline 1_{n - k} & & & \\ & & & 1_k \end{array} \end{pmatrix} \begin{pmatrix} \begin{array}{cc|cc} 1_{n - k} & 0 & & \\ 0 & 1_k & & \\ \hline \zeta & 0 & 1_{n -k} & 0 \\ 0 & 0 & 0 & 1_k \end{array} \end{pmatrix} \begin{pmatrix} \begin{array}{c} v^{(n -k)} \\ v_{(k)} \\ \hline 0 \\ 0 \end{array} \end{pmatrix} \\ &= \begin{pmatrix} \begin{array}{cc|cc} & & -1_{n - k} & \\ & 1_k & & \\ \hline 1_{n - k} & & & \\ & & & 1_k \end{array} \end{pmatrix} \begin{pmatrix} \begin{array}{c} v^{(n -k)} \\ v_{(k)} \\ \hline \zeta v^{(n -k)} \\ 0 \end{array} \end{pmatrix} = \begin{pmatrix} \begin{array}{c} -\zeta v^{(n - k)} \\ v_{(k)} \\ \hline v^{(n - k)} \\ 0 \end{array} \end{pmatrix} , \end{align*} which proves the lemma. \end{proof} \begin{theorem}\label{theorem:finiteness-L-orbits} Let $ B_n \subset L $ be a Borel subgroup of $ L $. The double flag variety $ G/P_S \times L/B_n $ has finitely many $ L $-orbits. In other words, $ G/P_S $ has finitely many $ B_n $-orbits. In this sense, $ G/P_S $ is a real $ L $-spherical variety. \end{theorem} \begin{proof} First, we consider the open Bruhat cell, i.e., the case where $ k = 0 $ and $ w = w_0 = J_n $. 
The cell is isomorphic to $ w_0^{-1} P_S w_0 / (w_0^{-1} P_S w_0 \cap P_S) \simeq N_{w_0} $, where \begin{equation} N_{w_0} = \Bigl\{ \mattwo{1_n}{0}{z}{1_n} \Bigm| z \in \Sym_n(\R) \Bigr\} \end{equation} and the action of $ h \in \GL_n(\R) \simeq L $ is given by the unimodular action: $ z \mapsto h z \transpose{h} $. So a complete set of representatives of the $ L $-orbits is given by $ \{ z_{r,s} := \diag(1_r, -1_s, 0) \mid r, s \geq 0, \; r + s \leq n \} $. Let $ H_{r,s} \subset L $ be the stabilizer of $ z_{r,s} $ (note that $ H_{r,s} $ is denoted as $ H_{(0; r,s)} $ in Lemma~\ref{lemma:stabilizer-L-zeta}. We omit $ 0 $ for brevity). Then an $ L $-orbit in the open Bruhat cell is isomorphic to $ L/H_{r,s} $. What we must prove is that there are only finitely many $ B_n $-orbits on $ L/H_{r,s} $. A direct calculation shows that \begin{equation} H_{r,s} = \Bigl\{ \mattwo{\alpha}{\beta}{0}{\gamma} \in \GL_n(\R) \Bigm| \alpha \in \OO(r,s), \; \gamma \in \GL_{n - (r + s)}(\R) \Bigr\} \subset \GL_n(\R) , \end{equation} where $ \OO(r,s) $ denotes the indefinite orthogonal group preserving the quadratic form defined by $ \diag(1_r, -1_s) $. Note that if $ r + s = n $, we simply get $ H_{r,s} = \OO(r,s) $, which is a symmetric subgroup in $ \GL_n(\R) $. It is well known that a minimal parabolic subgroup $ \miniP $ has finitely many orbits on $ G/H $, where $ G $ is a general connected reductive Lie group, and $ H $ a symmetric subgroup (i.e., an open subgroup of the fixed point subgroup of a non-trivial involution of $ G $). For this, we refer the readers to \cite{Wolf.1974}, \cite{Matsuki.1979}, \cite{Rossmann.1979}. Thus, the Borel subgroup $ B_n $ has an open orbit on $ L/\OO(r,s) $ when $ r + s = n $. This is equivalent to saying that $ \lie{b}'_n + \lie{o}(r,s) = \lie{l} $ for some choice $ \lie{b}'_n $ of a Borel subalgebra of $ \lie{l} = \Lie L $. On the other hand, the following is known. 
\begin{lemma}\label{lemma:exist-open-orbit-implies-finiteness} Let $ G $ be a connected reductive Lie group and $ \miniP $ its minimal \psg[]. For any closed subgroup $ H $ of $ G $, let us consider the action of $ H $ on the flag variety $ G/\miniP $ by left translation. Then the following are equivalent. \begin{thmenumerate} \item There are finitely many $ H $-orbits in $ G/\miniP $, i.e., we have $ \# H \backslash G/ \miniP < \infty $. \item There exists an open $ H $-orbit in $ G/\miniP $. \item There exists $ g \in G $ for which $ \Ad g \cdot \lie{h} + \minip = \lie{g} $ holds. \end{thmenumerate} \end{lemma} For the proof of this lemma, see \cite[Remark~2.5 4)]{Kobayashi.T.Oshima.2013}. There is a misprint there, however, so we repeat the remark here. Matsuki \cite{Matsuki.1991} observed that the lemma follows if it is valid in the real rank one case, while the real rank one case had already been established by Kimelfeld \cite{Kimelfeld.1987}. See also \cite{Kroetz.Schlichtkrull.2013} for another proof. Now, in the case where $ r + s < n $, since the upper left corner of $ H_{r,s} $ is $ \OO(r,s) $, we can find a Borel subgroup $ \lie{b}'_n $ in $ \lie{l} $ for which $ \lie{b}'_n + \lie{h}_{r,s} = \lie{l} $ holds. By the above lemma, $ H_{r,s} $ has finitely many orbits in $ L/B_n $, i.e., $ \# B_n \backslash L/ H_{r,s} < \infty $. Secondly, let us consider the general Bruhat cell. Then, by Lemma~\ref{lemma:stabilizer-L-zeta}, we know there are finitely many $ L $-orbits and they are isomorphic to $ L/H_{(k;r,s)} $. 
The Lie algebra of $ H_{(k; r,s)} $ realized in $ \GL_n(\R) $ is of the following form: \begin{equation} \lie{h}_{(k; r,s)} = \Bigl\{ \begin{pmatrix} \begin{array}{c|c} \arraytwo{\alpha}{\beta}{0}{\gamma} & 0 \\ \hline \delta & \eta \end{array} \end{pmatrix} \Bigm| \alpha \in \lie{o}(r,s) \Bigr\} \subset \lie{gl}_n(\R), \end{equation} where $ \mattwo{\alpha}{\beta}{0}{\gamma} \in \lie{gl}_{n - k}(\R) $ and $ \delta \in \Mat_{k, n -k}(\R), \; \eta \in \gl_k(\R) $. Let us choose a Borel subalgebra $ \lie{b}'_{n -k} $ of $ \gl_{n - k}(\R) $ such that \begin{equation*} \lie{b}'_{n -k} + \{ \mattwo{\alpha}{\beta}{0}{\gamma} \mid \alpha \in \lie{o}(r, s) \} = \gl_{n - k}(\R) , \end{equation*} applying the arguments for the open Bruhat cell. Then we can take \begin{equation*} \lie{b}_n = \begin{pmatrix} \lie{b}'_{n - k} & \ast \\ 0 & \lie{b}_k \end{pmatrix} \end{equation*} as a Borel subalgebra of $ \lie{l} $ which satisfies $ \lie{b}_n + \lie{h}_{(k; r,s)} = \lie{l} $. Thus Lemma~\ref{lemma:exist-open-orbit-implies-finiteness} tells us that the number of $ B_n $-orbits in $ L/H_{(k; r, s)} $ is finite. \end{proof} \begin{corollary} For any \psg $ Q $ of $ L $, the double flag variety $ X = \Lambda \times \Xi_d = G/P_S \times L/ Q $ has finitely many $ L $-orbits, hence it is of finite type. \end{corollary} \section{Maslov index} In \cite{Kashiwara.Schapira.1994}, Kashiwara and Schapira described the orbit decomposition of the diagonal action of $ G = \Sp_{2n}(\R) $ on the triple product $ \Lambda^3 = \Lambda \times \Lambda \times \Lambda $ of Lagrangian Grassmannians. They used an invariant called the Maslov index to classify the orbits and concluded that there are only finitely many orbits, i.e., $ \# \Lambda^3 / G < \infty $. Let us explain the relation between their result and ours. Fix points $ x_{\pm} \in \Lambda $ which correspond to the Lagrangian subspaces $ V^{\pm} \subset V $. 
We consider the $ G $-stable subset generated by $ \{ x_+ \} \times \{ x_- \} \times \Lambda $, namely \begin{equation*} Y = G \cdot \Bigl( \{ x_+ \} \times \{ x_- \} \times \Lambda \Bigr) . \end{equation*} Since all the orbits go through a point $ \{ x_+ \} \times \{ x_- \} \times \{ \lambda \} $ for a certain $ \lambda \in \Lambda $, the $ G $-orbit decomposition of $ Y $ reduces to the orbit decomposition of $ \Stab_G(\{ x_+ \} \times \{ x_- \}) $ in $ \Lambda = G/ P_S $. It is easy to see that the stabilizer $ \Stab_G(\{ x_+ \} \times \{ x_- \}) $ is exactly $ L $, so that $ Y/G \simeq \Lambda/L \simeq L \backslash G/ P_S $, the last of which we discussed in \S~\ref{section:L-orbits-on-Lagrangian-Grassmannian}. Since $ Y \subset \Lambda^3 $, it has finitely many $ G $-orbits due to \cite{Kashiwara.Schapira.1994}, hence $ \Lambda = G/P_S $ also has finitely many $ L $-orbits. A detailed look at \cite{Kashiwara.Schapira.1994} would also provide the classification of orbits, but we do not carry this out here. However, for proving the finiteness of $ B_n $-orbits, we need the explicit structure of the orbits as homogeneous spaces of $ L $. This is the main point of our analysis in \S~\ref{section:L-orbits-on-Lagrangian-Grassmannian}. \section{Classification of open $ L $-orbits in the double flag variety} Let us return to the original situation of Grassmannians, i.e., our $ Q = P_{(d, n -d)}^{\GL} \subset L $ is a maximal \psg which stabilizes a $ d $-dimensional subspace in $ V^+ $. So the double flag variety $ X = G/P_S \times L/Q $ is isomorphic to the product of the Lagrangian Grassmannian $ \Lambda = \LGrass(\R^{2n}) $ and the Grassmannian $ \Xi_d = \Grass_d(\R^n) $ of $ d $-dimensional subspaces. In this section, we will describe the open $ L $-orbits in $ X $. To study $ L $-orbits in $ X = G/P_S \times L/Q $, we use the identification \begin{equation*} X/L \simeq Q \backslash G/ P_S \simeq \Lambda/Q . 
\end{equation*} In this identification, open $ L $-orbits correspond to open $ Q $-orbits, since they are of the largest dimension. We already know the description of $ L $-orbits on $ \Lambda = G/P_S $ from \S~\ref{section:L-orbits-on-Lagrangian-Grassmannian}. Open $ Q $-orbits are necessarily contained in open $ L $-orbits, hence we concentrate on the open Bruhat cell $ P_S w_0 P_S/P_S \simeq N_{w_0} \simeq \Sym_n(\R) $. $ L $ acts on $ \Sym_n(\R) $ via the unimodular action: $ h \cdot z = h z \transpose{h} \;\; (z \in \Sym_n(\R), \; h \in \GL_n(\R) \simeq L) $. The following lemma, Sylvester's law of inertia, is a special case of Lemma~\ref{lemma:L-orbits-on-kth-Bruhat-cell-finiteness-representatives}. \begin{lemma}\label{lemma:open-L-orbits-on-symnR} Let $ L = \GL_n(\R) $ act on $ N_{w_0} = \Sym_n(\R) $ via the unimodular action. Then, the open orbits are parametrized by the signature $ (p, q) $ with $ p + q = n $. A complete system of representatives is given by $ \{ I_{p,q} \mid p, q \geq 0, \; p+ q = n \} $, where $ I_{p,q} = \diag (1_p, - 1_q) $. \end{lemma} Let us denote the open $ L $-orbits by \begin{equation}\label{eq:def-of-Omega-pq} \begin{aligned} \Omega(p,q) &= \{ z \in \Sym_n(\R) \mid \text{$ z $ has signature $ (p, q) $} \} \\ &= \{ h I_{p,q} \transpose{h} \mid h \in \GL_n(\R) \} . \end{aligned} \end{equation} Thus we are looking for open $ Q $-orbits in $ \Omega(p,q) $. Let us denote $ H = \Stab_L(I_{p,q}) $, the stabilizer of $ I_{p,q} \in \Omega(p,q) $, which is isomorphic to the indefinite orthogonal group $ \OO(p,q) $. As a consequence $ \Omega(p,q) \simeq L/H \simeq \GL_n(\R)/ \OO(p,q) $, so that \begin{equation*} \Omega(p,q) / Q \simeq H \backslash L/Q \simeq \Xi_d/H , \end{equation*} where $ \Xi_d = \Grass_d(\R^n) $ is the Grassmannian of $ d $-dimensional subspaces. So our problem of seeking $ Q $-orbits in $ \Omega(p,q) $ is equivalent to understanding $ H $-orbits in the partial flag variety $ \Xi_d $. 
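The reduction $ h z \transpose{h} = I_{p,q} $ of Sylvester's law of inertia can be made effective through an eigendecomposition; the following is a minimal numerical sketch (Python, numpy assumed; the helper names are ours, not from the paper).

```python
import numpy as np

def signature(z, tol=1e-10):
    """Signature (p, q): numbers of positive and negative eigenvalues of z."""
    w = np.linalg.eigvalsh(z)
    return int((w > tol).sum()), int((w < -tol).sum())

def reduce_to_normal_form(z):
    """Find h in GL_n(R) with h z h^T = diag(1_p, -1_q).

    Assumes z symmetric and non-degenerate (the open orbit case).  From the
    eigendecomposition z = u diag(w) u^T, sorted so that positive eigenvalues
    come first, take h = |w|^{-1/2} u^T.
    """
    w, u = np.linalg.eigh(z)
    order = np.argsort(-w)                    # positive eigenvalues first
    w, u = w[order], u[:, order]
    return np.diag(1.0 / np.sqrt(np.abs(w))) @ u.T

n = 4
rng = np.random.default_rng(2)
z = rng.standard_normal((n, n))
z = z + z.T                                   # a generic symmetric matrix
p, q = signature(z)

h = reduce_to_normal_form(z)
I_pq = np.diag([1.0] * p + [-1.0] * q)
assert np.allclose(h @ z @ h.T, I_pq)

# The signature is invariant under the unimodular action z -> g z g^T.
g = rng.standard_normal((n, n)) + n * np.eye(n)   # generically invertible
assert signature(g @ z @ g.T) == (p, q)
```

The two assertions illustrate, respectively, that a generic $ z $ lies in the orbit of $ I_{p,q} $ and that the signature is a complete invariant of the open orbits.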
Since $ H $ is a symmetric subgroup of $ L $, i.e., the fixed point subgroup of an involutive automorphism of $ L $, this problem is ubiquitous in the representation theory of real reductive Lie groups. Let us consider the $ d $-dimensional subspace $ U = \spanr{ \eb_1, \eb_2, \dots , \eb_d } \in \Grass_{d}(\R^n) $, which is stabilized by $ Q $. Take $ z \in \Omega(p,q) $, and consider the quadratic form $ \qformQ{z^{-1}}(v, v) = \transpose{v} z^{-1} v \; (v \in \R^n) $ associated to $ z^{-1} $, which has the same signature $ (p, q) $ as $ z $. Note that the restriction of $ \qformQ{z^{-1}} $ to $ U $ can be degenerate, and that the rank and the signature of $ \qformQ{z^{-1}} \restrict_U $ are preserved by the action of $ Q $. In fact, for $ u \in U $ and $ m \in Q $, we get \begin{align*} \qformQ{(m \cdot z)^{-1}}(u, u) &= \transpose{u} (m z \transpose{m})^{-1} u = \transpose{u} (\transpose{m}^{-1} z^{-1} m^{-1}) u \\ &= \transpose{(m^{-1} u)} z^{-1} (m^{-1} u) = \qformQ{z^{-1}}(m^{-1} u, m^{-1} u) . \end{align*} Since $ m^{-1} \in Q $ preserves $ U $, the quadratic forms $ \qformQ{z^{-1}} $ and $ \qformQ{(m \cdot z)^{-1}} $ have the same rank and signature when restricted to $ U $. So these are invariants of a $ Q $-orbit in $ \Omega(p,q) $. Put \begin{equation} \label{eq:definition-Omega-pq-st} \Omega(p, q; s, t) = \{ z \in \Omega(p,q) \mid \text{$ \qformQ{z^{-1}} \restrict_U $ has signature $ (s, t) $} \}, \end{equation} where $ s + t $ is the rank of $ \qformQ{z^{-1}} \restrict_U $. Clearly $ 0 \leq s \leq p, \, 0 \leq t \leq q $ and $ s + t \leq d $ must be satisfied. \begin{lemma}\label{lemma:classification-Q-orbits-Omega-pq} The $ Q $-orbits in $ \Omega(p,q) $ are exactly the sets \begin{equation*} \{ \Omega(p, q; s, t) \mid s, t \geq 0, \, t + p \geq d, \, s + q \geq d, \, s + t \leq d \} \end{equation*} given in \eqref{eq:definition-Omega-pq-st}. 
The orbit $ \Omega(p, q; s, t) $ is open if and only if $ s + t = d = \dim U $, i.e., the quadratic form $ \qformQ{z^{-1}} $ is non-degenerate when restricted to $ U $. \end{lemma} \begin{proof} The restriction $ \qformQ{z^{-1}} \restrict_U $ is a quadratic form, and we denote its signature by $ (s, t) $. The rank of $ \qformQ{z^{-1}} \restrict_U $ is $ s + t $, and $ k = d - (s + t) $ is the dimension of the kernel. Obviously, we must have $ 0 \leq s, t , k \leq d $. Since $ \qformQ{z^{-1}} $ is non-degenerate with signature $ (p, q) $, we obtain the constraints \begin{equation*} s + k \leq p, \quad t + k \leq q. \end{equation*} These conditions are equivalent to those given in the lemma. The signature $ (s, t) $, and hence the dimension $ k $ of the kernel, is invariant under the action of $ Q $. Conversely, if a $ d $-dimensional subspace $ U_1 $ of the quadratic space $ \R^n $ has the same signature $ (s, t) $ (and hence the same $ k $), it can be mapped to $ U $ by an element of the isometry group $ \OO(p,q) $, by Witt's theorem. This means that the signature completely classifies the $ Q $-orbits. \end{proof} This lemma in effect classifies the open $ L $-orbits on $ X = \Lambda \times \Xi_d $; however, we restate it more intrinsically. Firstly, we note that, for $ z \in \Sym_n(\R) $, a Lagrangian subspace $ \lambda \in \Lambda = G/ P_S $ in the open Bruhat cell $ P_S w_0 P_S/P_S \simeq N_{w_0} $ is given by \begin{equation*} \lambda = \{ v = \vectwo{z x}{x} \mid x \in \R^n \} , \end{equation*} and clearly such $ z $ is uniquely determined by $ \lambda $. We denote this Lagrangian subspace by $ \lambda_z $. Also, we denote a $ d $-dimensional subspace in $ \Xi_d = \Grass_d(\R^n) $ by $ \xi $. \begin{theorem}\label{theorem:classification-open-orbits-dfv} Suppose that the non-negative integers $ p,q $ and $ s, t $ satisfy \begin{equation} p + q = n, \; s + t = d, \; 0 \leq s \leq p, \; 0 \leq t \leq q. 
\end{equation} Then an open $ L $-orbit in $ X = \Lambda \times \Xi_d $ is given by \begin{equation*} \calorbit(p, q; s, t) = \{ (\lambda_z, \xi) \in \Lambda \times \Xi_d \mid \sign(z) = (p, q), \sign(\qformQ{z^{-1}} \restrict_\xi) = (s, t) \} . \end{equation*} Every open orbit is of this form. \end{theorem} \section{Relative invariants} Let us consider the vector space $ \Sym_n(\R) \times \Mat_{n,d}(\R) $, on which $ \GL_n(\R) \times \GL_d(\R) $ acts. The action is given explicitly as \begin{equation*} \begin{aligned} (h, m) \cdot (z, y) &= (h z \transpose{h}, h y \transpose{m}) \quad \\ & ((h, m) \in \GL_n(\R) \times \GL_d(\R), \; (z, y) \in \Sym_n(\R) \times \Mat_{n,d}(\R) ). \end{aligned} \end{equation*} Let us put $ \regMat_{n,d}(\R) := \{ y \in \Mat_{n,d}(\R) \mid \rank y = d \} $, the subset of full rank matrices in $ \Mat_{n,d}(\R) $. Then the map $ \pi : \regMat_{n,d}(\R) \to \Xi_d = \Grass_d(\R^n) $ defined by $ \pi(y) := \spanr{ y_j \mid 1 \leq j \leq d } $ ($ y_j $ denotes the $ j $-th column vector of $ y $) is the quotient map by the action of $ \GL_d(\R) $. Thus we get a diagram: \begin{equation*} \xymatrix @R+10pt @C+10pt @M+5pt @L+3pt { \Sym_n(\R) \times \Mat_{n,d}(\R) \ar@{<-^)}[r]^{\text{open}} & \Sym_n(\R) \times \regMat_{n,d}(\R) \ar[d]^{/\GL_d(\R)} \\ & \Sym_n(\R) \times \Xi_d \ar@{^(->}[r]^(.6){\text{open}} & \Lambda \times \Xi_d } \end{equation*} Compared with the Grassmannian, the vector space $ \Sym_n(\R) \times \Mat_{n,d}(\R) $ is easier to handle. In particular, we introduce two basic relative invariants $ \psi_1 $ and $ \psi_2 $ in $ (z, y) \in \Sym_n(\R) \times \Mat_{n,d}(\R) $ with respect to the above linear action, \begin{equation*} \psi_1(z, y) = \det z, \qquad \psi_2(z, y) = \det z \cdot \det (\transpose{y} z^{-1} y) . \end{equation*} Note that \begin{equation*} \psi_2(z, y) = (-1)^d \det \begin{pmatrix} z & y \\ \transpose{y} & 0 \end{pmatrix}, \end{equation*} so that it is actually a polynomial. 
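The displayed identity for $ \psi_2 $ follows from the Schur complement of the invertible block $ z $. A small numerical check (sizes and random data are illustrative, not part of the argument):

```python
import numpy as np

# Check: det(z) * det(y^T z^{-1} y) = (-1)^d * det([[z, y], [y^T, 0]]),
# an instance of the Schur-complement determinant formula.
rng = np.random.default_rng(1)
n, d = 6, 2

for _ in range(50):
    a = rng.standard_normal((n, n))
    z = a + a.T                               # random symmetric z, generically invertible
    y = rng.standard_normal((n, d))
    psi2 = np.linalg.det(z) * np.linalg.det(y.T @ np.linalg.solve(z, y))
    bordered = np.block([[z, y], [y.T, np.zeros((d, d))]])
    assert np.isclose(psi2, (-1) ** d * np.linalg.det(bordered), rtol=1e-6)
```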
We consider two characters of $ (h, m) \in \GL_n(\R) \times \GL_d(\R) $: \begin{equation}\label{eq:characters-chi1-and-chi2} \chi_1(h,m) = (\det h)^2 , \quad \chi_2(h, m) = (\det h)^2 (\det m)^2 . \end{equation} Then it is easy to check that the relative invariants $ \psi_1, \psi_2 $ transform under the characters $ \chi_1^{-1}, \chi_2^{-1} $ respectively. Let us define \begin{align*} \widetilde{\Omega} &= \{ (z, y) \in \Sym_n(\R) \times \Mat_{n,d}(\R) \mid \psi_1(z, y) \neq 0, \; \psi_2(z, y) \neq 0 \} , \\ \widetilde{\Omega}(p, q; s, t) &= \{ (z, y) \in \Sym_n(\R) \times \Mat_{n,d}(\R) \mid \sign(z) = (p, q), \; \sign(\transpose{y} z^{-1} y) = (s, t) \} . \end{align*} The set $ \widetilde{\Omega} $ is clearly open and is a union of open $ \GL_n(\R) \times \GL_d(\R) $-orbits in $ \Sym_n(\R) \times \Mat_{n,d}(\R) $. \begin{theorem} The sets $ \widetilde{\Omega}(p, q; s, t) $, where \begin{equation} \label{eq:open-condition-of-pqst} p + q = n, \;\; s + t = d, \;\; 0 \leq s \leq p, \;\; 0 \leq t \leq q , \end{equation} are open $ \GL_n(\R) \times \GL_d(\R) $-orbits, and they exhaust all the open orbits in $ \Sym_n(\R) \times \Mat_{n,d}(\R) $, i.e., \begin{equation*} \widetilde{\Omega} = \coprod\nolimits_{p,q,s,t} \widetilde{\Omega}(p, q; s, t) , \end{equation*} where the union is taken over $ p, q, s,t $ satisfying \eqref{eq:open-condition-of-pqst}. Moreover, the quotient $ \widetilde{\Omega}(p, q; s, t) /\GL_d(\R) $ is isomorphic to $ \calorbit(p, q; s, t) $, an open $ L $-orbit in the double flag variety $ X = \Lambda \times \Xi_d $. \end{theorem} This theorem is just a paraphrase of Theorem~\ref{theorem:classification-open-orbits-dfv}. Since relative invariants are polynomials, we can consider them on the complexified vector space $ \Sym_n(\C) \times \Mat_{n,d}(\C) $. In the rest of this section, we will study them on this complexified vector space, which we denote simply by $ \Sym_n \times \Mat_{n,d} $, omitting the base field. 
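The constraints \eqref{eq:open-condition-of-pqst} can also be observed numerically: for generic real $ (z, y) $ the two signatures automatically satisfy $ p + q = n $, $ s + t = d $, $ 0 \leq s \leq p $, $ 0 \leq t \leq q $. The sketch below is an illustrative sampling experiment, not a proof.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 5, 3

def signature(m, tol=1e-9):
    """Return (#positive, #negative) eigenvalues of a symmetric matrix."""
    eig = np.linalg.eigvalsh(m)
    return int(np.sum(eig > tol)), int(np.sum(eig < -tol))

for _ in range(200):
    a = rng.standard_normal((n, n))
    z = a + a.T                                   # generic symmetric z
    y = rng.standard_normal((n, d))               # generic full-rank y
    p, q = signature(z)
    s, t = signature(y.T @ np.linalg.solve(z, y)) # restricted form y^T z^{-1} y
    assert p + q == n and s + t == d              # non-degeneracy (open-orbit) conditions
    assert 0 <= s <= p and 0 <= t <= q            # signature constraints
```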
Similarly, we use $ \GL_n = \GL_n(\C) $, etc., for algebraic groups over $ \C $. Recall the characters $ \chi_1, \chi_2 $ of $ \GL_n \times \GL_d $ in \eqref{eq:characters-chi1-and-chi2}. The following theorem should be well known to experts, but we need its proof to obtain further results. \begin{theorem} The $ \GL_n \times \GL_d $-module $ \Pol(\Sym_n \times \Mat_{n,d}) $ contains a unique non-zero relative invariant $ f(z, y) $ with character $ \chi_1^{-m_1} \chi_2^{-m_2} \;\; (m_1, m_2 \geq 0) $ up to a non-zero scalar multiple. This relative invariant is explicitly given by $ f(z, y) = (\det z)^{m_1 + m_2} (\det (\transpose{y} z^{-1} y))^{m_2} $. \end{theorem} \begin{proof} In this proof, to avoid notational complexity, we consider the dual action \begin{equation*} (h, m) \cdot (z, y) = (\transpose{h}^{-1} z h^{-1}, \transpose{h}^{-1} y m^{-1}) \quad ((z, y) \in \Sym_n \times \Mat_{n,d}, (h, m) \in \GL_n \times \GL_d) . \end{equation*} Translating the results here to the original action is easy. First, we quote results on the structure of the polynomial rings over $ \Sym_n $ and $ \Mat_{n,d} $. Let us denote the irreducible finite dimensional representation of $ \GL_n $ with highest weight $ \lambda $ by $ V^{(n)}(\lambda) $ (if $ n $ is clear from the context, we simply write $ V(\lambda) $). \begin{lemma}\label{lemma:irr-decomp-sym-mat} \begin{thmenumerate} \item As a $ \GL_n $-module, $ \Sym_n $ is multiplicity free, and the irreducible decomposition of the polynomial ring is given by \begin{equation} \Pol(\Sym_n) \simeq \bigoplus_{\mu \in \partition_n} V^{(n)}(2 \mu) . \end{equation} \item Assume that $ n \geq d \geq 1 $. As a $ \GL_n \times \GL_d $-module, $ \Mat_{n,d} $ is also multiplicity free, and the irreducible decomposition of the polynomial ring is given by \begin{equation} \Pol(\Mat_{n,d}) \simeq \bigoplus_{\lambda \in \partition_d} V^{(n)}(\lambda) \otimes V^{(d)}(\lambda) . 
\end{equation} \end{thmenumerate} \end{lemma} Since we are looking for relative invariants for $ \GL_d $, such an invariant must belong to the one-dimensional representation space $ {\det}_d^{\ell} = V^{(d)}(\ell \varpi_d) $, where $ \varpi_d = (1, \dots, 1, 0, \dots, 0) $ denotes the $ d $-th fundamental weight. Thus it must be contained in the space \begin{equation} \bigl( V^{(n)}(2 \mu) \otimes V^{(n)}(\ell \varpi_d) \bigr) \otimes V^{(d)}(\ell \varpi_d) \subset \Pol(\Sym_n \times \Mat_{n,d}). \end{equation} Since a relative invariant is also contained in a one-dimensional representation of $ \GL_n $, say $ {\det}_n^k = V^{(n)}(k \varpi_n) $, the tensor product $ V^{(n)}(2 \mu) \otimes V^{(n)}(\ell \varpi_d) $ must contain $ V^{(n)}(k \varpi_n) $. We argue \begin{equation*} \begin{aligned} V^{(n)}(2 \mu) \otimes & V^{(n)}(\ell \varpi_d) \supset V^{(n)}(k \varpi_n) \\ & \iff 2 \mu - k \varpi_n = (\ell \varpi_d)^{\ast} = \ell \varpi_{n - d} - \ell \varpi_n \\ & \iff 2 \mu = (k - \ell) \varpi_n + \ell \varpi_{n - d}. \end{aligned} \end{equation*} Thus, $ \ell \geq 0 $ and $ k - \ell \geq 0 $ are both even integers, and the last equation completely determines $ \mu $. So the relative invariant is unique (up to a scalar multiple) once we fix the character $ {\det_n}^k {\det_d}^{\ell} = \chi_1^{(k - \ell)/2} \chi_2^{\ell/2} $. \end{proof} \begin{corollary}\label{cor:finite-dim-rep-generated-by-kernel} Let us consider the relative invariant \begin{equation*} f(z, y) = (\det z)^{m_1 + m_2} (\det (\transpose{y} z^{-1} y))^{m_2} \qquad (m_1, \, m_2 \geq 0) \end{equation*} in the above theorem. \begin{thmenumerate} \item The space $ \spanc{ f(z, y) \mid z \in \Sym_n } \subset \Pol(\Mat_{n,d}) $ is stable under $ \GL_n \times \GL_d $ and it is isomorphic to $ V^{(n)}(2 m_2 \varpi_d)^{\ast} \otimes V^{(d)}(2 m_2 \varpi_d)^{\ast} $ as a $ \GL_n \times \GL_d $-module. 
\item Similarly, the space $ \spanc{ f(z, y) \mid y \in \Mat_{n,d} } \subset \Pol(\Sym_n) $ is stable under $ \GL_n $ and it is isomorphic to $ V^{(n)}(2 m_1 \varpi_n + 2 m_2 \varpi_{n - d})^{\ast} $. \end{thmenumerate} \end{corollary} \begin{proof} We have shown that \begin{equation*} f(z, y) \in \bigl( V^{(n)}(2 \mu) \otimes V^{(n)}(\ell \varpi_d) \bigr) \otimes V^{(d)}(\ell \varpi_d) \subset \Pol(\Sym_n \times \Mat_{n,d}) , \end{equation*} where $ k = 2 m_1 + 2 m_2 , \; \ell = 2 m_2 $ and $ 2 \mu = (k - \ell) \varpi_n + \ell \varpi_{n - d} $. For any specialization of $ y $, this space is mapped to $ V^{(n)}(2 \mu) $ (or possibly to zero), and if we specialize $ z $ to some symmetric matrix, it is mapped to $ V^{(n)}(\ell \varpi_d) \otimes V^{(d)}(\ell \varpi_d) $. This shows the results. \end{proof} Although we do not need the following lemma below, it is helpful to know the explicit formula for $ \det (\transpose{y} z^{-1} y) $. Note that we take $ z $ instead of $ z^{-1} $ in the lemma. \begin{lemma} Let $ [n] = \{ 1, 2, \dots, n \} $ and put $ \binom{[n]}{d} := \{ I \subset [n] \mid \# I = d \} $, the family of $ d $-element subsets of $ [n] $. For $ X \in \Mat_n $ and $ I, J \in \binom{[n]}{d} $, we denote by $ X_{I, J} := ( x_{i, j} )_{i \in I, j \in J} $ the corresponding $ d \times d $-submatrix of $ X $. For $ (z, y) \in \Sym_n \times \Mat_{n,d} $, we have \begin{equation*} \det (\transpose{y} z y) = \sum_{I, J \in \binom{[n]}{d}} \det(z_{IJ}) \, \det ((y \transpose{y})_{IJ}) = \sum_{I, J \in \binom{[n]}{d}} \det(z_{IJ}) \, \det (y_{I, [d]}) \, \det (y_{J, [d]}) . \end{equation*} \end{lemma} We observe that $ \{ \det (y_{I, [d]}) \mid I \in \binom{[n]}{d} \} $ are the Pl\"{u}cker coordinates, and $ \{ \det (z_{IJ}) \mid I, J \in \binom{[n]}{d} \} $ are coordinates for the determinantal variety of rank $ d $ (though they are highly redundant). 
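The expansion in the lemma is the Cauchy-Binet formula applied twice. The following illustrative check (our own sizes and random data) compares both sides numerically:

```python
import numpy as np
from itertools import combinations

# Check: det(y^T z y) = sum_{I,J} det(z_{I,J}) det(y_{I,:}) det(y_{J,:}),
# where I, J run over d-element subsets of {0, ..., n-1}.
rng = np.random.default_rng(3)
n, d = 5, 2
a = rng.standard_normal((n, n))
z = a + a.T                                # random symmetric z
y = rng.standard_normal((n, d))

lhs = np.linalg.det(y.T @ z @ y)
rhs = sum(
    np.linalg.det(z[np.ix_(I, J)])         # minor z_{I,J}
    * np.linalg.det(y[list(I), :])         # Pluecker coordinate for I
    * np.linalg.det(y[list(J), :])         # Pluecker coordinate for J
    for I in combinations(range(n), d)
    for J in combinations(range(n), d)
)
assert np.isclose(lhs, rhs, rtol=1e-8)
```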
\section{Degenerate principal series representations} Let us return to the situation over the real numbers, and introduce degenerate principal series representations for $ G = \Sp_{2n}(\R) $ and $ L = \GL_n(\R) $ respectively. \subsection{Degenerate principal series for $ G/P_S $} Let us recall $ G = \Sp_{2n}(\R) $ and its maximal \psg $ P_S $. Take a character $ \chiPS $ of $ P_S $, and consider the degenerate principal series representation \begin{equation*} \CinfInd_{P_S}^G \chiPS := \{ f : G \to \C : C^{\infty} \mid f(g p) = \chiPS(p)^{-1} f(g) \;\; (g \in G, p \in P_S) \} , \end{equation*} where $ G $ acts by left translations: $ \pi_{\nu}^G(g) f(x) = f(g^{-1} x) $. In the following, we will take \begin{equation}\label{eq:character.chi.PS.nu} \chiPS(p) = |\det a|^{\nu} \qquad \text{ for } \quad p = \mattwo{a}{w}{0}{\transpose{a}^{-1}} \in P_S . \end{equation} (We could also multiply $ \chiPS $ by the sign character $ \sgn(\det a) $, if we preferred.) Since $ \Sym_n(\R) $ is openly embedded into $ G/P_S $, a function $ f \in \CinfInd_{P_S}^G \chiPS $ is determined by its restriction $ f\restrict_{\Omega} $, where $ \Omega $ is the embedded image of $ \Sym_n(\R) $ in $ G/P_S $. Explicitly, $ \Omega $ is defined by \begin{equation*} \Omega = \left\{ J \mattwo{1}{0}{z}{1} P_S/P_S \mid z \in \Sym_n(\R) \right\} , \qquad J = \mattwo{0}{-1_n}{1_n}{0} = w_0 , \end{equation*} where $ w_0 $ is the longest element in the Weyl group, and we give an open embedding by \begin{equation} \xymatrix @R-30pt @C+10pt @M+5pt @L+3pt { \Sym_n(\R) \ar[r] & \Omega \ar@{^(->}[r]^{\text{open}} & G/P_S \\ z \ar@{|->}[r] & J \mattwo{1}{0}{z}{1} P_S/P_S } \end{equation} In the following we mainly identify $ \Sym_n(\R) $ and $ \Omega $. Let us give the fractional linear action of $ G $ on $ \Sym_n(\R) $ in our setting. 
\begin{lemma} In the above identification, the fractional linear action $ \lfa{g}{z} $ of $ g = \mattwo{a}{b}{c}{d} \in G $ on $ z \in \Sym_n(\R) = \Omega $ is given by \begin{equation} \lfa{g}{z} = - ( a z - b)(c z - d)^{-1} \in \Sym_n(\R), \end{equation} provided $ \det (c z - d) \neq 0 $. \end{lemma} \begin{proof} By the identification, $ w = \lfa{g}{z} $ corresponds to $ g J \mattwo{1}{0}{z}{1} P_S/P_S $. We can calculate it as \begin{align*} g J \mattwo{1}{0}{z}{1} &= J (J^{-1} g J) \mattwo{1}{0}{z}{1} = J \mattwo{d}{-c}{-b}{a} \mattwo{1}{0}{z}{1} \\ &= J \mattwo{d - c z}{-c}{-b + a z}{a} = J \mattwo{1}{0}{w}{1} \mattwo{d - c z}{-c}{0}{u}, \intertext{where} w &= (a z - b)(d - c z)^{-1} \quad \text{ and } \quad u = a + w c = \transpose{(d - c z)}^{-1}. \end{align*} This proves the desired formula. \end{proof} \begin{lemma} For $ f \in \CinfInd_{P_S}^G \chiPS $, the action of $ \pi_{\nu}^G(g) $ on $ f $ is given by \begin{equation*} \pi_{\nu}^G(g) f(z) = |\det(a + z c)|^{- \nu} f(\lfa{g^{-1}}{z}) \qquad (g = \mattwo{a}{b}{c}{d} \in G, \;\; z \in \Sym_n(\R)), \end{equation*} where $ \chiPS(p) $ is given in \eqref{eq:character.chi.PS.nu}. In particular, for $ h = \mattwo{a}{0}{0}{\transpose{a}^{-1}} \in L $, we get \begin{equation*} \pi_{\nu}^G(h) f(z) = |\det(a)|^{- \nu} f(a^{-1} z \transpose{a}^{-1}) . \end{equation*} \end{lemma} We want to discuss the completion of the $ C^{\infty} $-version of the degenerate principal series $ \CinfInd_{P_S}^G \chiPS $ to a representation on a Hilbert space. Usually, this is achieved via the compact picture, but here we use the noncompact picture. To do so, we need an elementary decomposition theorem. Here we write $ P_S = L N_S $ and $ L = M A $, where $ N_S $ is the unipotent radical of $ P_S $ (we sometimes write $ N $ for $ N_S $), $ M = \SL^{\pm}_n(\R) $, and $ A = \R_+ $. Further, we denote $ M_K = M \cap K = \OO(n) $. 
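As a numerical sanity check that the formula of the first lemma indeed defines an action, the sketch below verifies the cocycle property $ \lfa{g_1 g_2}{z} = \lfa{g_1}{\lfa{g_2}{z}} $ for symplectic matrices built from the standard generators. All helper names and the well-conditioned sampling are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3

def lfa(g, z):
    """Fractional linear action g.z = -(a z - b)(c z - d)^{-1} on Sym_n(R)."""
    a, b = g[:n, :n], g[:n, n:]
    c, d = g[n:, :n], g[n:, n:]
    return -(a @ z - b) @ np.linalg.inv(c @ z - d)

def sym(scale=0.2):
    """Small random symmetric matrix (keeps every step invertible)."""
    w = rng.standard_normal((n, n))
    return scale * (w + w.T)

J = np.block([[np.zeros((n, n)), -np.eye(n)], [np.eye(n), np.zeros((n, n))]])

def random_symplectic():
    """Product of standard generators: diag(a, a^{-T}) * unipotent(s) * J."""
    a = 3 * np.eye(n) + 0.3 * rng.standard_normal((n, n))
    diag = np.block([[a, np.zeros((n, n))], [np.zeros((n, n)), np.linalg.inv(a).T]])
    upper = np.block([[np.eye(n), sym()], [np.zeros((n, n)), np.eye(n)]])
    return diag @ upper @ J

g1, g2 = random_symplectic(), random_symplectic()
assert np.allclose(g1.T @ J @ g1, J)                        # g1 is symplectic
z = np.eye(n) + sym()                                       # invertible symmetric z
assert np.allclose(lfa(g1 @ g2, z), lfa(g1, lfa(g2, z)), atol=1e-8)
```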
For the opposite Siegel parabolic subgroup $ \conjugate{P_S} $, we denote its Langlands decomposition by $ \conjugate{P_S} = M A \conjugate{N_S} $. Thus we conclude $ \conjugate{N_S} M A N_S \subset K M A N_S = G $ (open embedding). Every $ g \in G $ can be written as $ g = k m a n \in K M A N_S $, and we call this a generalized Iwasawa decomposition by abuse of terminology. The decomposition $ g = k m a n $ may not be unique, but if we require $ m a = \mattwo{h}{0}{0}{\transpose{h}^{-1}} $ for some $ h \in \Sym_n^+(\R) $, it is indeed unique. This follows from the facts that the decomposition $ M = \OO(n) \cdot \Sym_n^+(\R) $ is unique (Cartan decomposition), and that $ M_K = K \cap M = \OO(n) $. Now we describe an explicit Iwasawa decomposition of elements in $ \conjugate{N_S} $. \begin{lemma} Let $ v(z) := \mattwo{1}{0}{z}{1} \in \conjugate{N_S} \; (z \in \Sym_n(\R)) $ and denote $ h := \wsqrt{1_n + z^2} \in \Sym_n^+(\R) $, a positive definite symmetric matrix. Then we have the Iwasawa decomposition \begin{align*} v(z) &= k m a n \in K M A N = G , \quad \text{ where } \\ k &= h^{-1} \mattwo{1}{-z}{z}{1} = \mattwo{h^{-1}}{- h^{-1} z}{h^{-1}z}{h^{-1}}, & h &= \wsqrt{1_n + z^2} , \\ m a &= \mattwo{h}{0}{0}{\transpose{h}^{-1}}, & n &= \mattwo{1}{\transpose{h}^{-1} z h^{-1}}{0}{1} \\ a &= \alpha 1_n, \;\; \alpha = (\det(1 + z^2))^{\frac{1}{2n}} & & \end{align*} \end{lemma} \begin{proof} Since \begin{equation*} \mattwo{1}{z}{-z}{1} \mattwo{1}{0}{z}{1} = \mattwo{1 + z^2}{z}{0}{1}, \end{equation*} we get (putting $ h = \wsqrt{1 + z^2} $) \begin{equation*} \wfrac{1}{\wsqrt{1 + z^2}} \mattwo{1}{z}{-z}{1} \mattwo{1}{0}{z}{1} = \mattwo{h}{h^{-1} z}{0}{h^{-1}} = \mattwo{h}{0}{0}{\transpose{h}^{-1}} \mattwo{1}{\transpose{h}^{-1} z h^{-1}}{0}{1}. \end{equation*} Notice that $ \wfrac{1}{\wsqrt{1 + z^2}} \mattwo{1}{z}{-z}{1} $ is in $ K $, and its inverse is given by the $ k $ in the statement of the lemma. The rest of the statements are easy to derive. 
\end{proof} Since $ \conjugate{N_S} M A N_S $ is open dense in $ G $, $ f \in \CinfInd_{P_S}^G \chiPS $ is determined by $ f \restrict_{\conjugate{N_S}} $. We complete the space of functions on $ \conjugate{N_S} $, or on $ \Sym_n(\R) $, with respect to the measure $ (\det(1 + z^2))^{\nu_0 - \frac{n + 1}{2}} dz $, where $ \nu_0 = \Re \nu $ and $ dz $ denotes the usual Lebesgue measure, in order to get a representation on a Hilbert space. See \cite[\S~VII.1]{Knapp.redbook} for details (we use unnormalized induction, so that there is a shift by $ \rho_{P_S}(a) = |\det (\Ad(a)\restrict_{N_S})|^{1/2} = |\det a|^{\frac{n + 1}{2}} $). Thus our Hilbert space is \begin{equation}\label{eq:def-Hilbert-rep-Gnu} \begin{aligned} \HilbGnu &:= \{ f : \Sym_n(\R) \to \C \mid \normGnu{f}^2 < \infty \} , \quad \text{ where } \\ \normGnu{f}^2 &:= \int_{\Sym_n(\R)} |f(z)|^2 (\det(1 + z^2))^{\nu_0 - \frac{n + 1}{2}} dz . \end{aligned} \end{equation} We denote the induced representation $ \Ind_{P_S}^G \chiPS $ on the Hilbert space $ \HilbGnu $ by $ \pi_{\nu}^G $. \begin{remark} The degenerate principal series $ \Ind_{P_S}^G \chiPS $ induced from the character $ \chiPS(p) = |\det a|^{\nu} $ (cf.~Eq.~\eqref{eq:character.chi.PS.nu}) has the unitary axis at $ \nu_0 = \frac{n + 1}{2} $. If $ n $ is even, there exist complementary series for real $ \nu $ satisfying $ \frac{n}{2} < \nu < \frac{n}{2} + 1 $ (see \cite[Th.~4.3]{Lee.1996}). \end{remark} \subsection{Degenerate principal series for $ L/Q $} \label{subsection:deg-principal-series-LQ} In this subsection, we fix notation for the degenerate principal series of $ L = \GL_{n}(\R) $ induced from its maximal \psg $ Q = P_{(d, n -d)}^{\GL} $. We will denote \begin{equation}\label{eq:character-chiQ} q = \mattwo{k}{q_{12}}{0}{k'} \in Q, \quad \text{ and } \quad \chiQ(q) = |\det k|^{\mu}. 
\end{equation} Then $ \chiQ $ is a character of $ Q $, and we consider the degenerate principal series representation \begin{equation*} \CinfInd_Q^L \chiQ := \{ F : L \to \C : C^{\infty} \mid F(a q) = \chiQ(q)^{-1} F(a) \;\; (a \in L, q \in Q) \} , \end{equation*} where $ L $ acts by left translations: $ \pi_{\mu}^L(a) F(Y) = F(a^{-1} Y) \;\; (a, Y \in L) $. We introduce an $L^2$-norm on this space via the usual integral over the maximal compact subgroup $ K_L = K \cap L = \OO(n) $: \begin{equation}\label{eq:L2norm-integral-over-K} \normLmu{F}^2 := \int_{K_L} |F(k)|^2 dk \qquad (F \in \CinfInd_Q^L \chiQ), \end{equation} and take the completion with respect to this norm to get a Hilbert space $ \HilbLmu $. Note that the integrand is in fact well defined on $ K_L/(K \cap Q) \simeq \OO(n)/(\OO(d) \times \OO(n - d)) $, because of the right equivariance of $ F $. Thus we get a representation $ \pi_{\mu}^L =\Ind_Q^L \chi_Q $ on the Hilbert space $ \HilbLmu $. To make the definition of the intertwiners easier to handle, we unfold the Grassmannian $ L/Q \simeq \Grass_d(\R^n) $. Recall $ \regMat_{n,d}(\R) = \{ y \in \Mat_{n,d}(\R) \mid \rank y = d \} $. Then, we get a map \begin{equation}\label{eq:projection-to-Matnd} \xymatrix @R-30pt @C+10pt @M+5pt @L+3pt { L = \GL_n(\R) \ar[rr] & & \regMat_{n,d}(\R) \\ Y = \mattwo{y_1}{y_3}{y_2}{y_4} \ar@{|->}[rr] & & y = \vectwo{y_1}{y_2} } \end{equation} which induces an isomorphism $ \Xi_d = L/Q \xrightarrow{\;\sim \;} \regMat_{n,d}(\R)/\GL_d(\R) $. Thus we can identify $ \CinfInd_Q^L \chi_Q $ with the space of $ C^{\infty} $ functions $ F : \regMat_{n,d}(\R) \to \C $ with the property $ F(yk) = |\det k|^{-\mu} F(y) \;\; (k \in \GL_d(\R)) $. In this picture, the action of $ L $ is just the left translation: \begin{equation*} \pi_{\mu}^L(a) F(y) = F(a^{-1} y) \qquad (y \in \regMat_{n,d}(\R), \; a \in \GL_n(\R) = L). 
\end{equation*} To recover the $L^2$-norm defined in \eqref{eq:L2norm-integral-over-K}, we restrict the projection map \eqref{eq:projection-to-Matnd} from $ L = \GL_n $ to $ K_L = \OO(n) $, the resulting space being the Stiefel manifold of orthonormal frames $ \Stiefelnd = \{ y \in \regMat_{n,d} \mid \transpose{y} y = 1_d \} $. Then $ L/Q $ is isomorphic to $ \Stiefelnd / \OO(d) $. The norm given in \eqref{eq:L2norm-integral-over-K} is equal to \begin{equation*} \normLmu{F}^2 = \int_{\Stiefelnd} |F(v)|^2 d\sigma(v) , \end{equation*} where $ d\sigma(v) $ is the $ \OO(n) $-invariant measure, suitably normalized. Note that $ \Stiefelnd \simeq \OO(n)/\OO(n - d) $. \begin{remark} The degenerate principal series $ \Ind_Q^L \chiQ $ induced from the character $ \chiQ(q) = |{\det k}|^{\mu} $ (cf.~Eq.~\eqref{eq:character-chiQ}) is never unitary as a representation of $ \GL_n(\R) $. However, if we restrict it to $ \SL_n(\R) $, it has the unitary axis at $ \mu_0 = \frac{n - d}{2} $. In addition, there exist complementary series for real $ \mu $ in the interval $ \frac{n - d}{2} - 1 < \mu < \frac{n - d}{2} + 1 $ (see \cite[\S~3.5]{Howe.Lee.1999}). \end{remark} \begin{remark} If one prefers a fractional linear action, one should normalize the block $ y_1 $ to $ 1_d $. Thus we get \begin{equation*} a y = \mattwo{a_1}{a_2}{a_3}{a_4} \vectwo{1}{y_2} = \vectwo{a_1 + a_2 y_2}{a_3 + a_4 y_2} \equiv \vectwo{1}{(a_3 + a_4 y_2)(a_1 + a_2 y_2)^{-1}} \end{equation*} modulo the right action of $ \GL_d(\R) $, and the fractional linear action is given by \begin{equation*} y_2 \mapsto (a_3 + a_4 y_2)(a_1 + a_2 y_2)^{-1} \qquad ( y_2 \in \Mat_{n - d, d}(\R) ). 
\end{equation*} \end{remark} \section{Intertwiners between degenerate principal series representations}\label{section:intertwiners} In this section, we consider the following kernel function \begin{equation}\label{eq:kernel.function.K.alpha.beta} \begin{aligned} K^{\alpha, \beta}(z, y) & := |\det(z)|^{\alpha} \, |\det (\transpose{y} z^{-1} y)|^{\beta} = |\det(z)|^{\alpha - \beta} \, \bigl|\det \mattwo{z}{y}{\transpose{y}}{0} \bigr|^{\beta} \\ & \qquad \qquad ((z, y) \in \Sym_n(\R) \times \Mat_{n,d}(\R)), \end{aligned} \end{equation} with complex parameters $ \alpha, \beta \in \C $. Using this kernel, we aim at defining two integral kernel operators $ \IPtoQ $ and $ \IQtoP $, which intertwine degenerate principal series representations. \subsection{Kernel operator $ \IPtoQ $ from $ \pi_{\nu}^G $ to $ \pi_{\mu}^L $} In this subsection, we define an integral kernel operator $ \IPtoQ $ for $ f \in \CinfInd_{P_S}^G \chiPS $ with compact support in $ \Omega(p,q) $: \begin{equation}\label{eq:integral-kernel-operator-IPtoQ} \IPtoQ f(y) = \int_{\Omega(p,q)} f(z) K^{\alpha, \beta}(z, y) d\omega(z) \qquad (y \in \regMat_{n,d}(\R)) , \end{equation} where $ d\omega(z) $ is an $ L $-invariant measure on the open $ L $-orbit $ \Omega(p,q) \subset \Omega $. So the operator $ \IPtoQ $ depends on the parameters $ p $ and $ q $ as well as $ \alpha $ and $ \beta $. For $ h = \mattwo{a}{0}{0}{\transpose{a}^{-1}} \in L $ and $ f $ above, we have \begin{align*} \IPtoQ (\pi_{\nu}^G(h) f) (y) &= \int_{\Omega(p,q)} \chiPS(a)^{-1} f(a^{-1} z \transpose{a}^{-1}) K^{\alpha, \beta}(z, y) d\omega(z) \\ &= \chiPS(a)^{-1} \int_{\Omega(p,q)} f(z) K^{\alpha, \beta}(a z \transpose{a}, y) d\omega(a z \transpose{a}) \\ &= \chiPS(a)^{-1} \int_{\Omega(p,q)} f(z) |\det(a)|^{2 \alpha} K^{\alpha, \beta}(z, a^{-1} y) d\omega(z) \\ &= |\det(a)|^{2 \alpha - \nu} \int_{\Omega(p,q)} f(z) K^{\alpha, \beta}(z, a^{-1} y) d\omega(z) \\ &= |\det(a)|^{2 \alpha - \nu} \pi_{\mu}^L(a) \IPtoQ f(y) . 
\end{align*} Thus, if $ \nu = 2 \alpha $, we get an intertwiner. In this case, we have $ \IPtoQ f(y k) = |\det(k)|^{2 \beta} \IPtoQ f(y) $, so that \begin{equation*} \IPtoQ f(y) \in \CinfInd_Q^L \chiQ \qquad \text{ for } \quad \chiQ(p) = |\det k|^{- 2\beta} \;\; ( p = \mattwo{k}{\ast}{0}{k'} ), \end{equation*} if it is a $ C^{\infty} $-function on $ L/Q $. To get an intertwiner to $ \pi_{\mu}^L $, we should therefore have $ 2 \beta = - \mu $. As we observed, \begin{equation*} \Lambda = G/P_S \supset \bigcup_{p + q = n} \Omega(p,q) \qquad (\text{open}). \end{equation*} For each $ p, q $, the space $ \HilbGnu(p,q) := L^2(\Omega(p,q), (\det(1 + z^2))^{\nu_0 - \frac{n + 1}{2}} dz) $ is a closed $ L $-stable subspace of $ \HilbGnu $. From the decomposition of the base spaces, we get a direct sum decomposition of $ L $-modules: \begin{equation*} \HilbGnu = \bigoplus_{p + q = n} \HilbGnu(p, q) . \end{equation*} Now we state one of the main theorems of this section. \begin{theorem}\label{thm:conv-integral-operator-P} Let $ \nu_0 := \Re \nu, \; \mu_0 := \Re \mu $ and assume that they satisfy the inequalities \begin{equation}\label{thm:P-ineq-conv-radial} n \nu_0 + d \mu_0 > \dfrac{n (n + 1)}{2} , \qquad n \nu_0 - d \mu_0 > \dfrac{n (n + 1)}{2} , \end{equation} and \begin{equation}\label{thm:P-ineq-conv-spherical} \nu_0 + \mu_0 \geq n + 1, \qquad \mu_0 \leq 0. \end{equation} Put $ \alpha = \nu/2, \; \beta = - \mu/2 $. Then the integral \eqref{eq:integral-kernel-operator-IPtoQ} defining $ \IPtoQ f $ converges and gives a bounded linear operator $ \IPtoQ : \HilbGnu(p,q) \to \HilbLmu $ which intertwines $ \pi_{\nu}^G\restrict_L $ with $ \pi_{\mu}^L $. \end{theorem} The rest of this subsection is devoted to the proof of the theorem above. We mostly omit $ p, q $ when there is no risk of confusion, and we write $ \nu, \mu $ instead of $ \nu_0, \mu_0 $ in the following. Let us evaluate the square $ |\IPtoQ f(y)|^2 $ of the integral pointwise. 
The first evaluation is given by the Cauchy-Schwarz inequality: \begin{align*} &|\IPtoQ f(y)|^2 \\ &\leq \int_{\Omega} |f(z)|^2 (\det(1 + z^2))^{\nu - \frac{n + 1}{2}} dz \int_{\Omega} |K^{\alpha, \beta}(z, y)|^2 (\det(1 + z^2))^{-(\nu - \frac{n + 1}{2})} |\det z|^{-(n + 1)} dz \\ &\leq \normGnu{f}^2 \int_{\Omega} |K^{\alpha, \beta}(z, y)|^2 (\det(1 + z^2))^{-(\nu - \frac{n + 1}{2})} |\det z|^{-(n + 1)} dz , \end{align*} where $ dz $ is the Lebesgue measure on $ \Sym_n(\R) \simeq \R^{\frac{n (n + 1)}{2}} $, and we use $ d\omega(z) = |\det z|^{-\frac{n + 1}{2}} dz$. Since $ \alpha = \nu/2 $ and $ \beta = - \mu/2 $, the second integral becomes \begin{align} \int_{\Omega} & |K^{\alpha, \beta}(z, y)|^2 (\det(1 + z^2))^{-(\nu - \frac{n + 1}{2})} |\det z|^{-(n + 1)} dz \notag \\ &= \int_{\Omega} |\det z|^{\nu + \mu - (n + 1)} \bigl|\det \mattwo{z}{y}{\transpose{y}}{0} \bigr|^{- \mu} (\det(1 + z^2))^{-(\nu - \frac{n + 1}{2})} dz . \label{eq:square-norm-of-K-for-P} \end{align} To evaluate the last integral, we use polar coordinates for $ z $. Namely, we put $ r := \sqrt{\trace z^2} $ and write $ z = r \Theta $. Then $ \trace (\Theta^2) = 1 $, and $ \Omega^{\Theta}(p,q) = \closure{\Omega(p,q)} \cap \{ \Theta \mid \trace (\Theta^2) = 1 \} $ is compact. Using polar coordinates, we get $ \det z = r^n \det \Theta $ and \begin{equation*} \det \mattwo{z}{y}{\transpose{y}}{0} = \det \mattwo{r \Theta}{y}{\transpose{y}}{0} = r^{-2d} \det \mattwo{r \Theta}{ry}{\transpose{(ry)}}{0} = r^{n-d} \det \mattwo{\Theta}{y}{\transpose{y}}{0} . \end{equation*} We also note that $ dz = r^{\frac{n (n + 1)}{2} - 1} dr d\Theta $. 
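The scaling step $ \det \left(\begin{smallmatrix} r \Theta & y \\ \transpose{y} & 0 \end{smallmatrix}\right) = r^{n - d} \det \left(\begin{smallmatrix} \Theta & y \\ \transpose{y} & 0 \end{smallmatrix}\right) $ can be checked numerically; the sizes and data below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(5)
n, d, r = 4, 2, 1.7
a = rng.standard_normal((n, n))
theta = a + a.T
theta /= np.sqrt(np.trace(theta @ theta))   # normalize so that tr(Theta^2) = 1
y = rng.standard_normal((n, d))

def bordered_det(z, y):
    """det of the bordered matrix [[z, y], [y^T, 0]]."""
    d = y.shape[1]
    return np.linalg.det(np.block([[z, y], [y.T, np.zeros((d, d))]]))

# Rescaling the last d rows and columns by r, then factoring r out of the
# whole (n + d) x (n + d) matrix, gives the exponent n - d.
assert np.isclose(bordered_det(r * theta, y), r ** (n - d) * bordered_det(theta, y))
```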
Thus we get \begin{align*} & \int_{\Omega(p,q)} |\det z|^{\nu + \mu - (n + 1)} \bigl|\det \mattwo{z}{y}{\transpose{y}}{0} \bigr|^{- \mu} (\det(1 + z^2))^{-(\nu - \frac{n + 1}{2})} dz \\ &= \int_{\Omega^{\Theta}(p,q)} |\det \Theta|^{\nu + \mu - (n + 1)} \bigl|\det \mattwo{\Theta}{y}{\transpose{y}}{0} \bigr|^{- \mu} d\Theta \\ &\qquad\qquad \times \int_0^{\infty} r^{n(\nu + \mu - (n + 1)) + (n - d)(- \mu)} (\det(1 + r^2 \Theta^2))^{-(\nu - \frac{n + 1}{2})} r^{\frac{n (n + 1)}{2} - 1} dr \\ &= \int_{\Omega^{\Theta}(p,q)} |\det \Theta|^{\nu + \mu - (n + 1)} \bigl|\det \mattwo{\Theta}{y}{\transpose{y}}{0} \bigr|^{- \mu} d\Theta \\ &\qquad\qquad \times \int_0^{\infty} r^{n \nu + d \mu - \frac{n (n + 1)}{2} - 1} (\det(1 + r^2 \Theta^2))^{-(\nu - \frac{n + 1}{2})} dr . \end{align*} By the assumption \eqref{thm:P-ineq-conv-spherical}, the integrand of the first integral, over $ \Omega^{\Theta}(p,q) $, is continuous, so that integral converges. For the second, we treat the regions $ r \downarrow 0 $ and $ r \uparrow \infty $ separately. If $ r $ is near zero, the factor $ \det(1 + r^2 \Theta^2) $ is approximately $ 1 $, so the integral converges if $ \int_0^1 r^{n \nu + d \mu - \frac{n (n + 1)}{2} - 1} dr $ converges. The first inequality in \eqref{thm:P-ineq-conv-radial} guarantees this convergence. On the other hand, if $ r $ is large, the factor $ \det(1 + r^2 \Theta^2) $ grows like $ r^{2 n} $, so the integral converges if $ \int_1^{\infty} r^{n \nu + d \mu - \frac{n (n + 1)}{2} - 1 - 2 n (\nu - \frac{n + 1}{2})} dr $ converges. We use the second inequality in \eqref{thm:P-ineq-conv-radial} to conclude the convergence. Thus the integral \eqref{eq:square-norm-of-K-for-P} does converge, and its square root gives a bound for the operator norm of $ \IPtoQ $. This finishes the proof of Theorem~\ref{thm:conv-integral-operator-P}. 
\subsection{Kernel operator $ \IQtoP $ from $ \pi_{\mu}^L $ to $ \pi_{\nu}^G $} Similarly, we define $ \IQtoP F(z) $, for the moment, for $ F \in \CinfInd_Q^L \chiQ $ by \begin{equation}\label{eq:integral-kernel-operator-IQtoP} \IQtoP F(z) = \int_{\Mat_{n,d}(\R)} F(y) K^{\alpha, \beta}(z, y) dy \qquad (z \in \Sym_n(\R)), \end{equation} where $ dy $ denotes the Lebesgue measure on $ \Mat_{n,d}(\R) $. We will update the definition of $ \IQtoP $ afterwards in \eqref{eq:true-def-Q}, although we will check the $ L $-equivariance using this expression. The integral \eqref{eq:integral-kernel-operator-IQtoP} may \emph{diverge}, but at least we can calculate formally: \begin{align*} (\IQtoP \pi_{\mu}^L(a) F)(z) &= \int_{\Mat_{n,d}(\R)} F(a^{-1} y) K^{\alpha, \beta}(z, y) dy \\ &= \int_{\Mat_{n,d}(\R)} F(y) K^{\alpha, \beta}(z, a y) \frac{d a y}{d y} dy \\ &= \int_{\Mat_{n,d}(\R)} F(y) |\det a|^{2 \alpha} K^{\alpha, \beta}(a^{-1} z \transpose{a}^{-1}, y) |\det a|^d dy \\ &= |\det a|^{2 \alpha + d} \chiPS(h) \pi_{\nu}^G(h) \IQtoP F(z) . \end{align*} Thus, if $ \chiPS(h)^{-1} = |\det a|^{2 \alpha + d} $, we get an intertwiner. Here, we also need compatibility with the right $ \GL_d(\R) $-action, i.e., $ F(y k) = |\det k|^{- \mu} F(y) \;\; (k \in \GL_d(\R)) $, and we get \begin{equation*} F(y k) K^{\alpha, \beta}(z, yk) d(yk) = |\det k|^{- \mu + 2 \beta + n} F(y) K^{\alpha, \beta}(z, y) dy. \end{equation*} From this we see that, if $ \mu = 2 \beta + n $, the measure $ F(y) K^{\alpha, \beta}(z, y) dy $ is well defined on $ \regMat_{n, d}(\R)/\GL_d(\R) \simeq \OO(n)/(\OO(d) \times \OO(n - d)) $. This last space is compact. Instead of this full quotient, we use the Stiefel manifold $ \Stiefelnd $ introduced in \S~\ref{subsection:deg-principal-series-LQ} inside $ \Mat_{n, d}(\R) $. 
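The homogeneity used above, $ K^{\alpha, \beta}(z, yk) = |\det k|^{2\beta} K^{\alpha, \beta}(z, y) $ for $ k \in \GL_d(\R) $, is easy to confirm numerically; together with the Jacobian $ d(yk) = |\det k|^{n} dy $ it accounts for the exponent $ -\mu + 2\beta + n $. Parameter values and data below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
n, d = 5, 2
alpha, beta = -1.3, 0.7      # illustrative real parameters

def kernel(z, y):
    """K^{alpha,beta}(z, y) = |det z|^alpha * |det(y^T z^{-1} y)|^beta."""
    return (abs(np.linalg.det(z)) ** alpha
            * abs(np.linalg.det(y.T @ np.linalg.solve(z, y))) ** beta)

a = rng.standard_normal((n, n))
z = a + a.T + 6 * np.eye(n)          # well-conditioned symmetric z
y = rng.standard_normal((n, d))
k = rng.standard_normal((d, d)) + 2 * np.eye(d)   # generic element of GL_d(R)

lhs = kernel(z, y @ k)
rhs = abs(np.linalg.det(k)) ** (2 * beta) * kernel(z, y)
assert np.isclose(lhs, rhs, rtol=1e-8)
```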
Thus, for $ \alpha = - (\nu + d)/2 $ and $ \beta = (\mu - n)/2 $, we redefine the intertwiner $ \IQtoP $ by \begin{equation}\label{eq:true-def-Q} \IQtoP F(z) = \int_{\Stiefelnd} F(y) K^{\alpha, \beta}(z, y) d\sigma(y) \qquad (z \in \Sym_n(\R)), \end{equation} where $ d\sigma(y) $ denotes the $ \OO(n) $-invariant measure on $ \Stiefelnd $. \begin{theorem}\label{theorem:intertwiner-Q-to-P} Let $ \nu_0 := \Re \nu, \; \mu_0 := \Re \mu $ and assume that they satisfy inequalities \begin{equation}\label{thm-eq:Q-ineq-conv-radial} n \nu_0 + d \mu_0 < \dfrac{n (n + 1)}{2} , \qquad n \nu_0 - d \mu_0 < \dfrac{n (n + 1)}{2} , \end{equation} and \begin{equation}\label{thm-eq:Q-ineq-conv-spherical} \nu_0 + \mu_0 \leq n - d, \qquad \mu_0 \geq n. \end{equation} If $ \alpha = - (\nu + d)/2 $ and $ \beta = (\mu - n)/2 $, the integral kernel operator $ \IQtoP $ defined in \eqref{eq:true-def-Q} converges and gives an $ L $-intertwiner $ \IQtoP : \HilbLmu \to \HilbGnu $. \end{theorem} Two remarks are in order. First, the inequalities \eqref{thm-eq:Q-ineq-conv-radial} and \eqref{thm-eq:Q-ineq-conv-spherical} are ``opposite'' to the inequalities in Theorem~\ref{thm:conv-integral-operator-P}. So there is no common region of $ (\nu, \mu) $ in which both operators converge. Second, the condition \eqref{thm-eq:Q-ineq-conv-spherical} in fact \emph{implies} \eqref{thm-eq:Q-ineq-conv-radial}: indeed, since $ \mu_0 \geq n $ and $ n - d \geq 0 $, we get \begin{equation*} n \nu_0 \pm d \mu_0 = n (\nu_0 + \mu_0) - (n \mp d) \mu_0 \leq n (n - d) - (n \mp d) n \leq 0 < \frac{n (n + 1)}{2} . \end{equation*} However, we suspect that the inequality \eqref{thm-eq:Q-ineq-conv-spherical} is stronger than necessary for the convergence. So we leave them as they are. \medskip Now let us prove the theorem. For brevity, we denote $ \nu_0, \mu_0 $ by $ \nu, \mu $ in the following. Since $ \alpha - \beta = - (\nu + d)/2 - (\mu - n)/2 = \frac{1}{2} (n - d - (\nu + \mu)) \geq 0 $ and $ \beta = (\mu - n)/2 \geq 0 $, the kernel function $ K^{\alpha,\beta}(z, y) $ is continuous. Since $ \Stiefelnd $ is compact, the integral \eqref{eq:true-def-Q} converges. Let us check $ \IQtoP F(z) \in \HilbGnu $ for $ F \in \HilbLmu $. 
By the Cauchy--Schwarz inequality, we get \begin{align*} |\IQtoP F(z)|^2 \leq \int_{\Stiefelnd} |F(y)|^2 d\sigma(y) \int_{\Stiefelnd} |K^{\alpha, \beta}(z, y)|^2 d\sigma(y) = \normLmu{F}^2 \int_{\Stiefelnd} |K^{\alpha, \beta}(z, y)|^2 d\sigma(y) . \end{align*} Thus \begin{align*} \normGnu{\IQtoP F}^2 &= \int_{\Sym_n(\R)} |\IQtoP F(z)|^2 (\det(1 + z^2))^{\nu - \frac{n + 1}{2}} dz \\ &\leq \normLmu{F}^2 \int_{\Stiefelnd} \int_{\Sym_n(\R)} |K^{\alpha, \beta}(z, y)|^2 (\det(1 + z^2))^{\nu - \frac{n + 1}{2}} dz d\sigma(y) . \end{align*} Since $ \alpha = - (\nu + d)/2 $ and $ \beta = (\mu - n)/2 $, the integral of the square of the kernel is \begin{align} \int_{\Sym_n(\R)} & |K^{\alpha, \beta}(z, y)|^2 (\det(1 + z^2))^{\nu - \frac{n + 1}{2}} dz \notag \\ &= \int_{\Sym_n(\R)} |\det z|^{-(\nu + \mu) + n - d} \bigl|\det \mattwo{z}{y}{\transpose{y}}{0} \bigr|^{\mu -n} (\det(1 + z^2))^{\nu - \frac{n + 1}{2}} dz . \label{eq:square-norm-of-K-for-Q} \end{align} As in the proof of Theorem~\ref{thm:conv-integral-operator-P}, we use polar coordinates $ z = r \Theta $. Namely, we put $ r := \sqrt{\trace z^2} $ and write $ z = r \Theta $. If we put $ \Omega^{\Theta} = \{ \Theta \in \Sym_n(\R) \mid \trace (\Theta^2) = 1 \} $, it is compact and $ dz = r^{\frac{n (n + 1)}{2} - 1} dr d\Theta $. 
Thus we get \begin{align*} & \int_{\Sym_n(\R)} |\det z|^{-(\nu + \mu) + n - d} \bigl|\det \mattwo{z}{y}{\transpose{y}}{0} \bigr|^{\mu -n} (\det(1 + z^2))^{\nu - \frac{n + 1}{2}} dz \\ &= \int_{\Omega^{\Theta}} |\det \Theta|^{-(\nu + \mu) + n - d} \bigl|\det \mattwo{\Theta}{y}{\transpose{y}}{0} \bigr|^{\mu - n} d\Theta \\ &\qquad\qquad \times \int_0^{\infty} r^{n (-(\nu + \mu) + n -d) + (n - d)(\mu - n)} (\det(1 + r^2 \Theta^2))^{\nu - \frac{n + 1}{2}} r^{\frac{n (n + 1)}{2} - 1} dr \\ &= \int_{\Omega^{\Theta}} |\det \Theta|^{-(\nu + \mu) + n - d} \bigl|\det \mattwo{\Theta}{y}{\transpose{y}}{0} \bigr|^{\mu - n} d\Theta \\ &\qquad\qquad \times \int_0^{\infty} r^{- (n \nu + d \mu) + \frac{n (n + 1)}{2} - 1} (\det(1 + r^2 \Theta^2))^{\nu - \frac{n + 1}{2}} dr . \end{align*} The integrand of the first integral over $ \Omega^{\Theta} $ is continuous, hence the integral converges. For the second, we separate it according as $ r \downarrow 0 $ or $ r \uparrow \infty $ as in the proof of Theorem~\ref{thm:conv-integral-operator-P}. When $ r $ is near zero, the integral converges if $ \int_0^1 r^{- (n \nu + d \mu) + \frac{n (n + 1)}{2} - 1} dr $ converges. The convergence follows from the first inequality in \eqref{thm-eq:Q-ineq-conv-radial}. When $ r $ is large, the integral converges if $ \int_1^{\infty} r^{- (n \nu + d \mu) + \frac{n (n + 1)}{2} - 1 + 2 n (\nu - \frac{n + 1}{2})} dr $ converges. We use the second inequality in \eqref{thm-eq:Q-ineq-conv-radial} for the convergence. This completes the proof of Theorem~\ref{theorem:intertwiner-Q-to-P}. \subsection{Finite dimensional representations} If $ \alpha, \beta \in \Z $, we can naturally consider an algebraic kernel function \begin{equation*} K^{\alpha, \beta}(z, y) = \det(z)^{\alpha} \det (\transpose{y} z^{-1} y)^{\beta} \qquad ((z, y) \in \Sym_n(\R) \times \Mat_{n,d}(\R)) \end{equation*} without taking absolute value. By abuse of notation, we use the same symbol as before. 
Similarly we also consider algebraic characters \begin{align*} \chiPS(p) = \det(a)^{\nu} \qquad ( p = \mattwo{a}{\ast}{0}{\transpose{a}^{-1}} \in P_S) \intertext{ and } \chiQ(q) = \det (k)^{\mu} \qquad ( q = \mattwo{k}{\ast}{0}{k'} \in Q ) \end{align*} if $ \mu $ and $ \nu $ are integers. In this setting the results in the above subsections are also valid. We make use of Corollary~\ref{cor:finite-dim-rep-generated-by-kernel} to deduce facts about the images and kernels of the integral kernel operators considered above. \begin{theorem}\label{theorem:fin-dim-subrep} For nonnegative integers $ m_1 $ and $ m_2 $, we put $ \alpha = m_1 + m_2, \beta = m_2 $ and define $ K^{\alpha, \beta}(z, y) $ as above. \begin{thmenumerate} \item\label{theorem:fin-dim-subrep:item:Q} Put $ \nu = -2 (m_1 + m_2) - d $ and $ \mu = 2 m_2 + n $, and define the characters $ \chiPS $ and $ \chiQ $ as above. Then $ \Ind_Q^L \chiQ $ contains the finite dimensional representation $ V^{(n)}(2 m_1 \varpi_n + 2 m_2 \varpi_{n - d})^{\ast} $ as an irreducible quotient. On the other hand, the representation $ \Ind_{P_S}^G \chiPS $ contains the same finite dimensional representation of $ L $ as a subrepresentation, and $ \IQtoP $ intertwines these two representations. This subrepresentation is the same for any $ p $ and $ q $. \item\label{theorem:fin-dim-subrep:item:P} Assume $ 2 m_1 \geq n + 1 $ and put $ \nu = 2 (m_1 + m_2) $ and $ \mu = - 2 m_2 $. Define the characters $ \chiPS $ and $ \chiQ $ as above. Then $ \Ind_{P_S}^G \chiPS $ contains the finite dimensional representation $ V^{(n)}(2 m_2 \varpi_d)^{\ast} $ of $ L = \GL_n(\R) $ as an irreducible quotient. On the other hand $ \Ind_Q^L \chiQ $ contains the same finite dimensional representation as a subrepresentation, and $ \IPtoQ $ intertwines these two representations. 
The intertwiners depend on $ p $ and $ q $, so there are at least $ (n + 1) $ different irreducible quotients which are isomorphic to $ V^{(n)}(2 m_2 \varpi_d)^{\ast} $, while the image in $ \Ind_Q^L \chiQ $ is the same. \end{thmenumerate} \end{theorem} \begin{proof} This follows immediately from Corollary~\ref{cor:finite-dim-rep-generated-by-kernel} and Theorems~\ref{thm:conv-integral-operator-P} and \ref{theorem:intertwiner-Q-to-P}. Note that $ 2 m_1 \geq n + 1 $ is required for the convergence of the integral operator. \end{proof} \newcommand{\FKmbf}{\boldsymbol{a}} \newcommand{\FKm}{\alpha} The above result illustrates how knowledge about the geometry of a double flag variety and associated relative invariants may give information about the structure of parabolically induced representations, and in particular about some branching laws. Let us explain that the branching laws in the above theorem are consistent with other approaches to the structure of $ \Ind_{P_S}^G \chiPS$ in Theorem~\ref{theorem:fin-dim-subrep}~\eqref{theorem:fin-dim-subrep:item:Q}. Let us recall in the following the connection between this induced representation, living on the Shilov boundary $S$ of the Hermitian symmetric space $G/K$, and the structure of holomorphic line bundles on this symmetric space. Let $ \lie{g} = \lie{k} + \lie{p} $ be a Cartan decomposition, and $ \lie{p}_{\C} = \lie{p}^+ \oplus \lie{p}^- $ be a decomposition into irreducible representations of $ K $. 
For holomorphic polynomials on the symmetric space we have the Schmid decomposition (see \cite{Faraut.Koranyi.1994}, XI.2.4) of the space of polynomials $$\mathcal P(\lie{p}^+) = \oplus_{\FKmbf} \mathcal P_{\FKmbf}(\lie{p}^+)$$ and the sum is over multi-indices ${\FKmbf}$ of integers with $\FKm_1 \geq \FKm_2 \geq \dots \geq \FKm_n \geq 0$, labeling (strictly speaking, after choosing an order so that these are the negatives of) $K$-highest weights $\FKm_1 \gamma_1 + \dots + \FKm_n \gamma_n$ with $\gamma_1, \dots, \gamma_n$ Harish-Chandra strongly orthogonal non-compact roots. Now by restricting polynomials to the Shilov boundary $S$ we obtain an imbedding of the Harish-Chandra module corresponding to holomorphic sections of the line bundle with parameter $\nu$ into the parabolically induced representation on $S$ with the same parameter. For concreteness, recall: For $ f \in \CinfInd_{P_S}^G \chiPS $, the action of $ \pi_{\nu}^G(g) $ on $ f $ is given by \begin{equation*} \pi_{\nu}^G(g) f(z) = |\det(a + z c)|^{- \nu} f(\lfa{g^{-1}}{z}) \qquad (g = \mattwo{a}{b}{c}{d} \in G, \;\; z \in \Sym_n(\R)). \end{equation*} When $\nu$ is an even integer, this is exactly the action in the (trivialized) holomorphic bundle, now valid for holomorphic functions of $z \in \Sym_n(\C)$. So if we can find parameters with a finite-dimensional invariant subspace in this Harish-Chandra module, then the same module will be an invariant subspace in $ \Ind_{P_S}^G \chiPS$. Recall that the maximal compact subgroup $K$ of $G$ has a complexification isomorphic to that of $L$, and $G/K$ is a Hermitian symmetric space of tube type. Indeed, inside the complexified group $G_{\mathbb C}$ the two complexifications are conjugate. Hence if we consider a finite-dimensional representation of $G$ (or $G_{\mathbb C}$), then the branching laws for these two subgroups will be isomorphic. 
For Hermitian symmetric spaces of tube type in general also recall the reproducing kernel (as in \cite{Faraut.Koranyi.1994}, especially Theorem~XIII.2.4 and the notation there) for holomorphic sections of line bundles on $G/K$, \begin{equation}\label{eq:reproducing-kernel} h(z,w)^{-\nu} = \sum_{\FKmbf} (\nu)_{\FKmbf} K^{\FKmbf}(z,w) \end{equation} and the sum is again over multi-indices ${\FKmbf}$ of integers with $\FKm_1 \geq \FKm_2 \geq \dots \geq \FKm_n \geq 0$. Here the functions $K^{\FKmbf}(z,w)$ are (suitably normalized) reproducing kernels of the $K$-representations $\mathcal P_{\FKmbf}(\lie{p}^+)$. In our case, the Pochhammer symbol is given in terms of the scalar symbol by \begin{align*} (\nu)_{\FKmbf} &= (\nu)_{\FKm_1} (\nu - 1/2)_{\FKm_2} \cdots (\nu - (n-1)/2)_{\FKm_n} = \prod_{i = 1}^n (\nu - (i - 1)/2)_{\FKm_i}, \quad \text{ and } \quad \\ (x)_k &= x (x + 1) \cdots (x + k - 1) = \dfrac{\Gamma(x + k)}{\Gamma(x)}. \end{align*} Recall that for positive-definiteness of the above kernel, $\nu$ must belong to the so-called Wallach set; this means that the corresponding Harish-Chandra module is unitary and corresponds to a unitary reproducing-kernel representation of $G$ (or a double covering of $G$). Here the Wallach set is $${\mathcal W} = \{0, \frac{1}{2}, \dots, \frac{n-1}{2} \} \cup (\frac{n-1}{2}, \infty )$$ as in \cite{Faraut.Koranyi.1994}, XIII.2.7. On the other hand, if $\nu$ is a negative integer, the Pochhammer symbols $(\nu)_{\FKmbf}$ vanish when $ \FKm_1 > -\nu $. So this gives a finite sum in the formula~\eqref{eq:reproducing-kernel} for the reproducing kernel corresponding to a finite-dimensional representation of $G$, and $\FKmbf$ labels the $K$-types occurring here as precisely those with $-\nu \geq \FKm_1$. By taking boundary values we obtain an imbedding of the $K$-finite holomorphic sections on $G/K$ to sections of the line bundle on $G/P_S$. 
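The vanishing mechanism behind the finite sum is just the elementary identity $ (x)_k = x (x + 1) \cdots (x + k - 1) = \Gamma(x + k)/\Gamma(x) $: for a negative integer $ \nu $, the factor $ (\nu)_{\FKm_1} $ hits the value $ 0 $ as soon as $ \FKm_1 > -\nu $. A quick Python check of both facts (an aside, not needed for the argument):

```python
import math

def poch(x, k):
    # Pochhammer symbol (x)_k = x (x+1) ... (x+k-1)
    out = 1.0
    for j in range(k):
        out *= x + j
    return out

# agrees with Gamma(x + k)/Gamma(x) when x > 0
x, k = 1.5, 4
gamma_ratio = math.gamma(x + k) / math.gamma(x)

# for a negative integer nu, (nu)_k vanishes as soon as k > -nu:
# (-3)_k for k = 0..5 gives 1, -3, 6, -6, 0, 0
vals = [poch(-3, k) for k in range(6)]
```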
Recalling that for our $G$ the Harish-Chandra strongly orthogonal non-compact roots are $2e_j$ in terms of the usual basis $e_j$, this means that the $L$-types in Theorem~\ref{theorem:fin-dim-subrep}~\eqref{theorem:fin-dim-subrep:item:Q} indeed occur. Namely, we may identify the parameters by the equation \begin{equation*} 2 m_1 \varpi_n + 2 m_2 \varpi_{n - d} = 2(m_1 + m_2, \dots , m_1 + m_2, m_1, \dots , m_1) \end{equation*} with the right-hand side of the form of a multi-index $\FKmbf$ satisfying $- \nu = 2 (m_1 + m_2) + d \geq \FKm_1$ as required above. Thus we have seen that there is consistency with the results about branching laws from $G$ to $K$ coming from considering finite-dimensional continuations of holomorphic discrete series representations, and on the other hand those branching laws from $G$ to $L$ coming from our study of relative invariants and intertwining operators from $ \Ind_{P_S}^G \chiPS $ to $ \Ind_Q^L \chiQ $.
\section{Introduction} \label{sec:introduction} Transmitting and processing information in quantum devices has become well established in recent years \cite{bruss2019quantum}. Laboratories around the world are in a race to develop increasingly accurate quantum devices. To carry this out successfully, it is necessary to test their operation on classical devices. For this reason, it is important to have efficient classical algorithms to perform quantum simulation \cite{PhysRevLett.88.097904, terhal2002classical}. Several approaches for the efficient computation of quantum time-evolution have been proposed in the literature \cite{kluk1986comparison,SCHOLLWOCK201196,PhysRevA.78.012321,daley2004time,PhysRevLett.93.040502}. The cost of the simulation usually depends on specifics of the system, e.g., the initial state, or on the information that we want to know about the dynamics. For example, the cost of the simulation can be greatly reduced if the amount of entanglement developed by the system remains bounded \cite{daley2004time,PhysRevLett.93.040502}. Less restrictive are the well-known Krylov-subspace methods, constructed to provide approximations to the action of the exponential of a matrix on a vector. In the context of quantum simulation, the mechanics of the approximation is the following: an initial state in a (possibly very) large Hilbert space is first mapped to an effective subspace, the Krylov subspace, that captures the most relevant features of the dynamics. Within this low-dimensional subspace, time evolution is (cheaply) computed. Finally, the evolved state is mapped back to the large Hilbert space. Besides quantum simulation, the method has other important applications like solving systems of ordinary differential equations, large-scale linear systems and more \cite{saad2003iterative,gazzola2020krylov}. The core challenge in Krylov-subspace methods is to keep the error under control, in order to achieve precise evolution. 
For this reason, it is desirable to be able to predict the time regime in which the error will remain less than a given predetermined tolerance. This problem has been approached in several ways in the literature \cite{park1986unitary,saad1992analysis,stewart1996error,hochbruck1997krylov,expokit,moler2003nineteen,Jawecki:2020cc}, and the provided bounds generally overestimate the error (significantly). In the seminal paper \cite{park1986unitary}, Park and Light use the fact that the dynamics in the reduced subspace is that of an effective 1d lattice with a tridiagonal Hamiltonian. An initial state localized at one end starts spreading, and the error in the approximation is estimated by the population at the other end of the chain. Later, Saad \cite{saad1992analysis} derived computable estimates of the error using an expansion in the Krylov subspace exploiting the Lanczos algorithm. Other error bounds involve computations so intricate that they are difficult to use in an operational way \cite{hochbruck1997krylov}. The goal of this paper is to find tight and computationally inexpensive error bounds for the approximation error in Krylov schemes. We take advantage of a simple observation: the error can be regarded as a Loschmidt echo in which both the forward and backward evolutions are given by tight-binding Hamiltonians. In the forward case, the effective dimension is fixed by the Krylov subspace, while the backward evolution occurs in a space whose dimension is that of the Hilbert space of the system. This analogy allows us to describe the time-regimes of the error using well-known results in Loschmidt echo theory. In particular, we show that the error remains negligible up to some time, related to the effective traveling wave packet on the trimmed chain (of size given by the dimension of the Krylov subspace \cite{park1986unitary}) and its tail ``colliding'' with the last site. 
The core of our proposal is that the error in this regime can be captured remarkably well by a backward evolution in a chain with only one additional site, instead of the actual chain with as many sites as the dimension of the Hilbert space. Moreover, we show that one can analytically solve the error for the case in which the tight-binding Hamiltonian has homogeneous diagonal and off-diagonal elements. It is shown that this solution works very well in a 1-D Ising spin chain with a transverse magnetic field. Finally, we give some physical insight explaining why this simple model works in the general case. The paper is organized as follows. In Sec.~\ref{sec:Krylov}, we introduce the Krylov-subspace method for quantum time evolution. Next, in Sec.~\ref{sec:regimes} we describe the different time-regimes of the error, focusing on the analogy with Loschmidt echo dynamics under tight-binding Hamiltonians. In Sec.~\ref{sec:numres} we use the connection between the error and the Loschmidt echo of tight-binding Hamiltonians to propose an error bound that describes extremely well the inaccuracy of the approximate evolution in the Krylov subspace. Finally, Sec.~\ref{sec:conclu} concludes with some final remarks. Appendix \ref{sec:ap_lanczos} provides a brief description of the Lanczos algorithm to obtain an orthonormal basis spanning the Krylov subspace, and in Appendix \ref{sec:ap_ising} we describe the system that we use for the calculations, a 1-D Ising spin chain with a transverse magnetic field. In Appendix \ref{sec:analytical}, the error is analytically solved for the simplest case in which the tight-binding Hamiltonian has equal diagonal and off-diagonal elements.
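For readers who want to experiment, the following self-contained Python sketch implements the Krylov-subspace scheme described above (Lanczos reduction to a tridiagonal $T$, evolution inside the subspace, mapping back) for a random Hermitian $H$ normalized so that $\|H\| = 1$, and compares against exact diagonalization. It is a minimal illustration under these arbitrary choices of $n$, $m$ and $t$, not the code used for the results of this paper.

```python
import numpy as np

def lanczos(H, v, m):
    """m-step Lanczos with full reorthogonalization.
    Returns an orthonormal basis V (n x m) and tridiagonal T (m x m)."""
    n = len(v)
    V = np.zeros((n, m), dtype=complex)
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = H @ V[:, j]
        alpha[j] = np.vdot(V[:, j], w).real
        # project out all previous basis vectors (numerical stability)
        w = w - V[:, :j + 1] @ (V[:, :j + 1].conj().T @ w)
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    return V, np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

rng = np.random.default_rng(7)
n, m, t = 200, 25, 1.0
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = (A + A.conj().T) / 2
H /= np.linalg.norm(H, 2)          # normalize spectrum to [-1, 1]

v = rng.standard_normal(n).astype(complex)
v /= np.linalg.norm(v)

# exact evolution exp(-i H t) v via full diagonalization
E, U = np.linalg.eigh(H)
exact = U @ (np.exp(-1j * E * t) * (U.conj().T @ v))

# Krylov approximation: V exp(-i T t) e_1
V, T = lanczos(H, v, m)
w, S = np.linalg.eigh(T)
approx = V @ (S @ (np.exp(-1j * w * t) * S[0, :]))
err = np.linalg.norm(exact - approx)
```

With $\|H\| t = 1$ and $m = 25$ Krylov vectors, the approximation is accurate to near machine precision; shrinking $m$ or growing $t$ moves the error into the regimes discussed in the text.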
\begin{document} \title{On spin distributions for generic $p$-spin models} \author{Antonio Auffinger} \address[Antonio Auffinger]{Department of Mathematics, Northwestern University} \email{tuca@northwestern.edu} \author{Aukosh Jagannath} \address[Aukosh Jagannath]{Department of Mathematics, Harvard University} \email{aukosh@math.harvard.edu} \date{\today} \begin{abstract} We provide an alternative formula for spin distributions of generic $p$-spin glass models. As a main application of this expression, we write spin statistics as solutions of partial differential equations and we show that the generic $p$-spin models satisfy multiscale Thouless--Anderson--Palmer equations as originally predicted in the work of M\'ezard--Virasoro \cite{MV85}. \end{abstract} \maketitle \section{Introduction} Let $H_N$ be the Hamiltonian for the mixed $p$-spin model on the discrete hypercube $\{+1,-1\}^N$, \begin{equation}\label{eq:ham-def} H_N(\sigma) = \sum_{p\geq 2} \frac{\beta_p}{N^{(p-1)/2}} \sum_{1\leq i_1,\ldots,i_p \leq N} g_{i_1,\ldots,i_p} \sigma_{i_1}\cdots\sigma_{i_p}, \end{equation} where $\{g_{i_1,\ldots,i_p}\}$ are i.i.d. standard Gaussian random variables. Observe that if we let \[\xi(x)=\sum_{p\in \mathbb{N}}\beta_p^{2}x^p,\] then the covariance of $H_N$ satisfies \begin{align*} \mathbb E H_N(\sigma^1)H_{N}(\sigma^2)=N\xi(R_{1,2}), \end{align*} where $R_{\ell,\ell'}:=\frac{1}{N}\sum_{i=1}^N\sigma_i^\ell\sigma_i^{\ell'}$ is the normalized inner-product between $\sigma^{\ell}$ and $\sigma^{\ell'}$, $\ell, \ell' \geq 1$. We let $G_{N}$ be the Gibbs measure associated to $H_{N}$. In this note, we will be concerned with generic $p$-spin models, that is, those models for which the linear span of the set $\{1\}\cup\{x^p:p\geq2,\, \beta_p\neq0\}$ is dense in $\left(C([-1,1]),\norm{\cdot}_\infty\right)$. Generic $p$-spin models are central objects in the study of mean field spin glasses. They satisfy the Ghirlanda--Guerra identities \cite{PanchGhir10}. 
As a consequence, if we let $(\sigma^\ell)_{\ell \geq 1}$ be i.i.d. draws from $G_N$, and consider the array of overlaps $(R_{\ell\ell'})_{\ell,\ell'\geq 1}$, then it is known \cite{PanchUlt13} that this array satisfies the ultrametric structure proposed in the physics literature \cite{Mez84}. Moreover, it can be shown (see, e.g., \cite{PanchSKBook}) that the limiting law of $R_{12}$ is given by the Parisi measure, $\zeta$, the unique minimizer of the Parisi formula \cite{AuffChenSC15, TalPF}. In \cite{PanchSGSD13}, a family of invariance principles, called the cavity equations, were introduced for mixed $p$-spin models. It was shown there that if the spin array \begin{equation}\label{eq:spin-array} (\sigma^\ell_i)_{1\leq i\leq N, 1 \leq \ell} \end{equation} satisfies the cavity equations, then it is uniquely characterized by its overlap distribution. It was also shown that mixed $p$-spin models satisfy these cavity equations modulo a regularizing perturbation that does not affect the free energy. In fact, it can be shown (see \prettyref{prop:cavity-equations} below) by a standard argument that generic models satisfy these equations without perturbations. Consequently, the spin distributions are characterized by $\zeta$ as well, by the results of \cite{PanchSGSD13}. Panchenko also showed that the Bolthausen--Sznitman invariance \cite{BoltSznit98} can be utilized to provide a formula for the distribution of spins \cite{PanchSGSD13,PanchSKBook}. The main goal of this note is to present an alternative expression for spin distributions of generic models in terms of a family of branching diffusions. This new way of describing the spin distributions provides expressions for moments of spin statistics as solutions of certain partial differential equations. We show a few examples and applications in \prettyref{sec:Examples}. 
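As a quick aside, the covariance identity $ \mathbb E H_N(\sigma^1)H_N(\sigma^2) = N\xi(R_{1,2}) $ from the introduction is easy to check numerically. The Python sketch below (our own illustration, with small arbitrary parameters and a pure $2$-spin interaction, so $\xi(x) = x^2$) compares a Monte Carlo estimate of the covariance against $N R_{1,2}^2$.

```python
import numpy as np

rng = np.random.default_rng(5)
N, K = 8, 50000                      # system size, Monte Carlo samples
s1 = rng.choice([-1.0, 1.0], size=N)
s2 = rng.choice([-1.0, 1.0], size=N)
R12 = s1 @ s2 / N                    # normalized overlap

# pure 2-spin Hamiltonian H(s) = N^{-1/2} sum_{i,j} g_ij s_i s_j,
# sampled K times with independent Gaussian couplings g
G = rng.standard_normal((K, N, N))
H1 = np.einsum('kij,i,j->k', G, s1, s1) / np.sqrt(N)
H2 = np.einsum('kij,i,j->k', G, s2, s2) / np.sqrt(N)

mc_cov = (H1 * H2).mean()            # empirical covariance (mean is 0)
predicted = N * R12**2               # N * xi(R12) with xi(x) = x^2
```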
One of our main applications is that these spin distributions satisfy a multi-scale generalization of the Thouless--Anderson--Palmer (TAP) equations similar to that suggested in \cite{MPV87} and \cite{MV85}. This complements the authors' previous work on the Thouless--Anderson--Palmer equations for generic $p$-spin models at finite particle number \cite{AuffJag16}. \subsection{Main results} In this paper we assume that the reader is familiar with the theory of spin distributions. For a textbook introduction, see \cite[Chapter 4]{PanchSKBook}. We include the relevant definitions and constructions in the Appendix for the reader's convenience. The starting point of our analysis is the following observation, which says that the generic $p$-spin models satisfy the cavity equations. These equations are stated in \eqref{eq:cavity-eq}. \begin{prop}\label{prop:cavity-equations} Let $\nu$ be a limit of the spin array \eqref{eq:spin-array} for a generic $p$-spin model. Then $\nu$ satisfies the cavity equations \eqref{eq:cavity-eq} for $r=0$. In particular, $\nu$ is unique. \end{prop} Let $q_{*}>0$ and $U$ be a positive, ultrametric subset of the sphere of radius $\sqrt{q_*}$ in $L^{2}([0,1])$ in the sense that for any $x,y, z\in U$, we have $ (x,y) \geq 0$ and $\| x-z \| \leq \max \{\| x-y \|, \| y-z \| \}$. Define the {driving process on} $U$ to be the Gaussian process, $B_{t}(\sigma)$, indexed by $(t,\sigma)\in[0,q_{*}]\times U$, which is centered, a.s. continuous in time and measurable in space, with covariance \begin{equation}\label{eq:Bromw} \text{Cov}_{B}((t_{1},\sigma^{1}),(t_{2},\sigma^{2}))=(t_{1}\wedge t_{2})\wedge(\sigma^{1},\sigma^{2}). \end{equation} Put concretely, for each fixed $\sigma,$ $B_{t}(\sigma)$ is a Brownian motion and for finitely many $(\sigma^{i})$, $(B_t(\sigma^i))$ is a family of branching Brownian motions whose branching times are given by the inner products between these $\sigma^{i}$. 
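The covariance structure \eqref{eq:Bromw} of the driving process is easy to verify by simulation. The following minimal Python sketch (our own illustration) takes two states $\sigma^1, \sigma^2$ with overlap $q = (\sigma^1,\sigma^2)$ and times $t_1, t_2 \geq q$, builds the two branching Brownian motions from a shared piece on $[0,q]$ and independent pieces afterwards, and checks that the empirical covariance is $(t_1 \wedge t_2) \wedge q$.

```python
import numpy as np

rng = np.random.default_rng(11)
M = 200000                        # Monte Carlo samples
t1, t2, q = 0.7, 0.9, 0.4         # q = (sigma^1, sigma^2), branching time

# shared Brownian increment up to time q, independent pieces afterwards
shared = np.sqrt(q) * rng.standard_normal(M)
B1 = shared + np.sqrt(t1 - q) * rng.standard_normal(M)
B2 = shared + np.sqrt(t2 - q) * rng.standard_normal(M)

emp_cov = (B1 * B2).mean()        # should approximate (t1 ^ t2) ^ q = 0.4
```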
We then define the {cavity field process on $U$} as the solution, $Y_{t}(\sigma)$, of the SDE \begin{equation} \begin{cases} dY_{t}(\sigma)=\sqrt{\xi''(t)}dB_{t}(\sigma)\\ Y_{0}(\sigma)=h. \end{cases}\label{eq:cav-field-proc} \end{equation} Let $\zeta$ be the Parisi measure for the generic $p$-spin model. Let $u$ be the unique weak solution to the Parisi initial value problem on $(0,1)\times\R$, \begin{equation} \begin{cases} u_{t}+\frac{\xi''(t)}{2}\left(u_{xx}+\zeta([0,t])u_{x}^{2}\right)=0, & \\ u(1,x)=\log\cosh(x). \end{cases}\label{eq:ParisiIVP} \end{equation} For the definition of weak solution in this setting and basic properties of $u$ see \cite{JagTobSC16}. We now define the local field process, $X_{t}(\sigma)$, to be the solution to the SDE \begin{equation} \begin{cases} dX_{t}(\sigma)=\xi''(t)\zeta([0,t])u_{x}(t,X_{t}(\sigma))dt+dY_{t}(\sigma)\\ X_{0}(\sigma)=h. \end{cases}\label{eq:local-field-process} \end{equation} Finally, let the magnetization process be $M_{t}(\sigma)=u_{x}(t,X_{t}(\sigma))$. We will show that the process $X_{q_*}(\sigma)$ is related to a re-arrangement of $Y_{q_*}(\sigma)$. If we view $\sigma$ as a state, then $M_{q_*}(\sigma)$ will be the magnetization of this state. The basic properties of these processes, e.g., existence, measurability, continuity, etc., are studied briefly in \prettyref{app:driving}. We invite the reader to compare their definitions to \cite[Eq. IV.51]{MPV87} and \cite[Eq. 0.20]{BoltSznit98} (see also \cite{ArgAiz09}). We remind the reader here that the support of the asymptotic Gibbs measure for a generic $p$-spin model is positive and ultrametric by Panchenko's ultrametricity theorem and Talagrand's positivity principle \cite{PanchSKBook}, provided we take $q_*=\sup\supp(\zeta)$. 
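To illustrate how the Parisi initial value problem \eqref{eq:ParisiIVP} can be handled numerically, here is a naive explicit finite-difference sketch of our own (not a scheme taken from the references). For testability we take $\xi(x) = x^2$ (so $\xi'' \equiv 2$) and the replica-symmetric choice $\zeta = \delta_0$, so $\zeta([0,t]) \equiv 1$; in that case the Cole--Hopf substitution $v = e^{u}$ linearizes the equation into a backward heat equation, giving the exact value $u(0,x) = \log\cosh(x) + \xi'(1)/2$ against which the solver is checked.

```python
import numpy as np

# solve u_t + (xi''/2)(u_xx + zeta([0,t]) u_x^2) = 0 backward from t = 1,
# terminal condition u(1, x) = log cosh(x), on a truncated grid
L, nx = 8.0, 401
x = np.linspace(-L, L, nx)
dx = x[1] - x[0]
xi2 = 2.0                       # xi''(t) for xi(x) = x^2
dt = 0.4 * dx**2 / xi2          # explicit-scheme stability margin
u = np.log(np.cosh(x))          # terminal condition

t = 1.0
while t > 0:
    step = min(dt, t)
    ux = np.gradient(u, dx)     # central differences, one-sided at ends
    uxx = np.zeros_like(u)
    uxx[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    u = u + step * (xi2 / 2) * (uxx + 1.0 * ux**2)   # zeta([0,t]) = 1
    t -= step

center = nx // 2                 # x = 0
exact = np.log(np.cosh(x[center])) + 1.0   # log cosh(0) + xi'(1)/2 = 1
```

For a general atomic $\zeta$ one would replace the constant factor $1.0$ by $\zeta([0,t])$ evaluated at the current time; boundary effects are negligible at the center since the grid is much wider than the diffusive scale.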
Now, for a fixed measurable function $f$ on $L^{2}([0,1])$, write the measure $\mu_{\sigma}^{f}$ on $\{-1,1\}\times\R$ as the measure with density $p(s,y;f)$ given by \[ p(s,y;f)\propto e^{sy}e^{-\frac{(y-f)^2}{2(\xi'(1)-\xi'(q_*))}}. \] Observe that by an application of Girsanov's theorem (see specifically \cite[Lemma 8.3.1]{JagTobPD15}), the measure above is equivalently described as the measure on $\{-1,1\}\times\R$ such that for any bounded measurable $\phi$, \begin{equation} \int\phi \; d\mu_{\sigma}^{f}:=\E\left(\frac{\sum_{s\in\{\pm1\}}\phi(s,X_{1})e^{X_{1}s}}{2\cosh(X_{1})}\Bigg\vert X_{q_{*}}(\sigma)=f(\sigma)\right).\label{eq:pure-state} \end{equation} \noindent For any bounded measurable $\phi$, we let $\left\langle \phi\right\rangle _{\sigma}^{f}$ denote its expected value with respect to $\mu_\sigma^f$. When it is unambiguous we omit the superscript for the boundary data. For multiple copies, $(s_{i},y_{i})_{i=1}^{\infty}$, drawn from the product $\mu_{\sigma}^{\tensor\infty}$, we also denote the average by $\left\langle \cdot\right\rangle _{\sigma}$. Let $\mu$ be a random measure on $L^{2}([0,1])$ such that the corresponding overlap array satisfies the Ghirlanda--Guerra identities (see \prettyref{app:cavity} for the definition of these identities). Consider the law of the random variables $(S,Y)$ defined through the relation: \begin{equation} \E\left\langle \phi(S,Y)\right\rangle =\E\int\left\langle \phi\right\rangle _{\sigma}^{X}d\mu(\sigma)\label{eq:loc-av-data-x} \end{equation} and the random variables $(S',Y')$ defined through the relation \[ \E\left\langle \phi(S',Y')\right\rangle =\E\int\left\langle \phi\right\rangle _{\sigma}^{Y}\frac{\cosh(Y_{q_{*}}(\sigma))}{\int\cosh(Y_{q_{*}}(\sigma))d\mu(\sigma)}d\mu(\sigma). \] Let $(S_{i},Y_{i})_{i\geq 1}$ be drawn from $\left(\mu_{\sigma}^{X}\right)^{\tensor\infty}$and $(S'_{i},Y_{i}')$ be drawn from $\left(\mu_{\sigma}^{Y}\right)^{\tensor\infty}$ where $\sigma$ is drawn from $\mu$. For i.i.d. 
draws $(\sigma^{\ell})_{\ell\geq1}$ from $\mu^{\tensor\infty}$, we define $(S_{i}^{\ell},Y_{i}^{\ell})$ and $(S_{i}^{\prime \ell},Y_{i}^{\prime \ell})$ analogously. The main result of this note is the following alternative representation for spins from cavity invariant measures. We let $\cM_{inv}^{\xi}$ denote the space of laws of exchangeable arrays with entries in $\{\pm 1\}$ that satisfy the cavity equations and the Ghirlanda-Guerra identities. \begin{thm} \label{thm:spins-equivalent} We have the following. \begin{enumerate} \item For any generic model $\xi$ and any asymptotic Gibbs measure $\mu$, let $(\sigma^{\ell})_{\ell\geq1}$ be i.i.d. draws from $\mu$, and let $(S_{i}^{\ell},Y_{i}^{\ell})$ and $(S_{i}^{\prime \ell},Y_{i}^{\prime \ell})$ be defined as above with $\sigma=\sigma^{\ell}$. Then these random variables are equal in distribution. \item For any measure $\nu$ in $\cM_{inv}^{\xi}$, let $(s_{i}^{\ell})$ denote the array of spins and $\mu$ denote its corresponding asymptotic Gibbs measure. Let $(S_{i}^{\ell})$ be defined as above with $\zeta=\E\mu^{\tensor 2}(\left(\sigma^{1},\sigma^{2}\right)\in\cdot)$. Then we have \[ (s_{i}^{\ell})\eqdist(S_{i}^{\ell}). \] \end{enumerate} \end{thm} \begin{rem} In \cite{PanchSGSD13,PanchSKBook}, Panchenko first obtained a description of the laws of $(s_{i}^{\ell})$ in a finite replica symmetry breaking regime (i.e., when $\zeta$ consists of finitely many atoms) using Ruelle probability cascades (see \eqref{eq:spins-RPC-formula}). By sending the number of levels of replica symmetry breaking to infinity, he obtained a formula that is valid for any generic $p$-spin model \cite[Theorem 4.2]{PanchSKBook}. This is a key step in our proof of Theorem \ref{thm:spins-equivalent}. At finite replica symmetry breaking, the connection to the process $X_t$ can already be seen in \cite[pp 249-250]{BoltSznit98} as a consequence of the Bolthausen-Sznitman invariance principle. \end{rem} Let us now briefly present an application of this result.
Let $\nu$ be the spin distribution for a generic model and let $\mu$ be the corresponding asymptotic Gibbs measure. Let $\sigma\in\supp(\mu)$ and fix $q\in[0,q_{*}]$, where $q_*=\sup\supp(\zeta)$. Let \[ B(\sigma,q)=\left\{ \sigma'\in\supp(\mu):(\sigma,\sigma')\geq q\right\} \] be the set of points in the support of $\mu$ whose overlap with $\sigma$ is at least $q$. Recall that by Panchenko's ultrametricity theorem \cite{PanchUlt13}, we may decompose \[ \supp \; \mu =\cup_{\alpha}B(\sigma^{\alpha},q) \] where this union is disjoint. If we call $W_{\alpha}=B(\sigma^{\alpha},q)$, we can then consider the law of $(s,y)$, the spin and the cavity field, but now conditionally on $W_{\alpha}$. That is, let $\left\langle \cdot\right\rangle _{\alpha}$ denote the average with respect to the conditional law $\mu(\cdot\vert W_{\alpha})$. We then have the following result. \begin{thm}\label{thm:multi-TAP} (M\'ezard\textendash Virasoro multiscale Thouless\textendash Anderson\textendash Palmer equations) We have that \[ \left\langle s\right\rangle _{\alpha}=u_{x}\bigg(q,\left\langle y\right\rangle _{\alpha}-\int_{q}^{1}\xi''(t)\zeta([0,t])dt\cdot\left\langle s\right\rangle _{\alpha}\bigg) \] where again $u_{x}$ is the first spatial derivative of the solution to the Parisi PDE corresponding to $\zeta$. \end{thm} Note that for $q=q_{*}$ we have $\int_{q_{*}}^{1}\xi''(t)\zeta([0,t])dt=\xi'(1)-\xi'(q_{*})$, so the above reduces to a single-scale TAP equation with Onsager-type correction term $(\xi'(1)-\xi'(q_{*}))\left\langle s\right\rangle _{\alpha}$. \subsection*{Acknowledgements} The authors would like to thank Louis-Pierre Arguin, G\'erard Ben Arous, Dmitry Panchenko, and Ian Tobasco for helpful discussions. This research was conducted while A.A. was supported by NSF DMS-1597864 and NSF Grant CAREER DMS-1653552 and A.J. was supported by NSF OISE-1604232. \section{Cavity equations for generic models}\label{app:cavity-generic} \subsection{Decomposition and regularity of mixed $p$-spin Hamiltonians}\label{app:decomposition} In this section, we present some basic properties of mixed $p$-spin Hamiltonians. Let $1 \leq n < N$.
For $\sigma = (\sigma_{1},\ldots, \sigma_{N}) \in \Sigma_{N}$, $\rho(\sigma) = (\sigma_{n+1}, \ldots, \sigma_{N}) \in \Sigma_{N-n}$, we can write the Hamiltonian $H_{N}$ as \begin{equation}\label{eq:ll} H_{N}(\sigma)=\tilde H_{N}(\sigma)+\sum_{i=1}^{n}\sigma_{i}y_{N,i}(\rho)+r_{N}(\sigma), \end{equation} where the processes $\tilde H_{N}, y_{N,i}$ and $r_{N}$ satisfy the following lemma. \begin{lem} \label{lem:decomposition-lemma} There exist centered Gaussian processes $\tilde H_{N},y_{N},r_{N}$ such that \eqref{eq:ll} holds and \begin{align*} \mathbb{E}\tilde H_{N}(\sigma^{1})\tilde H_{N}(\sigma^{2})= & N\xi\left (\frac{{N-1}}{N}R_{12}\right),\\ \mathbb{E}y_{N,i}(\sigma^{1})y_{N,j}(\sigma^{2})= &\delta_{ij}( \xi'(R_{12})+o_{N}(1)),\\ \mathbb{E}r_{N}(\sigma^{1})r_{N}(\sigma^{2})= & O(N^{-1}). \end{align*} Furthermore, there exist positive constants $C_{1}$ and $C_{2}$ so that with probability at least $1-e^{-C_{1}N},$ \[ \max_{\sigma\in\Sigma_{N-1}}|r_{N}(1,\sigma)-r_{N}(-1,\sigma)|\leq\frac{{C_{2}}}{\sqrt{N}}, \] and a positive constant $C_{3}$ so that \begin{equation}\label{eq:Delta} \mathbb E \exp\bigg(2 \max_{\sigma \in \Sigma_{N-1}} |r_{N}(1,\sigma) - r_{N}(-1,\sigma)|\bigg) \leq C_{3}. \end{equation} \end{lem} \begin{proof} The lemma is a standard computation on Gaussian processes. Let us focus on the case $n=1$. The general case is analogous. Furthermore, to simplify the exposition we will consider the pure $p$-spin model. The mixed case follows by linearity.
Here, we set \begin{align*} \tilde H_{N}(\rho(\sigma)) &= N^{-\frac{p-1}{2}} \sum_{2 \leq i_{1},\ldots, i_{p} \leq N} g_{i_{1}\ldots i_{p}} \sigma_{i_{1}}\ldots \sigma_{i_{p}},\\ y_{N}(\rho(\sigma)) &= N^{-\frac{p-1}{2}} \sum_{k=1}^{p}\sum_{\stackrel{2 \leq i_{1},\ldots, i_{p} \leq N}{i_{k}=1}} g_{i_{1}\ldots i_{p}}\sigma_{i_{1}}\ldots \sigma_{i_{p}}, \quad \text{and}\\ r_{N}(\sigma_{1}, \rho(\sigma)) &= N^{-\frac{p-1}{2}} \sum_{\ell=2}^{p} \sigma_{1}^{\ell} \sum_{2\leq i_{1}, \ldots, i_{p-\ell} \leq N} J_{i_{1}\ldots i_{p-\ell}} \sigma_{i_{1}} \ldots \sigma_{i_{p-\ell}}, \end{align*} where $g_{i_{1}\ldots i_{p}}$ are as above and $J_{i_{1}\ldots i_{p-\ell}}$ are centered Gaussian random variables with variance equal to $\binom{p}{\ell}$: $J_{i_{1}\ldots i_{p-\ell}}$ is the sum of the $g_{i_{1} \ldots i_{p}}$ where the index $1$ appears exactly $\ell$ times. Computing the variances of these three Gaussian processes gives us the first three statements of the Lemma. For the last two statements, note that for any $\sigma \in \Sigma_{N-1}$, $r_{N}(1,\sigma)-r_{N}(-1,\sigma)$ is a centered Gaussian random variable with variance equal to $$ \frac{4}{N^{p-1}} \sum_{\ell = 3, \:\ell \text{ odd}}^{p} \binom{p}{\ell} (N-1)^{p-\ell} \leq \frac{C_{p}}{N^{2}}, $$ for some constant $C_{p}$. (In particular, when $p=2$ the sum is empty and this difference vanishes identically.) A standard application of Borell's inequality and the Sudakov--Fernique inequality \cite{ATbook} gives us the desired result. \end{proof} We now turn to the proof that generic models satisfy the cavity equations. The argument is fairly standard -- see for example \cite[Chapter 3, Theorem 3.6]{PanchSKBook}. \begin{proof}[\emph{\textbf{Proof of \prettyref{prop:cavity-equations}}}] Fix $n$ sites and a $C_{l}$ as in \eqref{eq:cavity-eq}. By site symmetry, we may assume that these are the last $n$ sites.
Our goal is then to show that \begin{equation}\label{eq:hotdog} \E\prod_{l\leq q}\left\langle \prod_{i\in C_{l}}\sigma_{i}\right\rangle =\E\frac{\prod_{l\leq q}\left\langle \prod_{i\in C_{l}}\tanh(g_{\xi',i}(\sigma))\cE_{n}\right\rangle }{\left\langle \cE_{n}\right\rangle ^{q}}+o_{N}(1). \end{equation} Note that by Lemma \ref{lem:decomposition-lemma}, the left side of \eqref{eq:hotdog} is equivalent to \[ \E\frac{\prod_{l\leq q}\left\langle \prod_{i\in C_{l}}\tanh(y_{N,i}(\sigma))\cE_{n,0}\right\rangle _{G'}}{\left\langle \cE_{n,0}\right\rangle _{G'}^{q}}, \] where $G'$ is the Gibbs measure for $\tilde H_{N}$ on $\Sigma_{N-n}$. By a localization and Stone-Weierstrass argument, we see that it suffices to show that \[ \E\prod_{l\leq q}\left\langle \prod_{i\in C_{l}}\tanh(y_{N,i}(\sigma))\cE_{n}\right\rangle _{G'}\left\langle \cE_{n}\right\rangle _{G'}^{k}=\E\prod_{l\leq q}\left\langle \prod_{i\in C_{l}}\tanh(g_{\xi',i}(\sigma))\cE_{n}\right\rangle _{G}\left\langle \cE_{n}\right\rangle _{G}^{k}+o_{N}(1). \] Evidently, this will follow provided the limiting overlap distributions for $\E G'^{\tensor\infty}$ and $\E G^{\tensor\infty}$ are the same. As generic models are known to have a unique limiting overlap distribution (by Lemma 3.6 of \cite{PanchSKBook}), it suffices to show that the overlap distributions for $\tilde H_{N}$ and $H_{N-n}$ are in fact the same. Observe that \begin{align*} \abs{\text{Cov}_{\tilde H_{N}}(\sigma^{1},\sigma^{2})-\text{Cov}_{H_{N-n}}(\sigma^{1},\sigma^{2})} =\abs{N\xi\left(\frac{N-n}{N}R_{12}\right)-(N-n)\xi(R_{12})} \leq C(\xi,n), \end{align*} uniformly for $\sigma^{1},\sigma^{2}\in\Sigma_{N-n}$, so that by a standard interpolation argument (see, e.g., \cite[Theorem 3.6]{PanchSKBook}) we have that the free energies of these two systems are the same in the limit $N\to\infty$.
An explicit differentiation argument (see \cite[Theorem 3.7]{PanchSKBook}) combined with \cite[Theorem 2.13]{PanchSKBook} shows that the overlap distributions are the same. \end{proof} \section{Proofs of representation formulas}\label{sec:proofs-at-infinite} We now turn to the proofs of the results at infinite particle number. Before we can state these results we need to recall certain basic results of Panchenko from the theory of spin distributions \cite{PanchSGSD13,PanchSKBook}. The notation here follows \cite[Chapter 4]{PanchSKBook} (alternatively, see the Appendix below). \subsection{Preliminaries} We begin with the observation that if we apply the cavity equations, \eqref{eq:cavity-eq}, with $n=m$ and $r=0$, we get that \[ \E\prod_{l\leq q}\prod_{i\in C_{l}}s_{i}^{l}=\E\frac{\prod_{l\leq q}\E'\prod_{i\in C_{l}}\tanh(G_{\xi',i}(\bar{\sigma}))\prod_{i\leq n}\cosh(G_{\xi',i}(\bar{\sigma}))}{\left(\E'\prod_{i\leq n}\cosh(G_{\xi',i}(\bar{\sigma}))\right)^{q}}. \] Note that the right-hand side is a function only of the overlap distribution of $\bar{\sigma}$ corresponding to $\nu$. Let the law of $R_{12}$ be denoted by $\zeta$. Suppose that $\zeta$ consists of $r+1$ atoms. Then, since $\mu$ satisfies the Ghirlanda-Guerra identities by assumption, we know that this can also be written as \begin{equation} \E\prod_{l\leq q}\prod_{i\in C_{l}}s_{i}^{l}=\E\frac{\prod_{l\leq q}\sum_{\alpha}w_{\alpha}\prod_{i\in C_{l}}\tanh(g_{\xi',i}(h_{\alpha}))\prod_{i\leq n}\cosh(g_{\xi',i}(h_{\alpha}))}{\left(\sum_{\alpha} w_{\alpha}\prod_{i\leq n}\cosh(g_{\xi',i}(h_{\alpha}))\right)^{q}}.\label{eq:spins-RPC-formula} \end{equation} Here, $(w_{\alpha})_{\alpha\in\partial\cA_{r}}$ are the weights corresponding to a $RPC(\zeta)$ and $\{h_{\alpha}\}_{\alpha\in\cA_{r}}$ are the corresponding vectors, with $\cA_{r}=\mathbb N^{r}$ viewed as a tree with $r$ levels. For a vertex $\alpha$ of a tree, we denote by $\abs{\alpha}$ the depth of $\alpha$, that is, its (edge or vertex) distance from the root.
We denote by $p(\alpha)$ the set of vertices in the path from the root to $\alpha$. For two vertices $\alpha,\beta$, we let $\alpha\wedge\beta$ denote their least common ancestor, and we say that $\alpha\precsim\beta$ if $\alpha\in p(\beta)$. In particular $\alpha\precsim\alpha$. We say that $\alpha\nsim\beta$ if neither $\alpha\precsim\beta$ nor $\beta\precsim\alpha$. In this setting, it is well known that $g_{\xi'}(h_{\alpha})$ has the following explicit version. Let $(\eta_{\alpha})_{\alpha\in\cA_{r}}$ be i.i.d. standard gaussians; then \[ g_{\xi'}(h_{\alpha})=\sum_{\beta\precsim\alpha}\eta_{\beta}\left(\xi'(q_{\abs{\beta}})-\xi'(q_{\abs{\beta}-1})\right)^{1/2}. \] It was then shown by Panchenko that the above also has the following representation in terms of ``tilted'' variables $\eta'$. We define a family of functions $Z_{p}:\R^{p}\to\R$, $0\leq p\leq r$, recursively as follows. Let \[ Z_{r}(x)=\log\cosh\left(\sum_{i=1}^{r}x_{i}(\xi'(q_{i})-\xi'(q_{i-1}))^{1/2}\right) \] and let \begin{equation} Z_{p}(x)=\frac{1}{\zeta([0,q_{p}])}\log\int\exp\left(\zeta([0,q_{p}])\cdot Z_{p+1}(x,z)\right)d\gamma(z)\label{eq:Z-def} \end{equation} where $d\gamma$ is the standard gaussian measure on $\R$. We then define the transition kernels \[ K_{p}(x,dx_{p+1})=\exp\left(\zeta([0,q_{p}])\left(Z_{p+1}(x,x_{p+1})-Z_{p}(x)\right)\right)d\gamma(x_{p+1}). \] Note that $\int K_{p}(x,dx_{p+1})=1$ by \eqref{eq:Z-def}, so each $K_{p}$ is indeed a Markov kernel. Also, we define $\eta'_{\alpha}$ as the random variable with law $K_{\abs{\alpha}}((\eta_{\beta})_{\beta\precsim\alpha},\cdot)$. Finally, define \[ g'_{\xi'}(h_{\alpha})=\sum_{\beta\precsim\alpha}\eta'_{\beta}\left(\xi'(q_{\abs{\beta}})-\xi'(q_{\abs{\beta}-1})\right)^{1/2}. \] Define $g_{\xi',i}'$ analogously. We then have the following proposition.
\begin{prop} [Panchenko \cite{PanchSGSD13}] Let $w_{\alpha}$ be as above and let \[ w_{\alpha}'=\frac{w_{\alpha}\prod_{i\leq n}\cosh(g_{\xi',i}(h_{\alpha}))}{\sum_{\alpha} w_{\alpha}\prod_{i\leq n}\cosh(g_{\xi',i}(h_{\alpha}))}. \] Then we have \[ \left((w_{\alpha}',g_{\xi',i}(h_{\alpha}))\right)_{\alpha}\eqdist\left((w_{\alpha},g'_{\xi',i}(h_{\alpha}))\right)_{\alpha}. \] \end{prop} If we apply this proposition to \eqref{eq:spins-RPC-formula}, we have that \[ \E\prod_{l\leq q}\prod_{i\in C_{l}}s_{i}^{l}=\E\prod_{l\leq q}\sum_{\alpha}w_{\alpha}\prod_{i\in C_{l}}\tanh(g'_{\xi',i}(h_{\alpha})). \] \subsection{Proof of \prettyref{thm:spins-equivalent}.} We now turn to proving \prettyref{thm:spins-equivalent}. We begin with the following two lemmas. \begin{lem} \label{lem:equiv-dist} Let $h_{\alpha}$, $\eta_{\alpha}$, $g_{\xi'},g_{\xi'}'$ be as above. We then have the following equalities in distribution \begin{align*} \left(g(h_{\alpha})\right)_{\alpha} & \eqdist\left(B_{q_{*}}(h_{\alpha})\right)_{\alpha}\\ \left(g_{\xi'}(h_{\alpha})\right)_{\alpha} & \eqdist\left(Y_{q_{*}}(h_{\alpha})\right)_{\alpha}\\ \left(g_{\xi'}'(h_{\alpha})\right)_{\alpha} & \eqdist\left(X_{q_{*}}(h_{\alpha})\right)_{\alpha}. \end{align*} \end{lem} \begin{proof} Observe that by the independent increments property of Brownian motion, we have that \[ \left(\eta_{\alpha}\left(q_{\abs{\alpha}}-q_{\abs{\alpha}-1}\right)^{1/2}\right)_{\alpha}\eqdist\left(B_{q_{\abs{\alpha}}}(h_{\alpha})-B_{q_{\abs{\alpha}-1}}(h_{\alpha})\right)_{\alpha}. \] This yields the first two equalities. It remains to see the last equality. To this end, fix $h_{\alpha}$, and consider the process $X_{t}$ that solves the SDE \prettyref{eq:local-field-process}. If $Y_{t}$ is distributed like $Y$ as above with respect to some measure $Q$, then by Girsanov's theorem \cite[Lemma 8.3.1]{JagTobPD15}, with respect to the measure $P$ with Radon-Nikodym derivative \[ \frac{dP}{dQ}\bigg\vert_{\cF_{t}}=\exp\left(\int_{0}^{t}\zeta\left([0,s]\right)du(s,Y_{s})\right), \] the process $Y_{t}$ has the same law as $X_{t}$.
In particular, for the finite collection of times $q_{i}$ we have that \begin{align*} \E_{P}F(X_{q_{0}},\ldots,X_{q_{r}}) & =\int F(Y_{q_{0}},\ldots,Y_{q_{r}})e^{\int_{0}^{q_{r}}\zeta([0,s])du(s,Y_{s})}dQ(Y)\\ & =\int F(Y_{q_{0}},\ldots,Y_{q_{r}})\prod_{k=1}^{r}e^{\zeta\left([0,q_{k-1}]\right)\left(u(q_{k},Y_{q_{k}})-u(q_{k-1},Y_{q_{k-1}})\right)}dQ. \end{align*} By recognizing the law of $(B_{q_{k}})$ and $(Y_{q_{k}})$ as Gaussian random variables, and \prettyref{eq:Z-def} as the Cole--Hopf solution of the Parisi IVP \prettyref{eq:ParisiIVP}, $u(q_{k},x)=Z_{k}(x)$, the result follows. \end{proof} We now need the following continuity theorem. This is intimately related to continuity results commonly used in the literature, though the method of proof is different. Let $\mathcal{Q}_{d}$ denote the set of $d\times d$ matrices of the form \[ \mathcal{Q}_{d}=\{(q_{ij})_{i,j\in[d]}:\:q_{ij}\in[0,1],\:q_{ij}=q_{ji},\:q_{ij}\geq q_{ik}\wedge q_{kj}\:\forall i,j,k\}. \] Note that this set is a compact subset of $\R^{d^{2}}$. Consider the space $\Pr([0,1])$ equipped with the weak-{*} topology. Then the product space $\Pr([0,1])\times\mathcal{Q}_{d}$ is compact Polish. For any $Q\in\cQ_{d}$, let $(\sigma^{i}(Q))_{i=1}^{d}\subset\cH$ be a collection of vectors whose Gram matrix is $Q$. We can then define the functional \[ \mathcal{R}(\zeta,Q)=\E\prod_{i=1}^{d}u_{x}(q_{*},X_{q_{*}}(\sigma^{i})). \] \begin{lem} \label{lem:R-continuous}We have that $\mathcal{R}$ is well-defined and is jointly continuous. \end{lem} \begin{proof} Let $(\sigma^{i})_{i=1}^{d}$ be any collection with overlap matrix $Q$. Recall the infinitesimal generator, $L^{lf}$, of the collection $\left(X_{t}(\sigma^{i})\right)$ from \prettyref{eq:inf-gen-loc-field}. Observe that $L^{lf}$ depends on $(\sigma^{i})$ only through their overlap matrix, which is $Q$. Thus the law is determined by this matrix and $\cR$ is well-defined. We now turn to proving continuity.
As $\Pr([0,1])\times\mathcal{Q}_{d}$ is compact Polish, it suffices to show that for $\zeta_{r}\to\zeta$ and $Q^{r}=(q_{ij}^{r})$ with $q_{ij}^{r}\to q_{ij}$, $1\leq i,j\leq d$, \[ \mathcal{R}(\zeta_{r},Q^{r})\to\mathcal{R}(\zeta,Q), \] as $r\to\infty.$ Let $a_{ij}^{r}$ and $b_{i}^{r}$ be the coefficients of the diffusion associated to the local field process $X^{\zeta_{r},Q^{r}}$. By \prettyref{eq:inf-gen-loc-field}, we have \[ a_{ij}^{r}(t)=\mathbf{1}_{\{t\leq q_{ij}^{r}\}},\quad b_{i}^{r}(t,\cdot)=\xi''(t)\zeta_{r}([0,t])u_{x}^{r}(t,\cdot), \] where $u^{r}$ is the solution to the Parisi initial value problem corresponding to $\zeta_{r}$. These coefficients are all uniformly bounded, measurable in time and smooth in space. Furthermore, $\xi''$ is bounded on $[0,1]$, so that \begin{align} \int_{0}^{t}\big(|a_{ij}^{r}(s)-a_{ij}(s)|+\sup_{x}&|b_{i}^{r}(s,x)-b_{i}(s,x)|\big)ds\nonumber \\ & \leq|q_{ij}^{r}-q_{ij}|+C_{\xi}\int_{0}^{t}\sup_{x}|\zeta_{r}\left([0,s]\right)u_{x}^{r}(s,x)-\zeta\left([0,s]\right)u_{x}(s,x)|ds\to0\label{eq:SVestimate} \end{align} as $r\to\infty$ since $u_{x}^{r}$ converges uniformly to $u_{x}$ by \cite[Prop. 1]{AuffChenPM15} as $\zeta_{r}\to\zeta$. By the Stroock--Varadhan theorem \cite[Theorem 11.1.4]{stroock1979multidimensional}, the convergence from \prettyref{eq:SVestimate} implies that the laws of the solutions to the corresponding martingale problems converge. As $(x_{1},\ldots,x_{d})\mapsto\prod_{i=1}^{d}\tanh(x_{i})$ is a continuous bounded function we obtain the continuity of $\cR$. \end{proof} We may now turn to the proof of the main theorem of this section. \begin{proof}[\textbf{\emph{Proof of \prettyref{thm:spins-equivalent}}}] Suppose first that $\zeta$ consists of $r+1$ atoms. In this setting the result has already been proved by the aforementioned results of Panchenko combined with \prettyref{lem:equiv-dist}. The main task is to prove these results for general $\zeta$. To this end, let $\zeta_{r}\to\zeta$ be atomic.
Denote the spins corresponding to these measures by $s_{i,r}^{l}$. Correspondingly, for any collection of moments we have \[ \E\prod_{l\leq q}\prod_{i\in C_{l}}s_{i,r}^{l}=\E\left\langle \cR(\zeta_{r},Q)\right\rangle _{r}. \] Recall that the overlap distribution converges in law when $\zeta_{r}\to\zeta$, thus by \prettyref{lem:R-continuous} and a standard argument, \[ \E\left\langle \cR(\zeta_{r},Q)\right\rangle _{r}\to\E\left\langle \cR(\zeta,Q)\right\rangle =\E\left\langle \prod_{l\leq q}\prod_{i\in C_{l}}\tanh(X_{q_{*}}^{i}(\sigma^{l}))\right\rangle . \] However, as the overlap distribution determines the spin distribution, we see that \[ \E\prod_{l\leq q}\prod_{i\in C_{l}}s_{i,r}^{l}\to\E\prod_{l\leq q}\prod_{i\in C_{l}}s_{i}^{l}=\E\frac{\left\langle \prod_{l\leq q}\prod_{i\in C_{l}}\tanh(Y_{q_{*}}^{i}(\sigma^{l}))\prod_{i\leq n}\cosh(Y_{q_{*}}^{i}(\sigma^{l}))\right\rangle }{\left\langle \prod_{i\leq n}\cosh(Y_{q_{*}}^{i}(\sigma^{l}))\right\rangle ^{q}}. \] This yields both results. \end{proof} \section{Proof of Theorem \ref{thm:multi-TAP}} We now prove that the TAP equation holds at infinite particle number. Before stating this proof we point out two well-known \cite{AuffChenSC15,JagTobPD15} but useful facts: the magnetization process $u_x(s,X_s(\sigma))$ is a martingale for fixed $\sigma$, and $u_x(t,x)=\tanh(x)$ for $t\geq q_*=\sup\supp\zeta$. \begin{proof}[\textbf{\emph{Proof of \prettyref{thm:multi-TAP}}}] Consider $\left\langle s\right\rangle _{\alpha}$. If we compute the joint moments of this quantity, we find \[ \left\langle s\right\rangle _{\alpha}^{k}=u_{x}(q,X_{q}^{\sigma})^{k} \] for any $\sigma\in W_{\alpha}$. In fact, jointly, \[ \prod_{\alpha\in A}\left\langle s\right\rangle _{\alpha}^{k_{\alpha}}=\prod_{\alpha\in A}u_{x}(q,X_{q}^{\sigma^{\alpha}})^{k_{\alpha}} \] for $\abs{A}<\infty$. Thus in law, \begin{equation}\label{eq:hamburger} \left\langle s\right\rangle _{\alpha}=u_{x}(q,X_{q}^{\sigma^{\alpha}}).
\end{equation} By a similar argument \[ \left\langle y\right\rangle _{\alpha}=\E\left(X_{1}^{\sigma^{\alpha}}\vert\cF_{q}\right), \] where $\cF_{q}$ is the sigma algebra $\sigma\left((B_{t}(\sigma))_{t\leq q,\,\sigma\in\supp\mu}\right).$ However, \begin{align*} \E\left(X_{1}^{\sigma^{\alpha}}\vert \cF_{q}\right) & =X_{q}^{\sigma^{\alpha}}+\int_{q}^{1}\xi''(s)\zeta([0,s])\E\left( u_{x}(s,X_{s}^{\sigma^{\alpha}})\vert\cF_{q}\right)ds\\ & =X_{q}^{\sigma^{\alpha}}+\int_{q}^{1}\xi''(s)\zeta([0,s])ds\cdot u_{x}(q,X_{q}^{\sigma^{\alpha}})\\ & =X_{q}^{\sigma^{\alpha}}+\int_{q}^{1}\xi''(s)\zeta([0,s])ds\cdot\left\langle s\right\rangle _{\alpha}, \end{align*} where the first line follows from the definition \eqref{eq:local-field-process} of $X^{\sigma}$, and the second line follows from the martingale property of the magnetization process. Solving this for $X_{q}^{\sigma^{\alpha}}$ yields \[ X_{q}^{\sigma^{\alpha}}=\left\langle y\right\rangle _{\alpha}-\int_{q}^{1}\xi''(s)\zeta([0,s])ds\cdot\left\langle s\right\rangle _{\alpha}. \] Combining this with \eqref{eq:hamburger} yields the result. \end{proof} \section{Evaluation of spin statistics\label{sec:Examples}} Using spin distributions, one can obtain formulae for expectations of products of spins, either through the directing function $\sigma$ or by taking limits of expressions using Ruelle cascades. The goal of this section is to explain how one can obtain expressions for such statistics as the solutions of certain partial differential equations. The input required will be the overlap distribution $\zeta$. In particular, one can in principle evaluate these expressions using standard methods from PDEs or numerically. Rather than developing a complete calculus of spin statistics, we aim to give a few illustrative examples.
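As a simple warm-up, consider the one spin statistic $\E s_{1}^{1}$. By \prettyref{eq:loc-av-data-x} and the martingale property of the magnetization process,
\[
\E s_{1}^{1}=\E\int u_{x}(q_{*},X_{q_{*}}(\sigma))d\mu(\sigma)=u_{x}(0,h),
\]
so the mean spin is obtained by running the spatial derivative of the Parisi PDE down to time $0$. The examples below are higher order analogues of this observation.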
At the heart of these calculations is the following key observation: the magnetization process for any finite collection $(\sigma^{i})_{i=1}^{n}$ is a family of branching martingales whose independence properties mimic those of the tree encoding of their overlap arrays. (This can be formalized using the language of branchingales; see \cite{Aus15} for more on this.) In this section we focus on two examples: two spin statistics, i.e., the overlap, and three spin statistics. One can of course write out general formulas; however, we believe that these two cases highlight the key ideas. In particular, the second case is the main example in \cite{MV85}, where this is calculated using replica theory. The reader is encouraged to compare the PDE and martingale based discussion here with the notion of ``tree operators'' in that paper. For the remainder of this subsection, all state measures should be taken with boundary data $f(\sigma)=X_{q_{*}}(\sigma)$. \subsection{Two Spin Statistics} We first aim to study two spin statistics. As the spins take values $\pm1$, there is only one nontrivial two spin statistic, namely $\E s_{1}^{1}s_{1}^{2}$, where the subscript denotes the site index and the superscript denotes the replica index. Observe that by \prettyref{eq:loc-av-data-x}, we have that \begin{align*} \E s_{1}^{1}s_{1}^{2} & =\E\int\left\langle s\right\rangle _{\sigma^{1}}\cdot\left\langle s\right\rangle _{\sigma^{2}}d\mu^{\tensor2} =\E\int\E\prod_{i=1}^{2}u_{x}(q_{*},X_{q_{*}}^{\sigma^{i}})d\mu^{\tensor2}. \end{align*} Thus it suffices to compute $\E u_{x}(q_{*},X_{q_{*}}^{\sigma^{1}})u_{x}(q_{*},X_{q_{*}}^{\sigma^{2}})$. There are a few natural ways to compute this. Let $q_{12}=(\sigma^{1},\sigma^{2})$.
One method is to observe that if $\Phi=\Phi_{q_{12}}$ solves \[ \begin{cases} (\partial_{t}+L_{t}^{lf})\Phi=0 & [0,1]\times\R^{2}\\ \Phi(1,x,y)=\tanh(x)\tanh(y), \end{cases} \] where $L^{lf}$ is the infinitesimal generator for the local field process (see \prettyref{eq:inf-gen-loc-field}), then \[ \E u_{x}(q_{*},X_{q_{*}}^{\sigma^{1}})u_{x}(q_{*},X_{q_{*}}^{\sigma^{2}})=\Phi_{q_{12}}(0,h). \] One can study this problem using PDE methods or It\^o's lemma. This yields the expression \[ \E s_{1}^{1}s_{1}^{2}=\int\Phi_{s}(0,h)d\zeta(s). \] Alternatively, note that, by the branching martingale property of the magnetization process, we have that \[ \E u_{x}(q_{*},X_{q_{*}}^{\sigma^{1}})u_{x}(q_{*},X_{q_{*}}^{\sigma^{2}})=\E u_{x}(q_{12},X_{q_{12}})^{2}, \] yielding the alternative expression \[ \E s_{1}^{1}s_{1}^{2}=\int\E u_{x}^{2}(s,X_{s})d\zeta(s). \] In the case that $\zeta$ is the Parisi measure for $\xi$ (for this notation see \cite{JagTobPD15,AuffChenPM15}), it is well-known that on the support of $\zeta$, \[ \E u_{x}^{2}(s,X_{s})=s, \] so that \[ \E s_{1}^{1}s_{1}^{2}=\int s\,d\zeta(s). \] (In particular, in the replica symmetric case $\zeta=\delta_{q}$ this is simply $q$.) This resolves a question from \cite[Remark 5.5]{BoltSznit98}. \subsection{Three spin statistics.} We now turn to computing more complicated statistics. We focus on the case of the three spin statistic, $\E s_{1}^{1}s_{1}^{2}s_{1}^{3}$, as we believe this to be illustrative of the essential ideas and it is the main example given in the paper of M\'ezard-Virasoro \cite{MV85}. We say a function $f:[0,1]^{k}\to\R$ is \emph{symmetric} if for every $\pi\in S_{k}$, \[ f(x_{\pi(1)},\ldots,x_{\pi(k)})=f(x_{1},\ldots,x_{k}). \] We say that such a function has \emph{vanishing diagonal} if $f(x,\ldots,x)=0$. In the following, we denote by $dQ(R^{n})$ the law of the overlap array $R^{n}=(R_{ij})_{i,j\in[n]}$. We will always assume that $Q$ satisfies the Ghirlanda-Guerra identities.
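To fix ideas, here is a simple instance of these definitions; the particular function below is only an illustration and plays no role in the sequel. The function
\[
f(x,y,z)=\abs{x-y}+\abs{y-z}+\abs{x-z}
\]
is continuous and symmetric with vanishing diagonal, while a nonzero constant function is symmetric but does not have vanishing diagonal.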
Our goal is to prove the following: \begin{thm} \label{thm:three-spin-calc}We have that \begin{align*} \E s_{1}^{1}s_{1}^{2}s_{1}^{3} & =\frac{3}{4}\int\int\E u_{x}(b\vee a,X_{b\vee a})^{2}u_{x}(a\wedge b,X_{a\wedge b})d\zeta(a)d\zeta(b). \end{align*} \end{thm} As a starting point, again observe that from the properties of state measures, \prettyref{eq:pure-state}, \begin{align*} \E s_{1}^{1}s_{1}^{2}s_{1}^{3} & =\E\int\left\langle s\right\rangle _{\sigma^{1}}\cdot\left\langle s\right\rangle _{\sigma^{2}}\cdot\left\langle s\right\rangle _{\sigma^{3}}d\mu^{\tensor3}. \end{align*} Denote the integrand by \[ \cR(\sigma^{1},\sigma^{2},\sigma^{3})=\left\langle s\right\rangle _{\sigma^{1}}\cdot\left\langle s\right\rangle _{\sigma^{2}}\cdot\left\langle s\right\rangle _{\sigma^{3}}. \] The proof of this result will follow from the following two lemmas. \begin{lem} \label{lem:obvious}We have the following. \begin{enumerate} \item Suppose that $g(x,y)$ is a continuous, symmetric function. Then \[ \int g(R_{12},R_{13})dQ=\frac{1}{2}\int\int g(x,y)d\zeta(x)d\zeta(y)+\frac{1}{2}\int g(x,x)d\zeta(x). \] \item Suppose that $f(x,y,z)$ is a continuous symmetric function with vanishing diagonal. Then \[ \int f(R_{12},R_{13},R_{23})dQ=\frac{3}{2}\int f(R_{12}\vee R_{13},R_{12}\wedge R_{13},R_{12}\wedge R_{13})dQ. \] \item Suppose that $f(x,y,z)$ is as above and such that $h(x,y)=f(x\vee y,x\wedge y,x\wedge y)$ is continuous. Then \[ \int f(R_{12},R_{13},R_{23})dQ=\frac{3}{4}\int\int h(x,y)d\zeta(x)d\zeta(y). \] \end{enumerate} \end{lem} \begin{proof} The first claim follows immediately from the Ghirlanda--Guerra identities. The last item is implied by the first two, since $h$ vanishes on the diagonal. It remains to prove the second claim.
By symmetry of $f$ and ultrametricity we have that \begin{align*} \int f(R_{12},R_{13},R_{23})dQ & =3\int_{R_{12}>R_{13}}f(R_{12},R_{13},R_{13})dQ+\int_{R_{12}=R_{13}=R_{23}}f(R_{12},R_{12},R_{12})dQ. \end{align*} The second term is zero by the vanishing diagonal property of $f$, so that \[ RHS=3\int_{R_{12}\geq R_{13}}f(R_{12},R_{13},R_{13})dQ=\frac{3}{2}\int h(R_{12},R_{13})dQ, \] using again the vanishing diagonal property and the definition of $h$. \end{proof} \begin{lem} \label{lem:Three-spin-function-lem}There is a continuous, symmetric function $f$ of three variables, defined on the set of ultrametric triples in $[0,1]^{3}$, such that $\cR(\sigma^{1},\sigma^{2},\sigma^{3})=f(R_{12},R_{13},R_{23})$. This function has vanishing diagonal, and satisfies \begin{equation} f(b,a,a)=\E u_{x}(b,X_{b})^{2}u_{x}(a,X_{a})\label{eq:three-spin-function} \end{equation} for $a\leq b$. \end{lem} \begin{rem} This is to be compared with \cite[Eq. 34]{MV85}. \end{rem} \begin{proof} That it is a continuous, symmetric function of the overlaps is obvious. It suffices to show \prettyref{eq:three-spin-function}. To this end, observe that without loss of generality $R_{12}\geq R_{13}=R_{23}$. In this case, denoting $R_{12}=b$ and $R_{23}=R_{13}=a$, we have that \begin{align*} \cR(\sigma^{1},\sigma^{2},\sigma^{3}) & =\E u_{x}(1,X_{1}^{\sigma^{1}})u_{x}(1,X_{1}^{\sigma^{2}})u_{x}(1,X_{1}^{\sigma^{3}})\\ & =\E u_{x}(b,X_{b}^{1})u_{x}(b,X_{b}^{2})u_{x}(b,X_{b}^{3})\\ & =\E u_{x}(b,X_{b}^{1})^{2}u_{x}(b,X_{b}^{3})\\ & =\E u_{x}(b,X_{b}^{1})^{2}u_{x}(a,X_{a}^{3})\\ & =\E u_{x}(b,X_{b})^{2}u_{x}(a,X_{a}). \end{align*} In the second line, we used independence and the martingale property. In the third line we used that the driving processes are identical until that time. In the fourth line we use the martingale property and independence of local fields again. The final equality comes from the fact that the driving processes for the three replicas coincide up to time $a$.
\end{proof} We can now prove the main result of this subsection: \begin{proof}[\textbf{\emph{Proof of \prettyref{thm:three-spin-calc}}}] Recall that \[ \E s_{1}^{1}s_{1}^{2}s_{1}^{3}=\E\left\langle \E\cR(\sigma^{1},\sigma^{2},\sigma^{3})\right\rangle . \] The result then follows by combining \prettyref{lem:Three-spin-function-lem} and part 3 of \prettyref{lem:obvious}. \end{proof} \appendices \section{Appendix\label{sec:Appendix}} \subsection{On the driving process and its descendants}\label{app:driving} We record here the following basic properties of the driving process, cavity field process, local field process, and magnetization process. \begin{lem} Let $U$ be a positive ultrametric subset of a separable Hilbert space that is weakly closed and norm bounded, equipped with the restriction of the Borel sigma algebra. Let $B_{t}(\sigma)$ be the process defined in \eqref{eq:Bromw}. We have the following: \begin{enumerate} \item The covariance structure is positive semi-definite. \item There is a version of this process that is jointly measurable and continuous in time. \item For each $\sigma,$ $B_{t}(\sigma)$ has the law of a Brownian motion, so that stochastic integration with respect to $B_{t}(\sigma)$ is well-defined. \end{enumerate} \end{lem} \begin{proof} We begin with the first. To see this, simply observe that if $\alpha_{i}\in\R$, $(t_{i},\sigma_{i})$ are finitely many points in $[0,q_{*}]\times U$ and $\sigma_{*}\in U$, then \begin{align*} \sum\alpha_{i}\alpha_{j}\left(t_{i}\wedge t_{j}\wedge(\sigma_{i},\sigma_{j})\right) & =\sum\alpha_{i}\alpha_{j}\int\indicator{s\leq t_{i}}\indicator{s\leq t_{j}}\indicator{s\leq(\sigma_{i},\sigma_{j})}ds\\ & \geq\sum\alpha_{i}\alpha_{j}\int\indicator{s\leq t_{i}}\indicator{s\leq t_{j}}\indicator{s\leq(\sigma_{i},\sigma_{*})}\indicator{s\leq(\sigma_{j},\sigma_{*})}ds\\ & =\norm{\sum\alpha_{i}\indicator{s\leq t_{i}\wedge(\sigma_{i},\sigma_{*})}}_{L^{2}}^{2}\geq0. \end{align*} We now turn to the second.
Observe first that, since $[0,q_{*}]\times U$ is separable and $\R$ is locally compact, $B_{t}(\sigma)$ has a separable version. Furthermore, observe that $B_{t}(\sigma)$ is stochastically continuous in norm, that is, as $(t,\sigma)\to(t_{0},\sigma_{0})$ in the norm topology, $P(\abs{B_{t}(\sigma)-B_{t_{0}}(\sigma_{0})}>\epsilon)\to0$. Since $U$ is weakly closed and norm bounded, it is compact in the weak topology, so the process has a version that is jointly measurable by \cite[Theorem IV.4.1]{GikhmanSkorokhodVol1}. Note then, since the covariance of $B_{t}(\sigma)$ for fixed $\sigma$ is that of Brownian motion and $B_{t}(\sigma)$ is separable, it is in fact continuous by \cite[Theorem IV.5.2]{GikhmanSkorokhodVol1}. The third property was implicit in the proof of the second. \end{proof} We now observe the following consequence of the above proposition: \begin{cor} Let $U$ be a positive ultrametric subset of a separable Hilbert space that is weakly closed and norm bounded. Then the cavity field process, $Y_{t}(\sigma)$, the local field process, $X_{t}(\sigma)$, and the magnetization process, $M_{t}(\sigma)$, exist, are continuous in time, and are Borel measurable in $\sigma$. \end{cor} The following observation regarding the infinitesimal generators of these processes will also be of interest. \begin{lem} Let $(\sigma^{i})_{i=1}^{n}\subset U$ where $U$ is as above. Then we have the following. \begin{enumerate} \item The driving process satisfies the bracket relation \[ \left\langle B(\sigma^{1}),B(\sigma^{2})\right\rangle _{t}=\begin{cases} t & t\leq(\sigma^{1},\sigma^{2})\\ (\sigma^{1},\sigma^{2}) & t>(\sigma^{1},\sigma^{2}) \end{cases}. \] \item The cavity field process satisfies the bracket relation \[ \left\langle Y(\sigma^{1}),Y(\sigma^{2})\right\rangle _{t}=\begin{cases} \xi'(t) & t\leq(\sigma^{1},\sigma^{2})\\ \xi'((\sigma^{1},\sigma^{2})) & else \end{cases}.
\] \item The local field process satisfies the bracket relation \[ \left\langle X(\sigma^{1}),X(\sigma^{2})\right\rangle _{t}=\begin{cases} \xi'(t) & t\leq(\sigma^{1},\sigma^{2})\\ 0 & else \end{cases} \] and has infinitesimal generator \begin{equation} L_{t}^{lf}=\frac{\xi''(t)}{2}\left(\sum a_{ij}(t)\partial_{i}\partial_{j}+2\sum b_{i}(t,x)\partial_{i}\right)\label{eq:inf-gen-loc-field} \end{equation} where $a_{ij}(t)=\indicator{t\leq(\sigma^{i},\sigma^{j})}$ and $b_{i}(t,x)=\zeta([0,t])\cdot u_{x}(t,x).$ \end{enumerate} \end{lem} \begin{proof} We begin with the first claim. To see this, observe that by construction, \[ B_{t}(\sigma^{1})=B_{t}(\sigma^{2}) \] for $t\leq(\sigma^{1},\sigma^{2})$, thus the bracket above is just the bracket for Brownian motion. If $t>q:=(\sigma^{1},\sigma^{2})$, then the increments $B_{t}(\sigma^{1})-B_{q}(\sigma^{1})$ and $B_{t}(\sigma^{2})-B_{q}(\sigma^{2})$ are independent Brownian motions. This yields the second regime. By elementary properties of It\^o processes, we obtain the brackets for $Y_{t}$ and $X_{t}$ from this argument. It remains to obtain the infinitesimal generator for the local field process. To this end, observe that if $f=f(t,x_{1},\ldots,x_{n})$ is a test function, then It\^o's lemma applied to the process $(X_{t}(\sigma^{i}))_{i=1}^{n}$ yields \begin{align*} df & =\partial_{t}f\cdot dt+\sum_{i}\partial_{x_{i}}f\cdot dX_{t}(\sigma^{i})+\frac{1}{2}\cdot\sum\partial_{x_{i}}\partial_{x_{j}}f\cdot d\left\langle X_{t}(\sigma^{i}),X_{t}(\sigma^{j})\right\rangle \\ & =\left(\partial_{t}f+\sum_{i}\partial_{x_{i}}f\cdot\xi''(t)\zeta([0,t])u_{x}(t,X_{t}(\sigma^{i}))+\frac{\xi''(t)}{2}\sum\indicator{t\leq(\sigma^{i},\sigma^{j})}\partial_{x_{i}}\partial_{x_{j}}f\right)dt+dMart \end{align*} where $dMart$ is the increment of some martingale. Taking expectations and limits in the usual fashion then yields the result. 
\end{proof} \subsection{The Cavity Equations and Ghirlanda-Guerra Identities}\label{app:cavity} In this section, we recall some definitions for completeness. For a textbook presentation, see \cite[Chapters 2 and 4]{PanchSKBook}. Let $\cM$ be the set of all measures on the set $\{-1,1\}^{\N\times\N}$ that are exchangeable, that is, if $(s_i^\ell)$ has law $\nu\in\cM$, then \[ (s_{\pi(i)}^{\rho(\ell)})\eqdist (s_{i}^{\ell}) \] for any permutations $\pi,\rho$ of the natural numbers. The Aldous-Hoover theorem \cite{AldExch83,Hov82} states that if $(s_{i}^{\ell})$ is the random variable induced by some measure $\nu\in\cM$, then there is a measurable function of four variables, $\sigma(w,u,v,x)$, such that \[ (s_{i}^{\ell})\eqdist(\sigma(w,u_{\ell},v_{i},x_{\ell i})) \] where $w,u_{\ell},v_{i},x_{\ell i}$ are i.i.d. uniform $[0,1]$ random variables. We call this function a \emph{directing function} for $\nu$. The variables $s_{i}^{\ell}$ are called the spins sampled from $\nu$. For any $\nu$ in $\cM$ with directing function $\sigma$, let $\bar{\sigma}(w,u,v)=\int\sigma(w,u,v,x)dx$. Note that since $\sigma$ is $\{\pm1\}$-valued, this encodes all of the information of $\sigma(w,u,v,\cdot)$. Define the measure $\mu$ on the Hilbert space $\cH=L^{2}([0,1],dv)$ by the push-forward of $du$ through the map $u\mapsto\bar{\sigma}(w,u,\cdot)$, \[ \mu=(u\mapsto\bar{\sigma}(w,u,\cdot))_{*}du. \] The measure $\mu$ is called the asymptotic Gibbs measure corresponding to $\nu$. 
A measure $\nu$ in $\cM$ is said to satisfy the Ghirlanda-Guerra identities if the law of the overlap array satisfies the following property: for every $f\in C([-1,1]^{n})$ and $g\in C([-1,1])$, we have \begin{equation} \E\left\langle f(R^{n})\cdot g(R_{1,n+1})\right\rangle =\frac{1}{n}\left[\E\left\langle f(R^{n})\right\rangle \cdot\E\left\langle g(R_{12})\right\rangle +\sum_{k=2}^{n}\E\left\langle f(R^{n})\cdot g(R_{1k})\right\rangle \right],\label{eq:GGI} \end{equation} where by the bracket, $\left\langle \cdot\right\rangle $, we mean integration against the relevant products of $\mu$ with itself. A measure $\nu$ is said to satisfy the cavity equations if the following is true. Fix the directing function $\sigma$ and $\bar{\sigma}$ as above. Let $g_{\xi'}(\bar{\sigma})$ denote the centered Gaussian process indexed by $L^2([0,1],dv)$ with covariance \[ \E \bigg[g_{\xi'}\big(\bar{\sigma}(w,u,\cdot)\big)g_{\xi'}\big(\bar{\sigma}(w,u',\cdot)\big)\bigg] = \xi'\bigg(\int\bar{\sigma}(w,u,v)\bar{\sigma}(w,u',v)dv\bigg) \] and let $G_{\xi'}(\bar{\sigma})=g_{\xi'}(\bar{\sigma})+z(\xi'(1)-\xi'(\norm{\bar{\sigma}(w,u,\cdot)}_{L^2(dv)}^2))^{1/2}$. Let $g_{\xi',i}$ and $G_{\xi',i}$ be independent copies of these processes. Let $n,m,q,r,l\geq1$ be such that $n\leq m$ and $l\leq q$. Let $C_l \subset [m]$ and let $C_l^1=C_l\cap[n]$ and $C^2_l=C_l\cap(n+[m])$. Let \[ U_{l}=\int\E'\prod_{i\in C_{l}^{1}}\tanh G_{\xi',i}(\bar{\sigma}(w,u,\cdot))\prod_{i\in C_{l}^{2}}\bar{\sigma}_{i}\cE_{n,r}du \] where $\E'$ is expectation in $z$, $\bar{\sigma}_{i}=\bar{\sigma}(w,u,v_{i})$, $\theta(t)=\xi'(t)t-\xi(t)$, and where \[ \cE_{n,r}=\exp\left(\sum_{i\leq n}\log\cosh(G_{\xi',i}(\bar{\sigma}(w,u,\cdot)))+\sum_{k\leq r}G_{\theta,k}(\bar{\sigma}(w,u,\cdot))\right). \] Let $V=\E'\cE_{n,r}$. 
The \emph{cavity equations} for $n,m,q,r\geq1$ are then given by \begin{equation} \E\prod_{l\leq q}\E'\prod_{i\in C_{l}}\bar{\sigma}_{i}=\E\frac{\prod_{l\leq q}U_{l}}{V^{q}}.\label{eq:cavity-eq} \end{equation} \bibliographystyle{plain} \bibliography{localfields} \end{document}
Today, the 1 In 3 Campaign is hosting a six-hour livestream dedicated to promoting abortion, to celebrate the 43rd Anniversary of Roe v.... Live Action News publishes pro-life news and commentary from a pro-life perspective.
TITLE: Generating function: Solving a recursive equation QUESTION [0 upvotes]: I've got the following recursive equation to solve: $$a_n = \begin{cases} 5, & n = 0 \\ 3, & n = 1 \\ 5n^2 - 6n - 4 + a_{n-1}, & n \ge 2 \end{cases}$$ I also found the resulting generating function (I think that should be correct): $$F(x) = x^2F(x) + \frac{5(x+1)}{(1-x)^3} - \frac{6}{(1-x)^2} + \frac{2}{1-x} +8x + 9$$ Edit: Which of course would give me: $$F(x) = \left(\frac{1}{1-x^2}\right)\times \left( \frac{5(x+1)}{(1-x)^3} - \frac{6}{(1-x)^2} + \frac{2}{1-x} +8x + 9\right)$$ How should I go forward? Any help is greatly appreciated! REPLY [0 votes]: You ask for a generating function. So define $A(z) = \sum_{n \ge 0} a_n z^n$, shift your recurrence (no subtraction in indices is nicer). Then multiply by $z^n$ and sum over $n \ge 0$, recognize some sums: $\begin{align*} a_{n + 1} &= a_n + 5 n^2 + 4 n - 5 \\ \sum_{n \ge 0} a_{n + 1} z^n &= \sum_{n \ge 0} a_n z^n + 5 \sum_{n \ge 0} n^2 z^n + 4 \sum_{n \ge 0} n z^n - 5 \sum_{n \ge 0} z^n \\ \frac{A(z) - a_0}{z} &= A(z) + 5 \frac{z + z^2}{(1 - z)^3} + 4 \frac{z}{(1 - z)^2} - 5 \frac{1}{1 - z} \end{align*}$ The sums at the end are the geometric sum and use of the observation: $\begin{align*} \sum_{n \ge 0} n u_n z^n &= z \frac{d}{d z} \sum_{n \ge 0} u_n z^n \end{align*}$ With the initial value this can be solved for $A(z)$: $\begin{align*} A(z) &= \frac{5 - 20 z + 34 z^2 - 9 z^3}{(1 - z)^4} \end{align*}$ Using the generalized binomial theorem: $\begin{align*} (1 - u)^{-m} &= \sum_{n \ge 0} (-1)^n \binom{-m}{n} u^n \\ &= \sum_{n \ge 0} \binom{n + m - 1}{m - 1} u^n \end{align*}$ we get the coefficients, for $n \ge 3$: $\begin{align*} [z^n] A(z) &= (5 [z^n] - 20 [z^{n - 1}] + 34 [z^{n - 2}] - 9 [z^{n - 3}]) (1 - z)^{-4} \\ &= 5 \binom{n + 4 - 1}{4 - 1} - 20 \binom{n - 1 + 4 - 1}{4 - 1} + 34 \binom{n - 2 + 4 - 1}{4 - 1} - 9 \binom{n - 3 + 4 - 1}{4 - 1} \\ &= \frac{10 n^3 - 3 n^2 - 37 n + 30}{6} \end{align*}$ For the first values note that the coefficients of negative powers of $z$ are 0. Or ask your friendly CAS to expand $A(z)$ by Maclaurin series, or use the recurrence: $a_0 = 5, a_1 = 0, a_2 = 4, a_3 = 27, a_4 = 79, \ldots$ (note that the recurrence with $a_0 = 5$ forces $a_1 = 0$, so the separately prescribed value $a_1 = 3$ is not used here).
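The algebra above is easy to get wrong by hand, so here is a quick sanity check in Python (a sketch, assuming, as the answer does, that the recurrence is applied from $n = 1$ with $a_0 = 5$): it compares the series coefficients of $A(z) = (5 - 20z + 34z^2 - 9z^3)/(1-z)^4$ against values generated directly from the recurrence.

```python
from math import comb

def a_recurrence(count):
    # a_{n+1} = a_n + 5n^2 + 4n - 5, starting from a_0 = 5
    a = [5]
    for n in range(count - 1):
        a.append(a[-1] + 5 * n * n + 4 * n - 5)
    return a

def a_series(n):
    # [z^n] (5 - 20z + 34z^2 - 9z^3) / (1-z)^4, using
    # [z^k] (1-z)^{-4} = C(k+3, 3) from the generalized binomial theorem
    num = [5, -20, 34, -9]
    return sum(c * comb(n - k + 3, 3) for k, c in enumerate(num) if n >= k)

vals = a_recurrence(12)
assert all(a_series(n) == vals[n] for n in range(12))
print(vals[:5])  # [5, 0, 4, 27, 79]
```

Both routes agree, which is a useful check that no index slipped during the partial-fraction and binomial steps.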
\begin{document} \title{Homological mirror symmetry of Fermat polynomials} \author{So Okada\footnote{Supported by JSPS Grant-in-Aid \#21840030 and Kyoto University Global Center Of Excellence Program; Email: okada@kurims.kyoto-u.ac.jp; Address: Research Institute for Mathematical Sciences, Kyoto University, 606-8502 Kyoto Japan.}} \maketitle \begin{abstract} We discuss homological mirror symmetry of Fermat polynomials in terms of derived Morita equivalence between derived categories of coherent sheaves and Fukaya-Seidel categories (a.k.a. perfect derived categories \cite{Kon09} of directed Fukaya categories \cite{Sei08,Sei03,AurKatOrl08}), and some related aspects such as stability conditions, (kinds of) modular forms, and Hochschild homologies. \end{abstract} \section{Introduction} Homological mirror symmetry was introduced by Kontsevich \cite{Kon95} as the categorical equivalence of so-called A- and B- models of topological field theories. For a pair of manifolds $X$ and $Y$ possibly with group actions, homological mirror symmetry is an equivalence of triangulated categories of coherent sheaves of $X$ and Lagrangians of $Y$ with natural enhancements to differential graded (dg) categories (see \cite{Kel06}). We have seen such equivalences for elliptic curves \cite{PolZas}, the quartic surface case \cite{Sei03}, degenerating families of Calabi-Yau varieties and abelian varieties \cite{KonSoi01}. The framework has been extended to Fano varieties and singularities in \cite{AurKatOrl08, AurKatOrl06,Sei01}. See \cite{HKKPTVVZ} for a comprehensive source of references. Let $X_{n}$ be the function $F_{n}:\bC^{n}\to \bC$ for the Fermat polynomial $F_{n}:=x_{1}^{n}+\cdots+x_{n}^{n}$, and $Y_{n}$ be $F_{n}:\bC^{n}/G_{n}\to\bC$ with the group $G_{n}:=\{(\xi_{k})_{k=1\ldots n}\in \bC^{n} \mid \xi_{k}^{n}=1\}$ acting on $\bC^{n}$ as $(x_{k})_{k=1\ldots n}\in \bC^{n}\mapsto (\xi_{k}x_{k})_{k=1\ldots n}\in \bC^{n}$. 
For Morsifications of our functions, we define Fukaya-Seidel categories as perfect derived categories of $A_{\infty}$ categories (see \cite{Kel01}) of ordered vanishing cycles (Lagrangian spheres) \cite{Sei08,Sei03,AurKatOrl08}. For $X_{n}$, we define $\Db(\Coh X_{n})$ as the perfect derived category of coherent sheaves on its fiber over zero in the projective space of $\bC^{n}$ (so, it gives the bounded derived category of the coherent sheaves of the Fermat hypersurface in $\bP^{n-1}$). By the same logic, we define $\Db(\Coh Y_{n})$, which gives the bounded derived category of coherent sheaves on the Fermat hypersurface in $\bP^{n-1}$ as the stack with respect to the action of $H_{n}:=G_{n}/\mbox{diagonals}$. Between derived categories such as $\Db(\Coh Y_{n})$ and $\FS(X_{n})$, we prove derived Morita equivalences in the sense that we take compact generating objects on both sides and dg categories (derived endomorphisms) of the objects in some dg enhancements, and find them isomorphic. Homological mirror symmetry comes with the canonical mirror equivalence. We put $\FS(Y_{n})$, as a quotient of $\FS(X_{n})$ by the dual group $\hat{H}_{n}$ of $H_{n}$. In $\FS(X_{n})$, the $A_{\infty}$ category of ordered vanishing cycles for a Morsification of $X_{n}$ is $A_{\infty}$ isomorphic to a dg category. We take the dg category of a compact generating object consisting of these vanishing cycles such that $\hat{H}_{n}$ acts as an automorphism group on the category of the dg category. We put Fukaya-Seidel category $\FS(Y_{n})$ as the perfect derived category of the dg $\hat{H}_{n}$-orbit category \cite{Kel05} of the dg category. We take the dg category of a $H_{n}$-invariant compact generating object of $\Db(\Coh X_n)$ in some dg enhancement and find the derived Morita equivalence between $\Db(\Coh X_{n})$ and $\FS(Y_{n})$, comparing the dg $\hat{H}_{n}$-orbit category and the dg category. 
The following is a summary: \begin{align*} \begin{array}{ccc} \Db(\Coh Y_{n}) & \cong & \FS(X_{n})\\ \uparrow\mbox{equivariance by $H_{n}$} & & \downarrow\mbox{ quotient by $\hat{H}_{n}$}\\ \Db(\Coh X_{n}) &\cong & \FS(Y_{n}). \end{array} \end{align*} For recent discussion on some aspects of homological mirror symmetry, see \cite{KapKreSch, Kon09} for references. \section{Derived Morita equivalence}\label{sec:Mor} Let us say equivalence instead of derived Morita equivalence. Let $A_{n-1}$ be the Dynkin quiver of $A$ type with $n-1$ vertices and one-way arrows. The perfect derived category $\Db(\mo A_{n-1})$ is equivalent to the category of graded B-branes $\DGrB(x^{n})$ (see \cite{Orl,KajSaiTak}), which has the natural dg enhancement in terms of graded matrix factorizations. Hochschild cohomology of the path algebra of $A_{n-1}$ is trivial \cite{Hap}, so is that of their tensor products. For Auslander-Reiten transformation $\tau$ of $\DGrB(x^n)$ and the projective-simple module $E$ of $\mo A_{n-1}$, $\DGrB(x^n)$ is equivalent to the perfect derived category of the dg category $\cA_{n-1}:=\End^{*}(\oplus_{\mu=0\ldots n-1}\tau^{-\mu}(E))$, which is generated by degree-zero idempotents and, if any, degree-one nilpotents. For the function $x^{n}:\bC \to \bC$, the perfect derived category of $\cA_{n-1}$ is equivalent to $\FS(x^{n}):=\FS(x^{n}:\bC\to\bC)$ \cite[2B]{Sei01}. Ordered objects $E, \ldots, \tau^{2-n}(E)$ represent ordered vanishing cycles for a Morsification of the function. Let us mention that they may represent ones for the Morsification of $x^{n}+y^2:\bC^{2}\to\bC $ in \cite{ACa} (see \cite[Section 3]{Sei01}). The object $\tau^{1-n}(E)$ represents a connected sum of vanishing cycles. Up to $A_{\infty}$ isomorphisms, the tensor product of the graded commutative algebra $\cA_{n-1}^{\otimes n}$ admits no non-trivial $A_{\infty}$ structures. 
The K\"{u}nneth formula for vanishing cycles of Fukaya-Seidel categories on morphisms and compositions \cite[Proposition 6.3, Lemma 6.4]{AurKatOrl08} implies that the perfect derived category of $\cA_{n-1}^{\otimes n}$ is equivalent to a full subcategory of $\FS(X_n)$. By the classical theory of singularities, vanishing cycles represented as objects of $\cA_{n-1}^{\otimes n}$ constitute a generating object of $\FS(X_{n})$ (see \cite[Conjecture 1.3]{AurKatOrl08} and \cite{Oka,SebTho} for related studies). Let us explain an equivalence between $\Db(\Coh Y_{n})$ and $\Db(\mo A_{n-1})^{\otimes n}$. By \cite{Orl}, $\Db(\Coh X_n)$ is equivalent to the category of graded B-branes $\DGrB(F_n)$, which has the natural dg enhancement in terms of graded matrix factorizations and the shift functor $\tilde{\tau}$ such that $\tilde{\tau}^{n}\cong [2]$. Let $\Omega_{n-1}:=\oplus_{\mu=0\ldots n-1}\tilde{\tau}^{-\mu}(\cO_{X_{n}})$. With explicit descriptions given by graded matrix factorizations for each summand of $\Omega_{n-1}$ (see \cite[Section IV]{Asp07}), we have the following. The object $\Omega_{n-1}$ is isomorphic to the compact generating object $\oplus_{\mu=0\ldots n-1}\Omega_{\bP^{n-1}}^{n-1-\mu}(n-\mu)[-\mu]|_{X_{n}}$ of $\Db(\Coh X_{n})$. The object $\Omega_{n-1}$ is invariant under $H_{n}$ actions up to canonical isomorphisms $\hat{H}_{n}$, and so $H_{n}$-equivariant objects of $\Omega_{n-1}$ constitute a generating object of $\Db(\Coh Y_{n})$. For the dg category of $\Omega_{n-1}$ in the natural dg enhancement of $\DGrB(X_{n})$, the $H_{n}$-equivariant dg category of the dg category of $\Omega_{n-1}$ coincides with $\cA_{n-1}^{\otimes n}$. Let us explain an equivalence between $\Db(\Coh X_{n})$ and $\FS(Y_{n})$. Morphism spaces of the dg category of $\Omega_{n-1}$ are representations of $H_{n}$ and compositions of morphisms are multiplications of matrices. 
In the sense of \cite{CibMar}, by the action of $H_{n}$, morphism spaces of the dg category of $\Omega_{n-1}$ are $\hat{H}_{n}$-graded. For a finite group $G$ of dg functors of a dg category $\cA$ inducing an equivalence on $H^{0}(\cA)$, we have the dg orbit category \cite{Kel05} $\cB=\cA/G$ such that dg categories $\cA$ and $\cB$ have the same objects and for objects $X$ and $Y$ in $\cB$, $\cB (X,Y)=\colim_{g\in G}\oplus_{f\in G} \cA (f(X), g\circ f(Y))$. By applying \cite[Proposition 3.2]{CibMar}, the $\hat{H}_{n}$-orbit category of the category of $\cA_{n-1}^{\otimes n}$ is equivalent to the category of the dg category of $\Omega_{n-1}$. Differentials have not been altered by taking dg equivariance and orbits. The dg category of $\Omega_{n-1}$ in $\DGrB(X_{n})$ is equivalent to the dg $\hat{H}_{n}$-orbit category of $\cA_{n-1}^{\otimes n}$. Explicitly, we define $\FS(Y_{n})$ as the perfect derived category of the dg $\hat{H}_{n}$-orbit category. The perfect derived category of the dg category of $\Omega_{n-1}$ in $\DGrB(X_{n})$ is equivalent to $\Db(\Coh X_{n})$, and by the construction, $\FS(Y_{n})$ is equivalent to $\Db(\Coh X_{n})$. So, we have the following. \begin{thm}\label{thm:main} For $n>0$, $\Db(\Coh Y_n)$ is derived Morita equivalent to $\FS(X_n)$ and $\Db(\Coh X_n)$ is derived Morita equivalent to $\FS(Y_n)$. \end{thm} \section{Discussion} Let us start off with some aspects of the dg category of $\Omega_{n-1}$, which is generated by degree-zero idempotents representing summands and, if any, degree-one closed differential forms. The dg category of $\Omega_{n-1}$ together with its deformation theory would be related to the notion of Gepner model in physics (see \cite{Asp05,HerHorPag}). 
In \cite[Section 5.1]{HorWal}, explicit forms of deformations of summands of the dg category of $\Omega_{4}$ have been computed along the deformation flow which is compatible with the actions of $(\overline{\xi}_{1},\ldots, \overline{\xi}_{5})$ in $H_{5}$ subject to $\overline{\xi}_{1}\cdots \overline{\xi}_{5}=1$. Each summand of $\Omega_{n-1}$ by definition represents a Lagrangian among $\hat{H}_{n}$-isomorphic ones. In $\FS(x^{n})$, in the notation of the previous section, we observe that we count the intersection of the connected sum $\tau^{1-n}(E)$ with vanishing cycles $E$ and $\tau^{2-n}(E)$. We have non-trivial homomorphisms from $\tau^{1-n}(E)[1]$ to $E$ and from $\tau^{2-n}(E)$ to $\tau^{1-n}(E)[1]$ and trivial ones with $\tau^{-\mu}(E)$ for $0<\mu<n-2$. For $0 \leq \mu<\mu'\leq n-1$ and the Euler pairing $\chi$, $\chi(\tilde{\tau}^{-\mu}(\cO_{X_{n}})[\mu], \tilde{\tau}^{-\mu'}(\cO_{X_{n}})[\mu'])$ counts intersections for each Lagrangian represented by $\tilde{\tau}^{-\mu}(\cO_{X_{n}})$ with Lagrangians represented by $\tilde{\tau}^{-\mu'}(\cO_{X_{n}})$. \subsection{Two types of stability conditions} Let us discuss the notion of stability conditions on triangulated categories \cite{BRI,GorKulRud,KonSoi08}. From one viewpoint, a stability condition on a triangulated category is a collection of objects, called stable, carrying a partial order with indices called slopes, such that for consecutive stable objects $E \geq F$ with a non-trivial homomorphism from $E$ to $F[1]$ we have consecutive stable objects $F[1]\geq E$. For reasonable such collections, we have taken extension-closed full subcategories consisting of some consecutive stable objects to study wall-crossings, moduli problems, Donaldson-Thomas type invariants, and so on. For the quintic, a few types of stability conditions have been studied in mathematics and physics. The first type of stability conditions consists of variants of Gieseker-Maruyama stabilities such as \cite{Bay,Ina,Tod}. 
Slopes of stable objects are given by appropriate approximations of Hilbert polynomials. These stability conditions are compatible with the tensoring by the ample line bundle, which is called the monodromy around the large radius limit (see \cite[Section F]{Asp07}). The second type of stability conditions has been studied in terms of Douglas' $\Pi$-stabilities for Gepner point \cite{AspDou,Dou02, Dou01}. From our viewpoint, the simplest stability condition compatible with $\tilde{\tau}$, which is called the monodromy around Gepner point (see \cite[Section F]{Asp07}), and with complexified numerical classes of vanishing cycles, is as follows. For $\mu\in \bZ$, stable objects consist of objects $\tilde{\tau}^{-\mu}(\cO_{X_{5}})$ and the order of them is given by the clockwise order of $\exp(-\mu\frac{2\pi i}{5})$. The extension-closed full subcategory of the consecutive stable objects $\cO_{X_{5}}>\tilde{\tau}^{-1}(\cO_{X_{5}})$ of the stability condition of the second type gives the graded generalized Kronecker quiver with degree-one arrows from $\cO_{X_{5}}$ to the other, together with the same number of degree-two arrows in the opposite way and a degree-three loop on each vertex. For some vanishing cycle represented by $\cO_{X_{5}}$, the number of degree-one arrows represents the number of intersections of the vanishing cycle with vanishing cycles represented by the other object and coincides with the Donaldson-Thomas invariant of the moduli space of the numerical class which is the sum of the ones of the two stable objects. That of the three consecutive stable objects $\cO_{X_{5}}>\tilde{\tau}^{-1}(\cO_{X_{5}})>\tilde{\tau}^{-2}(\cO_{X_{5}})$ gives two copies of the quiver glued, together with degree-two arrows from $\cO_{X_{5}}$ to $\tilde{\tau}^{-2}(\cO_{X_{5}})$ and the same number of degree-one arrows in the opposite way and a degree-three loop on each vertex. 
The objects $\cO_{X_{5}}$, $\tilde{\tau}^{-1}(\cO_{X_{5}})$, and $\tilde{\tau}^{-2}(\cO_{X_{5}})$ make cluster collections \cite{KonSoi09, KonSoi08}. The stability condition of the second type is equipped with the consecutive stable objects $\tilde{\tau}^{-1}(\cO_{X_{5}})[1]>\cO_{X_{5}}$. The extension-closed full subcategory of the pair gives the graded generalized Kronecker quiver with degree-zero arrows in one way, together with the same number of degree-three arrows in the opposite way and a degree-three loop on each vertex, which coincides with that of the consecutive stable objects $\cO_{X_{5}}(1)>\cO_{X_{5}}$ of the simplest stability condition of the first type on the sequence $\cO_{X_{5}}(\mu)$ for $\mu \in \bZ$. \subsection{(kinds of) Modular forms} For a stability condition of either type, objects in the extension-closed full subcategory consisting of stable objects of a slope are said to be semistable. Let us count semistable objects in the extension-closed full subcategory consisting of a stable object for the stability condition of the second type. The same counting logic applies when we have a single stable spherical object of some slope. As in \cite{MelOka}, we are interested in (kinds of) modular forms and Donaldson-Thomas type invariants. For the motive of the affine line $\bL$, we represent the formal symbol $\bL^{\frac{1}{2}}$ by $-q^{\frac{1}{2}}$, which conjecturally corresponds to so-called graviphoton background in the supergravity theory \cite{DimGuk,DimGukSoi}. We obtain $\cJ_{m}(q):= \frac{1}{m}\frac{q^{\frac{m}{2}}}{(1-q^{m})}$ for $m>0$ as the coefficient of $x^m$ in the formal logarithm of the quantum dilogarithm $\sum_{m\geq 0}\frac{(-q^{\frac{1}{2}})^{m^2}}{(q^{m}-1)\cdots (q^{m}-q^{m-1})}x^m$ \cite{KonSoi09,KonSoi08}. 
The generating function $\sum_{m>0}m^{1-k}\cJ_{m}(q)= \sum_{m,r>0} m^{-k}q^{\frac{mr}{2}}-m^{-k}q^{mr}$ consists of Eisenstein series for negative odd $k$, or Eichler integrals \cite{LawZag} of Eisenstein series for positive odd $k$. In particular, for $k=-1$, $q=\exp(2\pi i z)$, and Eisenstein series $E_{2}(z)$, we obtain the quasimodular form $E_{2}(z)-E_{2}(2z)$ \cite{KanZag}. \subsection{Hochschild homologies} In $\bP^{n-1}$, $Y_{n}$ is the same as $\bP^{n-2}$ given by $x_{1}+\cdots+x_{n}=0$ with the orbifold structures of degree $n$ along coordinate hypersurfaces. By computing the Poincar\'{e} polynomial of $Y_{n}$, which can be partitioned according to degrees of orbifold structures, we obtain $P(Y_{n},q)= \sum_{2 \leq j \leq n, 2 \leq k \leq j} n^{n-j} \binom{n}{j} \ (-1)^{j-k} P(\bP^{k-2},q)$. For example, we have $1$, $q+7$, $q^2+13 q+67$, and $q^3+21 q^2+181q+821$. In particular, by computing Euler characteristics in terms of Hochschild homologies (see \cite{Kel98, HocKosRos}), we obtain $P(Y_n,1)=(n-1)^{n}$. \section{Acknowledgments} The author thanks Australian National University, Institut des Hautes \'Etudes Scientifiques, and Research Institute for Mathematical Sciences for providing him postdoctoral support. He thanks Professors Bridgeland, Fukaya, Hori, Katzarkov, Keller, Kimura, Kontsevich, Lazaroiu, Mizuno, Moriyama, Nakajima, Neeman, Polishchuk, Seki, A. Takahashi, Terasoma, Usnich, Yagi, and Zagier for their useful discussions. In particular, he thanks Professor Kontsevich for his generous and inspiring discussions.
Japanese beauty brands accelerate efforts in Asia amid growing popularity Japanese publication Nihon Keizai Shimbun reported that Shiseido has plans to increase sales of its eponymous brand by 50% to ¥200bn ($1.77bn) by 2020. Meanwhile, the Nikkei Asian Review reported that its rival, Kao, already has plans in place to challenge Shiseido’s market share in China with its premium skin care offerings. Speaking at the TFWA (Tax Free World Association) World Exhibition and Conference, Philippe Lesné, Shiseido Travel Retail President and CEO, suggested that Japanese beauty has yet to reach its peak potential. “Despite its long heritage and traditions, Japanese Beauty is really only now starting to blossom on the global stage,” said Lesné. “Built on foundations of a very strong domestic beauty market in Japan and a lot of attention on Japan as a destination and cultural reference, the Japanese Beauty segment represents a huge opportunity…” Stepping up on marketing efforts In order to achieve its ambitious target, Nihon Keizai Shimbun said that Shiseido will increase its marketing efforts in Asia. Just last week, Shiseido announced that it was creating a “ground-breaking” art installation at Jewel Changi Airport, an upcoming retail development and tourist destination in Singapore. A preview of the interactive ‘S E N S E’ art installation was unveiled to attendees during the TFWA (Tax Free World Association) World Exhibition and Conference. According to a statement by the company, the art installation is meant to “invite the visitors to discover the invisible, spiritual and meaningful Japanese sense of beauty through art.” “This partnership links to our strategic approach to create more meaningful and engaging experiences for our customers, beyond the realms of traditional retail,” said Lesné. 
“As a brand, Shiseido is perfectly captured within the Forest Valley experience.” Before that, the company named Chinese actress Zhang Ziyi as the latest global brand ambassador for Clé de Peau Beauté in September to further establish the luxury brand in China. Last year, Shiseido exceeded its 2020 sales target of ¥1 trillion ($8.8bn), three years ahead of schedule. According to the 2017 annual report, China alone generated ¥144.3bn ($1.3bn) of sales for the company, making it Shiseido’s largest overseas market, and second only to Japan itself. Shiseido’s prestige brands made up the lion’s share of the sales at 42%. The Chinese market saw 20.1% growth year-on-year, while Asia Pacific grew 11.2%. Kao’s high-end plans In May 2018, Kao created a New Global Portfolio as part of its new strategy to grow its cosmetics business, where performance has been lacklustre. Sales of cosmetics declined 4.8% year-on-year while sales of skin care and hair care fell by 1%. Together, the company posted net sales of ¥586bn ($5.2bn) for its beauty division. In a statement, Kao shared that it planned to rejuvenate its beauty offerings by placing emphasis on its prestige portfolio, which includes brands such as Sensai, Kanebo, RMK, Suqqu and SOFINA. Last month, Kao launched the Sofina iP range of skin care in China after the line proved to be successful in Taiwan, Hong Kong and Singapore. A representative for Kao told Cosmetics Design Asia that Sofina iP has been doing well since it launched in China and has even noticed a rise in new customers. Currently, Kao has plans to bring two of its biggest luxury brands to China: Sensai will be launched in 2020, while Kanebo is slated to arrive after 2022. However, the rep notes that the latter is “not fixed yet”. Like Kao, South Korean beauty conglomerate Amorepacific turned its efforts to pushing out its luxury products after posting a shocking loss in 2017. 
For 2018, the company positioned itself as a global luxury brand, and moved to strengthen Sulwhasoo, its premium skin care line, as a leading luxury brand. This strategy helped Amorepacific rebound from its slump in the second quarter this year.
Scanning music books

General discussion about software packages and releases, new software you've found, and threads by programmers and script writers.

Dear all,

What is my goal? I have a huge library of music scores. The scores printed on A4 are already scanned through a Brother ADS-2100. This works very well.

The books are another problem. I have an iPhone and iPad Pro. This is my plan:

Scan a book - format max. A3 - with phone or pad (I'll build a construction for scanning.)
The software splits the pages vertically AND sorts them in the right order.
Option 1: fingers left and right are deleted.
Option 2: curving of the pages is corrected.
OCR is not needed.

Is there any software solution? What is your experience with Bookscanner & Office Lens (both Mac App Store)? I found the perfect application: Booksorber, but this doesn't seem to respond any more for several years and it refuses to run on Mac (despite the needed installation of Java).

Thanks for your useful advice,
Freder

Re: Scanning music books

The software solution for splitting pages and cleaning shadows and so on is Scan Tailor. The software solution for musical OCR is capella Scan -- see

Re: Scanning music books

Thanks for your help, but I have a Mac platform. Still searching...
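For the splitting-and-ordering step the original poster describes, the core logic is simple enough to sketch. Below is a minimal, hypothetical Python sketch (no real scanner or image library involved: a scanned spread is modeled as a plain 2D list of grayscale pixel rows) that splits each spread vertically and numbers the halves in reading order. Finger removal and de-curving (options 1 and 2) are deliberately out of scope here.

```python
def split_spread(spread):
    """Split one scanned spread (list of pixel rows) into left/right halves."""
    mid = len(spread[0]) // 2
    left = [row[:mid] for row in spread]
    right = [row[mid:] for row in spread]
    return left, right

def pages_in_reading_order(spreads, first_page_number=1):
    """Split a sequence of spreads and return (page_number, image) pairs.

    Assumes each spread shows two consecutive pages, left page first,
    as when photographing an open book cover to cover.
    """
    pages = []
    number = first_page_number
    for spread in spreads:
        for half in split_spread(spread):
            pages.append((number, half))
            number += 1
    return pages

# Tiny demo: two 4x6 "scans" filled with constant gray values.
spreads = [[[10] * 3 + [20] * 3 for _ in range(4)],
           [[30] * 3 + [40] * 3 for _ in range(4)]]
ordered = pages_in_reading_order(spreads)
print([n for n, _ in ordered])  # [1, 2, 3, 4]
```

In practice one would load each photo with an imaging library, apply the same crop, and save the halves with zero-padded filenames so they sort correctly.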
TITLE: Explain this generating function QUESTION [2 upvotes]: I have a task: explain the equation: $$\prod_{n=1}^{\infty}(1+x^nz) = 1 + \sum_{n=m=1}^{\infty}\lambda(n,m)x^nz^m $$ where $\lambda(n,m)$ is the number of partitions of $n$ into $m$ distinct numbers ($>0$). It's hard for me. The only thing that I can explain is: $\prod_{n=1}^{\infty}(1+x^nz) = (1+xz)(1+x^2z)(1+x^3z)... $ It has to be some generating function. For example for the strings: $(1, z, 0,0,0,0,...)$ $(1,0,z,0,0,0,...) $ $(1,0,0,z,0,0,....)$ $$...$$ So far I have used only generating functions in one variable $x$. Here I have two variables: $x$ and $z$. I don't understand this. This task is very difficult for me. Could somebody help me? REPLY [0 votes]: Multiplying formal power series in multiple indeterminates is not really much more complicated than doing so for univariate ones. The main difference is that each term has not a single degree, but separate degrees for each of the $k$ indeterminates, so the coefficient is attached to a point of $\def\N{\Bbb N}\N^k$ rather than of$~\N$. With an infinite product like this one, the first important point is that in the expansion, only those products of terms of the individual factors contribute that are well defined (have finite degrees), which amounts to saying one is allowed to choose a term distinct from $1$ only finitely many times. So here one can choose for finitely many different values of $n$ the term $x^nz$, and the term $1$ for all other values of $n$. The product of these terms is $x^sz^m$, where $s$ is the sum of the values $n$ selected, and $m$ is their number. Each such term is contributed with coefficient$~1$, so it suffices to count the number of times a given term $x^sz^m$ is contributed. This is precisely the number of $m$-element subsets of $\N_{>0}$ with sum $s$. 
Ordering the elements of such a subset in decreasing order, one obtains a partition of $s$ into $m$ distinct nonzero parts (also called a strict partition of $s$ with $m$ parts), which is apparently the interpretation you are after. Since parts are nonzero, $s$ can only be zero if $m$ is also, and the converse is obvious, so if one likes $\sum_{s,m\in\N}\lambda(s,m)x^sz^m$ can be rewritten as $1+\sum_{s,m\in\N_{>0}}\lambda(s,m)x^sz^m$ (it doesn't buy much though).
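The counting argument above is easy to verify by machine. A small Python sketch (my own illustration, not part of the original answer): expand the product with bivariate coefficients, truncating at $x$-degree $N$. The truncation is exact there, because factors with $n>N$ cannot contribute to terms of $x$-degree at most $N$.

```python
from itertools import combinations

N = 12  # truncate at x-degree N; coefficients with s <= N are exact

# coeff[(s, m)] = coefficient of x^s z^m in prod_{n>=1} (1 + x^n z)
coeff = {(0, 0): 1}
for n in range(1, N + 1):
    updated = dict(coeff)
    for (s, m), c in coeff.items():
        if s + n <= N:
            key = (s + n, m + 1)
            updated[key] = updated.get(key, 0) + c
    coeff = updated

def strict_partitions(s, m):
    """Number of partitions of s into m distinct positive parts."""
    return sum(1 for p in combinations(range(1, s + 1), m) if sum(p) == s)

# lambda(s, m) read off the product matches the direct count
for s in range(1, N + 1):
    for m in range(1, s + 1):
        assert coeff.get((s, m), 0) == strict_partitions(s, m)
```

For instance, the coefficient of $x^6z^2$ comes out as $2$, matching the two subsets $\{1,5\}$ and $\{2,4\}$.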
Early bird tickets are now on sale for the 11th annual Cal Poly Pomona Tasting & Auction on Sunday, May 6. The event presents delicious food from dozens of popular Southern California restaurants, great wine and craft beer, entertainment, and live and silent auctions. At last year’s sold-out event, guests sampled fabulous food and beverages from approximately 55 partners, next to Cal Poly Pomona’s beautiful Rose Garden and Aratani Japanese Garden. Tasty treats included balsamic lamb chops, ahi poke tarts, beer ice cream, Louisiana crab cakes, pulled pork sliders, gourmet cupcakes, dulce de leche cheesecake pops and more. Beverage providers included Last Name Brewery, Deep Eddy Vodka, Innovation Brew Works and award-winning Horse Hill Vineyards. Regular Tasting partners have included Ralph Brennan’s Jazz Kitchen, Brio Tuscan Grille, RA Sushi, and the historic Galleano Winery. Early bird tickets are now available for $75 per person. Proceeds benefit Cal Poly Pomona and its 25,000 students. Guests must be 21 years of age and older. For more information or tickets, visit tasting.cpp.edu. For questions, call 909-869-4706.
Appendix C  Acronyms and Abbreviations
PROPOSED CRITERIA FOR SELECTING THE WIC FOOD PACKAGES

AAP  American Academy of Pediatrics
ADA  American Dietetic Association
AI  Adequate Intake
AMDR  Acceptable Macronutrient Distribution Ranges
ARS  Agricultural Research Service, United States Department of Agriculture
ASCN  American Society for Clinical Nutrition
ATSDR  Agency for Toxic Substance and Disease Registry
BARC  Beltsville Agricultural Research Center, United States Department of Agriculture
BLS  Bureau of Labor Statistics
BMI  body mass index
C-SIDE  C compiler version of SIDE (the C programming language is the source code for C-SIDE)
CDC  Centers for Disease Control and Prevention
CDD  Chlorinated Dibenzo-p-Dioxin
CFR  Code of Federal Regulations
CFSAN  Center for Food Safety and Applied Nutrition
CNPP  Center for Nutrition Policy and Promotion, United States Department of Agriculture
CSFII  Continuing Survey of Food Intakes by Individuals
d  day
DHEW  Department of Health, Education and Welfare
DHHS  Department of Health and Human Services
DLC  dioxin-like compounds
DQI  Dietary Quality Index
DQI-R  Dietary Quality Index Revised
DRI  Dietary Reference Intakes
EAR  Estimated Average Requirement
EBT  electronic benefit transfer
EER  Estimated Energy Requirement
EPA  Environmental Protection Agency
ERS  Economic Research Service, United States Department of Agriculture
FASEB  Federation of American Societies of Experimental Biology
FDA  Food and Drug Administration
FITS  Feeding Infants and Toddlers Study
FNB  Food and Nutrition Board, Institute of Medicine
FNS  Food and Nutrition Service, United States Department of Agriculture
FSIS  Food Safety and Inspection Service
FSRG  Food Surveys Research Group, United States Department of Agriculture
FY  fiscal year
g  gram
GAO  General Accounting Office
HEI  Healthy Eating Index
IOM  Institute of Medicine, The National Academies
ISU  Iowa State University
kcal  kilocalorie
kg  kilogram
L  liter
LSRO  Life Sciences Research Office
mcg  microgram
mg  milligram
mos  months
n  number
N/A  not applicable from available data
NAS  National Academy of Sciences, The National Academies
NAWD  National Association of WIC Directors
ND  not determined
NDL  Nutrient Data Laboratory, United States Department of Agriculture
NFCS  Nationwide Food Consumption Survey
NHANES  National Health and Nutrition Examination Survey
NIH  National Institutes of Health, Department of Health and Human Services
NRC  National Research Council, The National Academies
NWA  National WIC Association
oz  ounce
PA  physical activity
PAL  physical activity level
PHS  Public Health Service
P/L  pregnant or lactating
RAE  retinol activity equivalent
RDA  Recommended Dietary Allowances
SD  standard deviation
SIDE  Software for Intake Distribution Estimation
SKU  stock-keeping unit
tsp  teaspoon
UL  Tolerable Upper Intake Level
USC  U.S. Code
USDA  U.S. Department of Agriculture
VRG  Vegetarian Resource Group
WHO  World Health Organization
WIC  Special Supplemental Nutrition Program for Women, Infants, and Children
y  year
\begin{document} \vskip -.5in \title{Inverting the Rational Sweep Map} \author{Adriano M. Garsia$^1$ and Guoce Xin$^2$} \address{ $^1$Department of Mathematics, UCSD \\ $^2$School of Mathematical Sciences, Capital Normal University, Beijing 100048, PR China} \email{$^1$\texttt{garsiaadriano@gmail.com}\ \ \&\small $^2$\texttt{guoce.xin@gmail.com}} \date{March 20, 2016} \begin{abstract} We present a simple algorithm for inverting the sweep map on rational $(m,n)$-Dyck paths for a co-prime pair $(m,n)$ of positive integers. This work is inspired by the Thomas--Williams work on the modular sweep map. A simple proof of the validity of our algorithm is included. \end{abstract} \maketitle \section{The Algorithm} Inspired by the Thomas--Williams algorithm \cite{Nathan} for inverting the general modular sweep map, we find a simple algorithm to invert the sweep map for rational Dyck paths. The fundamental fact that made it so difficult to invert the sweep map in this case is that all previous attempts used only the ranks of the vertices of the rational Dyck paths. Moreover, the geometry of rational Dyck paths was not consistent with those ranks. A single picture will be sufficient here to understand the idea. In what follows, we always denote by $(m,n)$ a co-prime pair of positive integers, and write South end (letter $S$) for the starting point of a North step and West end (letter $W$) for the starting point of an East step, unless specified otherwise. This is convenient and causes no confusion because we usually talk about the starting points of these steps. \begin{figure}[!ht] \centering{ \mbox{\dessin{width=3.5 in}{PAIR.pdf}} $$\bar D\hspace{2cm} \rightarrow \hspace{2cm} D$$ \caption{A rational $(7,5)$-Dyck path and its sweep map image.} \label{fig:SweepMap} } \end{figure} Figure \ref{fig:SweepMap} illustrates a rational $(m,n)$-Dyck path $\overline D$ for $(m,n)=(7,5)$ and its sweep map image $D$ on its right.
Recall that the ranks of the starting vertices of an $(m,n)$-Dyck path $\overline D$ are recursively computed starting with rank $0$, adding $m$ after a North step and subtracting $n$ after an East step, as shown in Figure \ref{fig:SweepMap}. To obtain the sweep image $D$ of $\overline D$, we let the main diagonal (with slope $n/m$) sweep from right to left and successively draw the steps of $D$ as follows: i) draw a South end (and hence a North step) when we sweep a South end of $\overline D $; ii) draw a West end (hence an East step) when we sweep a West end of $\overline D$. The steps of $D$ can also be obtained by rearranging the steps of $\overline D $ by increasing ranks of their starting vertices. The sweep map has become an active subject over the past 15 years. Variations and extensions have been found, and some classical bijections turn out to be disguised versions of the sweep map. See \cite{sweepmap} for detailed information and references. The open problem was the reconstruction of the path on the left from the path on the right. The idea that leads to the solution of this problem is to draw these two paths as in Figure \ref{fig:SweepMapNew}. \begin{figure}[!ht] \begin{center} \mbox{\dessin{width=4 in}{BIGPAIR.pdf}} \caption{Transformation of the $(7,5)$-Dyck paths in Figure \ref{fig:SweepMap}.} \label{fig:SweepMapNew} \end{center} \end{figure} That is, we first stretch all the arrows so that their lengths correspond to the effect they have on the ranks of the vertices of the path, then add an appropriate clockwise rotation to obtain the two path diagrams in Figure \ref{fig:SweepMapNew}. The path diagrams are completed by writing an $S$ for each South end in our original path and a $W$ for each West end. On the left we have added a list of each level. The ranks of $\overline D$ visually become the levels of the starting points of the arrows.
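The rank computation and the sweep map just recalled can be sketched in a few lines. The following Python fragment is our own illustration (not from the paper); it encodes a path as a string over $\{S,W\}$, with $S$ for a North step and $W$ for an East step:

```python
def ranks(word, m, n):
    """Starting ranks of each step: begin at 0, add m after each
    North step ('S') and subtract n after each East step ('W')."""
    out, r = [], 0
    for step in word:
        out.append(r)
        r += m if step == 'S' else -n
    return out

def sweep(word, m, n):
    """Sweep map image: rearrange the steps by increasing starting rank.
    For a coprime pair (m, n) all ranks are distinct, so the order is unique."""
    return ''.join(step for _, step in sorted(zip(ranks(word, m, n), word)))
```

For the coprime pair $(3,2)$, for example, the path $SSWWW$ has rank sequence $0,3,6,4,2$ and sweep image $SWSWW$.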
On the right, at each level we count the red (solid) segments and the blue (dashed)\footnote{Suggested by the referee, we have drawn blue dashed arrows for convenience of black-white print. We will only use ``red" and ``blue" in our transformed Dyck paths, but in our context, red, solid, up and positive slope are equivalent; blue, dashed, down and negative slope are equivalent.} segments which traverse that level and record their difference. Of course these differences (called row counts) turn out to be all equal to $0$, for obvious reasons. This will be referred to as the $0$-row-count property. Theorem \ref{t-characterization} states that this is a characteristic property of rational Dyck paths, which becomes evident when paths are drawn in this manner. This fact is conducive to the discovery of our algorithm for constructing the pre-image of any $(m,n)$-Dyck path. \begin{figure}[!ht] \centering{ \mbox{\dessin{width=3.8 in}{BEAUTY.pdf}} \caption{A given rational Dyck path and its starting path diagram on the right.} \label{fig:InitialRank} } \end{figure} The first step in our algorithm is to reorder the arrows of the path on the left of Figure 3, so that the ranks of their starting points are minimally strictly increasing. More precisely, the first three red arrows are lowered in their columns to start at levels $0,1,2$. To avoid placing part of the first blue arrow below level $0$ we lower it to start at level $5$. This done, all the remaining arrows are successively placed to start at levels $6,7,8,9,10,11,12,13$. Notice the row counts at the right of the resulting path diagram. Our aim is to progressively reduce them all to zeros, which is the row-count characterization of the path diagram we are working to reconstruct.
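This reordering step is simple to state in code. A sketch (our own illustration; it assumes the $(S,W)$-word of the path in Figure 3 is the one given later with Figure \ref{fig:InitialRankWeak}, which reproduces the levels quoted above):

```python
def initial_ranks(word, n):
    """Minimal strictly increasing starting ranks: a red arrow ('S') may
    start at level 0, while a blue arrow ('W') must start at level >= n
    so that it does not reach below level 0."""
    R = []
    for step in word:
        lo = 0 if step == 'S' else n
        R.append(lo if not R else max(lo, R[-1] + 1))
    return R

# the (7,5) example: levels 0, 1, 2 then 5, 6, ..., 13 as in the text
print(initial_ranks('SSSWWWWSSWWW', 5))
# [0, 1, 2, 5, 6, 7, 8, 9, 10, 11, 12, 13]
```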
\begin{figure}[!ht] \centering{ \mbox{\dessin{height=5 in}{BOOM1.pdf}}\\[2ex] \mbox{\includegraphics[height=2.43 in]{BOOM2.pdf}} \caption{Part 1 of the 18 path diagrams that our algorithm produced.} \label{fig:AlgorithmResult1} } \end{figure} \begin{figure}[!ht] \centering{ \mbox{\includegraphics[height=5 in]{BOOM3.pdf}} \caption{Part 2 of the 18 path diagrams that our algorithm produced.} \label{fig:AlgorithmResult2} } \end{figure} The miracle is that this can be achieved by a sequence of identical steps. More precisely, at each step of our algorithm we locate the lowest row sum that is greater than $0$. We next notice that there is a unique arrow that starts immediately below that row sum. This done, we move that arrow one unit upwards. However, to keep the ranks strictly increasing we also shift, when necessary, some of the successive arrows by one unit upwards. In this particular case our MATHEMATICA implementation of the resulting algorithm produced the sequence of 18 path diagrams in Figures \ref{fig:AlgorithmResult1} and \ref{fig:AlgorithmResult2}. Notice, the green (thick) line has been added in each path diagram to make evident the height of the lowest positive row count. Of course each step ends with an updating of the row counts. The final path diagram yields a path that is easily shown to be the desired pre-image. To obtain this path, we simply start with the leftmost red arrow, and at each step we proceed along the arrow that starts at the rank reached by the previous arrow. Continue until all the arrows have been used. The reason why there always is a unique arrow that starts at each reached rank is an immediate consequence of the $0$-row-count property of the final path diagram. The increasing property of the ranks of the starting points of our successive arrows is now seen to imply that the initial path in Figure \ref{fig:InitialRank} is the sweep map image of this final path.
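The entire inversion procedure just described fits in a few lines of code. The sketch below is our own illustration (not the MATHEMATICA implementation mentioned above); paths are strings over $\{S,W\}$, and termination of the loop is exactly the validity property established in the later sections:

```python
from collections import defaultdict

def row_counts(word, R, m, n):
    """c(j): red minus blue segments in row j; a red arrow ('S') starting
    at rank r covers rows r..r+m-1, a blue arrow ('W') rows r-n..r-1."""
    c = defaultdict(int)
    for step, r in zip(word, R):
        if step == 'S':
            for j in range(r, r + m):
                c[j] += 1
        else:
            for j in range(r - n, r):
                c[j] -= 1
    return c

def strong_find_rank(word, m, n):
    """StrongFindRank sketch: start from the minimal strictly increasing
    ranks, then repeatedly lift the unique arrow starting at the bottom of
    the lowest row with positive count, shifting later arrows up as needed
    to keep the ranks strictly increasing."""
    R = []
    for step in word:
        lo = 0 if step == 'S' else n    # blue arrows must not dip below 0
        R.append(lo if not R else max(lo, R[-1] + 1))
    while True:
        c = row_counts(word, R, m, n)
        positive = [j for j, v in c.items() if v > 0]
        if not positive:
            return R                     # balanced: done
        i = R.index(min(positive))       # unique arrow starting there
        R[i] += 1
        for k in range(i + 1, len(R)):   # cascade to restore strictness
            if R[k] <= R[k - 1]:
                R[k] = R[k - 1] + 1
            else:
                break

def preimage(word, R, m, n):
    """Read the pre-image off the balanced diagram: from level 0, always
    follow the leftmost unused arrow starting at the current level."""
    used, level, out = [False] * len(word), 0, []
    for _ in word:
        i = next(i for i, (u, r) in enumerate(zip(used, R))
                 if not u and r == level)
        used[i] = True
        out.append(word[i])
        level += m if word[i] == 'S' else -n
    return ''.join(out)

# (3,2) toy example: the sweep image 'SWSWW' inverts to 'SSWWW'
R = strong_find_rank('SWSWW', 3, 2)
assert R == [0, 2, 3, 4, 6]
assert preimage('SWSWW', R, 3, 2) == 'SSWWW'
```

On the $(7,5)$ example the same loop reproduces a balanced diagram with strictly increasing ranks, from which the pre-image is read off arrow by arrow.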
This manner of drawing rational Dyck paths makes many needed properties more evident than the traditional manner and therefore also easier to prove. As a case in point, we give a simple visual way of establishing the following nontrivial result (see, e.g., \cite{sweepmap}). \begin{lem}\label{l-sweep-image-Dyck} The sweep image of an $(m,n)$-Dyck path is an $(m,n)$-Dyck path. \end{lem} \begin{proof} \begin{figure}[!ht] \centering{ \mbox{\includegraphics[height=2.8 in]{TriplePic.pdf}} \caption{A $(7,5)$-Dyck path on the left; horizontal shifts of the arrows give the middle path diagram, whose starting ranks are increasing; vertical shifts of the arrows then yield a $(7,5)$-Dyck path.} \label{fig:SweepMapDecompose} } \end{figure} On the left of Figure \ref{fig:SweepMapDecompose} we have the final path yielded by our algorithm. To obtain the path diagram in the middle we simply rearrange the arrows (by horizontal shifts) so that their starting ranks are increasing. The path on the right is obtained by vertically shifting the successive arrows so that they concatenate to a path. To prove that the resulting path is a $(7,5)$-Dyck path, it suffices, by the co-primality of the pair $(7,5)$, to show that the successive partial sums of the number of segments of these arrows are all non-negative. This is a consequence of the $0$-row-count property. In fact, for example, let us prove that the sum of the segments to the left of the vertical green line $v$ is non-negative. To this end, let $A$ be the arrow that starts on $v$ and $\ell$ be its starting rank. Let $h$ be the horizontal green (thick) line at level $\ell$. Denote by $L$ the region below $h$, and let $L_1$, $L_2$ be the left and right portions of $L$ split by $v$. Let us also denote by $|rL_1|$, $|rL_2|$ the red arrow segment counts in the corresponding regions and by $|bL_1|$, $|bL_2|$ the corresponding blue segment counts.
This given, since each red segment contributes a $1$ and each blue segment contributes a $-1$ to the final count, it follows that $$ i)\,\,\,\, |rL_1|+|rL_2| = |bL_1|+|bL_2|, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, ii)\,\,\,\, |rL_2|= 0. $$ In fact, $i)$ is due to the $0$-row-count property and $ii)$ is simply due to the fact that all red arrows to the right of $v$ must start above $h$. Thus we must have $$ |rL_1| - |bL_1| =|bL_2| \ge 0. $$ This implies that the sum of the arrows to the left of $v$ must be $\ge 0$. \end{proof} A proof of the validity of our algorithm may be derived from the Thomas--Williams result by letting their modulus tend to infinity. However, our algorithm deserves a more direct and simpler proof. Such a proof will be given in the following pages. This proof will be based on the validity of a simpler but less efficient algorithm. To distinguish the above algorithm from our later one, we will call them respectively the \texttt{StrongFindRank} and the \texttt{WeakFindRank} algorithms, or ``strong" and ``weak" algorithms for short. The rest of the paper is organized as follows. Section 2 is devoted to the proof of the \texttt{WeakFindRank} algorithm. It also includes all the necessary concepts and concludes with Theorem \ref{t-important}, which asserts the invertibility of the rational sweep map. Theorem \ref{t-tight} is the main result of Section 3. It allows us to analyze the complexity of both the ``strong" and the ``weak" algorithms. It is also used in Section 4, where we show the validity of the ``strong" algorithm. Finally, we discuss the difference between the Thomas--Williams algorithm and our algorithm in Section 5. We also talk about some future plans. \section{The Proof} \subsection{Balanced path diagrams} A path diagram $T$ consists of an ordered set of $n$ red arrows and $m$ blue arrows, placed on an $(m+n)\times N$ lattice rectangle, where $N$ is a large positive integer to be specified. See Figure \ref{fig:InitialRankWeak}.
\addpic[.30\textwidth]{SAMPLEPIC.pdf}{ A path diagram for $m=5,n=7$ and $N=21$. \label{fig:InitialRankWeak}} A red arrow is the up vector $(1,m)$ and a blue arrow is the down vector $(1,-n)$. The rows of lattice cells will be simply referred to as \emph{rows} and the horizontal lattice lines will be simply referred to as \emph{lines}. On the left of each line we have placed its $y$ coordinate which we will simply refer to as its \emph{level}. The level of the starting point of an arrow is called its \emph{starting rank}, and similarly its \emph{end rank} is the level of its end point. It will be convenient to call \emph{row $i$} the row of lattice cells delimited by the lines at levels $i$ and $i+1$. Given a word $\Sigma$ with $n$ letters $S$ and $m$ letters $W$, and a sequence of $n+m$ nonnegative ranks $R=(r_1,\dots, r_{m+n})$, the path diagram $T(\Sigma,R)$ is obtained by placing the letters of $\Sigma$ at the bottom of the lattice columns and drawing in the $i^{th}$ column an arrow with starting rank $r_i$ and $red$ (solid) if the $i^{th}$ letter of $\Sigma$ is $\Sigma_i=S$ or $blue$ (dashed) if $\Sigma_i=W$. See Figure \ref{fig:InitialRankWeak}, where $\Sigma=SSSWWWWSSWWW$ and $R=(0,1,2,5,6,7,8,9,10,11,12,13)$. Notice that each lattice cell may contain a segment of a red arrow or a segment of a blue arrow or no segment at all. The red segment count of row $j$ will be denoted $c^r(j)$ and the blue segment count is denoted $c^b(j)$. We will set $c(j)=c^r(j)-c^b(j)$ and refer to it as the \emph{count of row $j$}. In the above display on the right of each row we have attached its row count. The following observation will be crucial in our development. \begin{lem}\label{l-DifferenceLevelCount} Let $T(\Sigma,R)$ be any path diagram. It holds for every integer $j\ge1$ that \begin{align} c(j)-c(j-1)= \# \{\text{arrows starting at level } j\} -\#\{\text{arrows ending at level } j\}. 
\end{align} \end{lem} \vspace{-4mm} \addpic[.48\textwidth]{FourCase.pdf}{The difference $c(j)-c(j-1)$ is $1$ in the left two cases, is $-1$ in the right two cases, and is $0$ in the two cases discussed in the proof. \label{fig:FourCases}} \begin{proof} Let us investigate the contribution to the difference $c(j)-c(j-1)$ from a single arrow $A$. The contribution is $0$ if either i) $A$ has no segments in rows $j$ and $j-1$, or ii) $A$ has segments in both rows $j$ and $j-1$. In both cases, it is clear that $A$ can neither start nor end at level $j$. Thus the remaining cases are as listed in Figure \ref{fig:FourCases}. \end{proof} It will be convenient to say that a path diagram $T(\Sigma,R)$ is \emph{balanced} if all its row counts are equal to $0$. The word $\Sigma$ is said to be the \emph{$(S,W)$-word} of a Dyck path $D$ in $\cal D_{m,n}$, if it is obtained by placing an $S$ when $D$ takes a South end (hence a North step) and a $W$ when $D$ takes a West end (hence an East step).
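The row counts and the identity of Lemma \ref{l-DifferenceLevelCount} can be spot-checked numerically. A small self-contained sketch (our own illustration, with the same $S$/$W$ letter encoding):

```python
from collections import defaultdict

def row_counts(word, R, m, n):
    """c(j) for the path diagram T(word, R): a red arrow ('S') starting
    at rank r puts a segment in rows r..r+m-1, a blue arrow ('W') in
    rows r-n..r-1; c(j) is red minus blue segments in row j."""
    c = defaultdict(int)
    for step, r in zip(word, R):
        if step == 'S':
            for j in range(r, r + m):
                c[j] += 1
        else:
            for j in range(r - n, r):
                c[j] -= 1
    return c

# check: c(j) - c(j-1) = #starts at level j - #ends at level j
word, R, m, n = 'SSWWW', [0, 1, 5, 5, 5], 3, 2
c = row_counts(word, R, m, n)
for j in range(0, 8):
    starts = sum(1 for r in R if r == j)
    ends = sum(1 for step, r in zip(word, R)
               if (r + m if step == 'S' else r - n) == j)
    assert c[j] - c[j - 1] == starts - ends
```

The total row count is always $0$ (there are $nm$ red and $mn$ blue segments), and a balanced diagram is one whose individual counts all vanish.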
It is obvious that the row counts of $T(\overline{\Sigma},\overline{R})$ are all $0$ since, in each row every red segment is followed by a blue segment. Now let $T(\Sigma,R)$ be the path diagram of same height $N$ with $\Sigma$ the $(S,W)$-word of $D$ and $R$ its rank sequence. By definition of the Sweep map, the rank sequence $R$ is obtained by permuting in increasing order the components of $\overline{R} $. Since the co-primality of $(m,n)$ assures that $\overline R$ has distinct components, $R$ is necessarily a nonnegative increasing sequence. Likewise, the word $\Sigma$ is obtained by rearranging the letters of $\overline {\Sigma}$ by the same permutation. Thus we may say that the same permutation can be used to change $T(\overline{\Sigma},\overline{R})$ into $T( \Sigma , R )$. Since this operation only permutes segments within each row, it follows that all the row counts of $T( \Sigma , R )$ must also be $0$. This proves the necessity. For the sufficiency, suppose that the path diagram $T(\Sigma,R)$ is balanced, with $D$ the Dyck path whose word is $\Sigma$ and $R$ a weakly increasing sequence. Then by lemma 1 it follows that for every level $j$, either i) no arrow starts or ends at this level, or ii) if $k>0$ arrows end (start) at this level then exactly $k$ arrows start (end) at this level. This given, we will construct a Dyck path $\overline{D}$ by the following algorithm. Starting at level $0$ we follow the first arrow, which we know is necessarily red and starts at level $0$. This arrow ends at level $m$. Since there is at least one arrow that starts at this level follow the very next arrow that does. Proceeding recursively thereafter, every time we reach a level, we follow the very next arrow that starts at that level. This process stops when we are back at level $0$, and we must since in $\Sigma$ there are $n$ $S$ and $m$ $W$. Let $\overline D$ be the resulting path. 
Using the colors of the successive arrows of $\overline D$ gives us the $\overline{\Sigma}$ word of $\overline{D}$. Now notice that $\overline D$ must be a path in ${\cal D}_{m,n}$ since all its starting ranks are nonnegative due to the weakly increasing property of $R$ and therefore they must necessarily be distinct by the co-primality of $(m,n)$. In particular, if $\overline R$ denotes the sequence of starting ranks of $\overline D$ we are also forced to conclude that its components are distinct. Since the components of $\overline R$ are only a rearrangement of the components of $R$ we deduce that $R$ must have been strictly increasing to start with. This implies that $D$ must be a Sweep map image of $\overline D$ since the successive letters of $\Sigma$ can be obtained by rearranging the letters of $\overline {\Sigma}$ by the same permutation that rearranges $\overline R$ to $R$. This completes the proof of sufficiency. \end{proof} \def \ovvR {\overline R} \def \ovvr {\overline r} This given, we can easily see that the validity of our ``strong" algorithm hinges on establishing that it produces a balanced path diagram after a finite number of steps. Theorem 2 allows us to relax the strictly increasing requirements on the rank sequences of the successive path diagrams produced by the algorithm. The \texttt{WeakFindRank} algorithm, defined below, has precisely that property. This results in a simpler proof of the termination property of both algorithms. \subsection{Algorithm \texttt{WeakFindRank} and the Justification}\ \noindent Algorithm \texttt{WeakFindRank} \noindent \textbf{Input:} A path diagram $T(\Sigma,R^{(0)})$ with $\Sigma$ the word of a Dyck path $D\in \cal D_{m,n}$, a weakly increasing rank sequence $R^{(0)} =(r_1^{(0)},r_2^{(0)},\ldots ,r_{m+n}^{(0)})$. \noindent \textbf{Output:} A balanced path diagram $T(\Sigma,\ovvR)$. 
It will be convenient to keep the common height equal to $N$ for all the successive path diagrams constructed by the algorithm, where $N=U+2mn$, with $U=\max(R^{(0)})+m+1$. \begin{enumerate} \item[Step 1] { Starting with $T(\Sigma,R^{(0)})$ repeat the following step until the resulting path diagram is balanced. } \vskip .1in \item[Step 2] { In $T(\Sigma,R^{(s)})$, with $R^{(s)} =(r_1^{(s)},r_2^{(s)},\ldots ,r_{m+n}^{(s)}),$ find the lowest row $j$ with $c(j)>0$ and find the rightmost arrow that starts at level $j$. Suppose that arrow starts at $(i,j)$. Move up the arrow one level to construct the path diagram $T(\Sigma,R^{(s+1)})$ with $ r_{i}^{(s+1)}= r_{i}^{(s)}+1$ and $ r_{i'}^{(s+1)}= r_{i'}^{(s)}$ for all $i'\neq i$. If all the row counts are $\le 0$ then stop the algorithm, since all row counts must necessarily vanish.} \end{enumerate} \addpic[.45\textwidth]{ShiftArrow.pdf}{Shifting up one unit an arrow from level $a$ to level $b$ will decrease $c(a)$ by 1 and increase $c(b)$ by 1. \label{fig:ShiftArrow}} Figure \ref{fig:ShiftArrow} shows that we are weakly reducing the number of rows with positive row counts in Step 2. It also makes the following key observation evident. \begin{lem}\label{l-keep-nonnegative} If at some point $c(k)$ becomes $\ge 0$ then for ever after it will never become negative. In particular, since $c(k)= 0$ with $k>U$ for the initial path diagram $T(\Sigma,R^{(0)})$ we will have $c(k)\ge 0$ when $k>U$ for all successive path diagrams produced by the algorithm. \end{lem} \begin{proof} The lemma holds true because we only decrease a row count when it is positive. \end{proof} We need some basic properties to justify the algorithm. \begin{lem}\label{l-basic-property} We have the following basic properties. \begin{enumerate} \setlength\itemsep{2mm} \item[$(i)$] { If row $j$ is the lowest with $c(j)>0$ then there is an arrow that starts at level $j$. 
In this situation, we say that we are \emph{working} with row $j$.} \item[$(ii)$] { The successive rank sequences are always weakly increasing.} \item[$(iii)$] { If $T(\Sigma,R)$ has no positive row counts, then it is balanced. Consequently, if the algorithm terminates, the last path diagram is balanced.} \end{enumerate} \end{lem} \begin{proof}\ \begin{enumerate}\setlength\itemsep{2mm} \item[$(i)$] By the choice of $j$, we have $c(j)>0$ and $c(j-1)\le 0$. Thus $c(j)-c(j-1)>0$, which by Lemma \ref{l-DifferenceLevelCount}, shows that there is at least one arrow starting at rank $j$. \item[$(ii)$] Our choice of $i$ in step (2) assures that the next rank sequence remains weakly increasing. \item[$(iii)$] Since each of our path diagrams $T(\Sigma,R)$ has $n$ red arrows of length $m$ and $m$ blue arrows of length $n$, the total sum of row counts of any $T(\Sigma,R)$ has to be $0$. Thus if $T(\Sigma,R)$ has no positive row counts, then it must have no negative row counts either, and is hence balanced. \end{enumerate} \vspace{-8mm} \end{proof} \begin{proof}[Justification of Algorithm \texttt{WeakFindRank}] By Lemma \ref{l-basic-property}, we only need to show that the algorithm terminates. To prove this we need the following auxiliary result. \begin{lem}\label{l-redcount} Suppose we are working with row $k$, that is $c(k)>0$ and $c(i)\le 0$ for all $i<k$. If row $\ell $ has no segments for some $\ell <k$, then the current path diagram $T(\Sigma,R')$ has no segments below row $\ell$. \end{lem} \begin{proof}\hspace{-2cm}\vspace{-1cm}\addpic[.38\textwidth]{EmptyRow.pdf}{When row $\ell$ has no segments, there will be no segments in regions $B$ and $C$.\label{fig:emptyrow}} \vspace{.1cm} \hspace{5mm} Suppose to the contrary that $T(\Sigma,R')$ has a segment below row $\ell$, then let $V$ be the right most arrow that contains such a segment and say that it starts at column $i$. Since row $\ell$ has no segments, the starting rank of $V$ must be $\le \ell$. 
This implies that $i+1<m+n$ since the arrow that starts at level $k$ must be to the right of $V$ (by the increasing property of $R'$). This given, the current path diagram could look like in Figure \ref{fig:emptyrow}, where the two green (thick) lines divide the plane into 4 regions, as labelled in the display. The weakly increasing property of $R'$ forces no starting ranks in $C$, therefore there are no segments there. By the choice of $V$ there cannot be any segments in $B$. Thus the (gray) empty row $\ell$ forces no segments within both $B$ and $C$. Now notice that since $\Sigma$ is the word of a path $D\in \mathcal D_{m,n}$ the number of red segments to the left of column $i+1$ minus the number of blue segments to the left of that column must result in a number $s>0$. However, since $c(j)\le 0$ for all $j\le \ell$ it follows that $c(0)+\cdots +c(\ell-1)\le 0$. But since regions $B$ and $C$ have no segments it also follows that $s=c(0)+\cdots +c(\ell-1)\le 0$, a contradiction. \end{proof} \vskip -.1in Next observe that since each step of the algorithm increases one of the ranks by one unit, after $M$ steps we will have $|R^{(M)}-R^{(0)}|=\sum_{i=1}^{m+n}r_i^{(M)}-\sum_{i=1}^{m+n}r_i^{(0)} =M$. This given, if the algorithm iterates Step 2 forever, then the maximum rank will eventually exceed any given integer. In particular, we will end up working with row $k$ with $k$ so large that $k-U$ exceeds the total number $mn$ of red segments. At that point we will have $c(k)>0$ and $c(j)=0$ for all the $k-U$ values $j=U,U+1,\dots, k-1$. The reason for this is that we must have $c(j)\le 0$ for all $0\le j < k$ and by Lemma \ref{l-keep-nonnegative} we must also have $c(j)\ge 0$ for $j\ge U$. Now, by the pigeon hole principle, there must also be some $U\le \ell <k$ for which $c^r(\ell )=0$. But then it follows that $c^b(\ell)=c^r(\ell)-c(\ell)=0$, too. That means that row $\ell$ contains no segments. 
Then Lemma \ref{l-redcount} yields that there cannot be any segments below row $\ell$ either. This implies that the total row count is $ \sum_{j\ge 0} c(j) = \sum_{j\ge U} c(j) \ge c(k)>0 $, a contradiction. \end{proof} Thus the \texttt{WeakFindRank} algorithm terminates and we can draw the following important conclusion. \begin{theo}\label{t-onto} Given any $(m,n)$-Dyck path $D$ with $(S,W)$-word $\Sigma$ and any initial weakly increasing rank sequence $R^{(0)}=(r_1^{(0)},r_2^{(0)},\cdots ,r_{m+n}^{(0)})$, let $T(\Sigma,\oR)$ be the balanced path diagram produced by the \texttt{WeakFindRank} algorithm. Then the rank sequence \\ $\oR=(\orr_1,\orr_2,\cdots ,\orr_{m+n})$ will be strictly increasing. Moreover, the sequence $$ \widetilde R= (0,\orr_2-\orr_1,\orr_3-\orr_1,\ldots,\orr_{m+n}-\orr_1) $$ is none other than the increasing rearrangement of the rank sequence of a pre-image $\overline D$ of $D$ under the sweep map. \end{theo} \vskip -.5in \begin{proof} Clearly, the path diagram $T(\Sigma,\widetilde R)$ of height $N={\widetilde r}_{m+n}+m+1$ will also be balanced. Thus, by Theorem 2, $\widetilde R$ must be the increasing rearrangement of the rank sequence of a pre-image $\overline{D}$ of $D$. In particular not only $\widetilde R$ but also $\oR$ itself must be strictly increasing. \end{proof} This result has the following important corollary. \begin{theo}\label{t-important} For any co-prime pair $(m,n)$ the rational $(m,n)$-sweep map is invertible. \end{theo} \begin{proof} Lemma \ref{l-sweep-image-Dyck} shows that the rational $(m,n)$-sweep map is into. Theorem \ref{t-onto} gives a proof (independent of the Thomas-Wiliams proof) that it is onto. Since the collection of $(m,n)$-Dyck paths is finite, the sweep map must be bijective. \end{proof} Figure \ref{fig:fourpics} depicts the entire history of the \texttt{WeakFindRank} algorithm applied to a $(7,5)$-Dyck path $D$. Both $D$ (on the left) and its pre-image $\overline D$ (on the right) are depicted below. 
\begin{figure}[!ht] \centering{ \mbox{\includegraphics[height=5.5in]{fourpics.pdf}} } \caption{The top left picture is the balanced path diagram produced by the \texttt{WeakFindRank} algorithm. The top right picture is obtained by Theorem \ref{t-characterization}. \label{fig:fourpics} } \end{figure} \vskip -.01in The initial rank sequence that was used here is $R^{(0)}=(0,0,5,5,5,5,5,5,5,5,5)$. A boxed lattice square in column $i$ with an integer $k$ inside indicates that arrow $A_i$ was processed at the $k^{th}$ step of the algorithm. As a result $A_i$ was lifted from the level of the bottom of the square to its top level. For instance the square with $63$ inside indicates that the red arrow $A_7$ was lifted at the $63^{rd}$ step of the algorithm from starting at level $9$ to starting at level $10$. We also see that the last time that the arrow $A_8$ reached its final starting level at step $87$. The successive final starting levels of arrows $A_1,A_2,\ldots ,A_{12}$ give the increasing rearrangement of the ranks of the path $\overline D$. Notice, arrows $A_1$ and $A_3$ where never lifted. \def \TR {\widetilde R} \def \tr {\widetilde r} \def \oR {\overline R} \def \orr {\overline r} \section{Tightness of Algorithm \texttt{WeakFindRank} } Following the notations in Theorem \ref{t-onto}, the number of steps needed for Algorithm \texttt{WeakFindRank} is $|\overline R|-|R^{(0)}| = |\TR|-|R^{(0)}| + (m+n) \bar r_1.$ We will show that a specific starting path diagram can be chosen so that $\bar r_1 =0$. For two rank sequences $R=(r_1,r_2,\ldots ,r_{n+m})$ and $R'=(r_1',r_2',\ldots ,r_{n+m}')$ let us write $R\preceq R'$ if and only if we have $r_i\le r_i'$ for all $1\le i\le m+n$, if $r_i< r_i'$ for at least one $i$ we will write $R\prec R'$. The \emph{distance} of $R$ from $R'$, will be expressed by the integer $$ | R'-R| \, =\, \sum_{i=1}^{m+n} (r_i'-r_i)=\, \sum_{i=1}^{m+n} r_i'- \sum_{i=1}^{m+n} r_i. 
Given the $(S,W)$-word $\Sigma$ of an $(m,n)$-Dyck path $D$, we will call the initial starting sequence $R^{(0)}$ \emph{canonical for $\Sigma$} if it is obtained by replacing the first string of $S$ in $\Sigma$ by $0$'s and all the remaining letters by $m$, and call the balanced path diagram $T(\Sigma,\TR)$ yielded by Theorem \ref{t-onto} \emph{canonical for $\Sigma$}. Clearly $R^{(0)} \preceq \TR$. This given, we can prove the following remarkable result. \begin{theo}\label{t-tight} Let $\Sigma$ be the $(S,W)$-word of a Dyck path $D\in \mathcal{D}_{m,n}$. If $R^{(0)}$ and $T(\Sigma,\TR)$ are canonical for $\Sigma$ and $R$ is any increasing sequence which satisfies the inequalities $$ R^{(0)} \preceq R\preceq \TR $$ then the \texttt{WeakFindRank} algorithm with starting path diagram $T(\Sigma,R)$ will have as output the rank sequence $\TR$. \end{theo} \begin{proof} Notice that if $|\TR-R|=0$ there is nothing to prove. Thus we will proceed by induction on the distance of $R$ from $\TR$. Now assume the theorem holds for $|\TR-R|=K$. We need to show that it also holds for $|\TR-R|=K+1$. Suppose one application of step (2) on $R$ gives $R'$. We need to show that $R'\preceq \TR$. This done, since $R$ and $R'$ differ by only one unit, we will have $|\TR-R'|=K$ and then the inductive hypothesis would complete the proof. Thus assume if possible that this step (2) cannot be carried out because it requires increasing by one unit an $r_i=\tr_i$. Suppose further that under this step (2) the level $k$ was the lowest with $c(k)>0$ and thus the arrow $A_i$ was the rightmost that started at level $k$. In particular this means that $r_i=\tr_i=k$. Since $| \TR-R|\ge 1$, there is at least one $i'$ such that $r_{i'}<\tr_{i'}$. If $r_{i'}=k'$ let $i'$ be the rightmost with $r_{i'}=k'$. Define $R''$ to be the rank sequence obtained by replacing $r_{i'}$ by $r_{i'}+1$ in $R$. 
The row count $c(k')$ is decreased by $1$ and another row count is increased by $1$, so that $c(k)$ in $R''$ is still positive. Since $|\TR-R''|=K$ the induction hypothesis assures that the \texttt{WeakFindRank} algorithm will return $\TR$. But then in carrying this out, we have to work on row $k$, sooner or later, to decrease the positive row count $c(k)$. But there is no way the arrow $A_i$ can stop being the rightmost starting at level $k$, since arrows to the right of $A_i$ start at a higher level than $A_i$ and are only moving upwards. Thus the fact that the \texttt{WeakFindRank} algorithm outputs $\TR$ contradicts the fact that the application of step (2) to $R$ cannot be carried out. Thus we will be able to lift $A_i$ one level up as needed and obtain the sequence $R'$ obtained by replacing $r_i$ by $r_i+1$ in $R$. But now we will have $|\TR-R'|=K$ with $R'\prec \TR$ and the inductive hypothesis (applied to $R'$) will assure us that the \texttt{WeakFindRank} algorithm starting from $R$ will return $\TR$ as asserted. \end{proof} \addpic[.295\linewidth]{RemoveASquare.pdf}{A \\ $(7,5)$-Dyck path with area $4$. Removing the black cell changes the rank $18$ to $18-7-5=6$. \label{fig:RemoveASquare} } It is clear now that the complexity of the \texttt{WeakFindRank} algorithm is $O(|\TR|)$. Recall that reordering $\TR$ gives the rank set $\{r_1,r_2,\dots, r_{m+n}\}$ of $\overline D$. It is known and easy to show that $$ \textrm{area} (\overline D)= \frac{1}{m+n}\left( \sum_{i=1}^{m+n} r_i -\binom{m+n}{2} \right),\qquad\qquad\qquad\qquad\qquad $$ where $\textrm{area} (\overline D)$ is the number of lattice cells between $\overline D$ and the diagonal. Indeed, from Figure \ref{fig:RemoveASquare}, it should be evident that reducing the area by $1$ corresponds to reducing the sum $r_1+\cdots +r_{m+n}$ by $m+n$. 
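This rank-sum formula for the area can be exercised directly. The following is a small sketch in Python; the rank multiset used below is hypothetical, chosen only so the arithmetic is visible, and is not claimed to arise from an actual $(7,5)$-Dyck path.

```python
from math import comb

def area_from_ranks(ranks, m, n):
    # area(D) = (sum of ranks - binom(m+n, 2)) / (m + n)
    total = m + n
    s = sum(ranks) - comb(total, 2)
    assert s % total == 0, "rank sum inconsistent with an (m, n) rank multiset"
    return s // total

# Hypothetical rank multiset with m + n = 12: a multiset summing to
# binom(12, 2) + 4*12 = 114 gives area 4.  Lowering one rank by
# m + n = 12 (removing one cell, as in the figure) lowers the area by 1.
ranks = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 59]  # sum = 114
print(area_from_ranks(ranks, 7, 5))             # -> 4
ranks[-1] -= 12
print(area_from_ranks(ranks, 7, 5))             # -> 3
```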
It follows that \begin{align} \label{e-TR-area} |\TR| = (m+n)\textrm{area} (\overline D)+\binom{m+n}{2} =O((m+n)\textrm{area} (\overline D)) .\qquad\qquad\qquad\qquad\qquad \end{align} Theorem \ref{t-tight} together with \eqref{e-TR-area} gives the following result. \begin{cor} Given any $(m,n)$-Dyck path $D$, its pre-image $\overline D$ can be produced in $O((m+n) \area( \overline D))$ running time. \end{cor} \begin{proof} Let $\Sigma$ be the $(S,W)$-word of $D$. We first construct the path diagram $T=(\Sigma, R^{(0)})$ with $R^{(0)}$ being canonical for $\Sigma$ and compute the row counts of the path diagram. Next we use the \texttt{WeakFindRank} algorithm to update $T$ until we get the balanced path diagram $(\Sigma, \TR)$ by Theorem \ref{t-tight}. Finally we use Theorem \ref{t-onto} to find the pre-image $\overline D$. Iteration only appears in the middle part, where the \texttt{WeakFindRank} algorithm performs Step 2 $|\TR|- |R^{(0)}|$ times. In each application of Step 2, we search for the lowest positive row count $c(j)$, then search for the rightmost rank $r_i$ that is equal to $j$, and finally update $r_i$ to $r_i+1$ and the row counts at \emph{only} two rows (see Figure \ref{fig:ShiftArrow}). Therefore, the total running time is $O( |\TR|) $, and the corollary follows by \eqref{e-TR-area}. \end{proof} \def \HR {\widehat R} \def \Hr {\widehat r} \def\stco{\mathop{\texttt{sc}}} \section{Validity of Algorithm \texttt{StrongFindRank} } Let $R=(r_1\le r_2\le \cdots \le r_l)$ be a sequence of nonnegative integers of length $l$. The \hbox{{\it strict cover $R'=\stco (R)=(r_1'<r_2'< \cdots < r_l')$ of $R$}} is recursively defined by $r_1'=r_1$ and $r_i'=\max(r_{i},r_{i-1}'+1)$ for $i\ge 2$. It is the unique minimal strictly increasing sequence satisfying $R'\succeq R$. The following principle is straightforward. 
\medskip \centerline{\emph{ If $R\prec \oR$ with $R$ weakly increasing and $\oR$ strictly increasing, then $\stco (R) \preceq\oR $}.} A direct consequence is that $\HR^{(0)} = \stco( R^{(0)}) \preceq \TR$ if $R^{(0)}$ is canonical for $\Sigma$. This sequence is exactly the starting rank sequence of our ``strong" algorithm (see Figure \ref{fig:InitialRank}). It will be good to review our definitions before we proceed. \noindent Algorithm \texttt{StrongFindRank} \noindent \textbf{Input:} A path diagram $T(\Sigma,\HR^{(0)})$ with $\Sigma$ the word of a Dyck path $D\in \cal D_{m,n}$, the nonnegative strictly increasing rank sequence $\HR^{(0)}$ as above. \noindent \textbf{Output:} The balanced path diagram $T(\Sigma, \HR)$. It will be convenient to keep the common height equal to $N$ for all the successive path diagrams constructed by the algorithm, where $N=U+2mn$, with $U=\max(R^{(0)})+m+1$. \begin{enumerate} \setlength\itemsep{2mm} \item[Step 1] {Starting with $T(\Sigma,\HR^{(0)})$ repeat the following step until the resulting path diagram is balanced. } \item[Step 2] {In $T(\Sigma,\HR^{(s)})$, with $\HR^{(s)} =(\Hr_1^{(s)},\Hr_2^{(s)},\ldots ,\Hr_{m+n}^{(s)})$ find the lowest row $j$ with $c(j)>0$ and find the unique arrow that starts at level $j$. Suppose that arrow starts at $(i,j)$. Define $R'$ to be the rank sequence obtained from $\HR^{(s)}$ by the \hbox{replacement} \hbox{$\Hr_i^{(s)}\longrightarrow\Hr_i^{(s)}+1$}, and set $\HR^{(s+1)}=\stco(R')$. Construct the path diagram \hbox{$T(\Sigma,\HR^{(s+1)})$} and update the row counts. If all the row counts of $T(\Sigma,\HR^{(s)})$ are $\le 0$ then stop the algorithm and return $\HR^{(s)} $}, since all these row counts must vanish. \end{enumerate} This given, the validity of the \texttt{StrongFindRank} algorithm is an immediate consequence of the following surprising result. 
\begin{theo}\label{t-surprise} Let $D\in \mathcal{D}_{m,n}$ with $(S,W)$-word $\Sigma$, and let the balanced path diagram $(\Sigma,\TR)$ be canonical for $\Sigma$. Then all the successive rank sequences $\HR^{(s)}$ produced by the \texttt{StrongFindRank} algorithm satisfy the inequality \begin{align} \label{e-1} \HR^{(s)}\preceq \TR \end{align} and since the successive rank sequences satisfy the inequalities \begin{align}\label{e-2} \HR^{(0)}\prec \HR^{(1)}\prec \HR^{(2)}\prec \HR^{(3)}\prec \cdots \prec \HR^{(s)}, \end{align} there will necessarily come a step when $T(\Sigma, \HR^{(s)})=T(\Sigma,\TR)$. At that time the algorithm will stop and output $\TR$. \end{theo} \begin{proof} The inequality \eqref{e-2} clearly holds since we always shift arrows upwards. We prove the inequality \eqref{e-1} by induction on $s$. The basic fact that will play a crucial role is that the output $\TR$ is strictly increasing. See Theorem \ref{t-characterization}. The case $s=0$ of \eqref{e-1} is obviously true since $\HR^{(0)}$ is the strict cover of $R^{(0)} \preceq \TR$. Assume $\HR^{(s)}\preceq \TR$ and we need to show \eqref{e-1} holds true for $s+1$. Now $\HR^{(s+1)}$ is the strict cover of $R'$, where $R'$ is the auxiliary rank sequence used by Step 2 of the \texttt{StrongFindRank} algorithm. Since $R'$ is precisely the successor of $\HR^{(s)}$ by Step 2 of the \texttt{WeakFindRank} algorithm, it will necessarily satisfy the inequality $R'\prec \TR$ by Theorem \ref{t-tight}. Our principle then guarantees that we will also have $$ \HR^{(s+1)}\prec \TR $$ unless $\HR^{(s)}= \TR$ and the \texttt{StrongFindRank} algorithm terminates. \end{proof} \begin{rem} This proof makes it evident that, to construct the pre-image of an $(m,n)$ Dyck path, the \texttt{StrongFindRank} algorithm will be more efficient than the \texttt{WeakFindRank} algorithm. 
This is partly due to the fact that the distances $|\HR^{(s+1)}-\HR^{(s)}|$ do turn out to be bigger than one unit most of the time, as we can see in the following display. \end{rem} \begin{figure}[!ht] \centering{ \mbox{\includegraphics[height=4in]{MAGO.pdf}} } \caption{Comparison of the WeakFindRank algorithm and the StrongFindRank algorithm by an example.\label{fig:MAGO}} \end{figure} In the middle of Figure \ref{fig:MAGO} we have a Dyck path $D$, and below it, its pre-image $\overline D$. To recover $\overline D$ from $D$ we applied to $D$ the \texttt{WeakFindRank} algorithm (on the left) and the \texttt{StrongFindRank} algorithm (on the right). The display shows that the ``weak'' algorithm required about $3$ times more steps than the ``strong'' algorithm. The numbers in the cyan squares reveal that, in several steps, two or more arrows were lifted at the same time. On one step as many as $4$ arrows were lifted (step $13$). The other step-saving feature of the ``strong'' algorithm is due to starting from the strict cover of the canonical starting sequence. This is evidenced by the difference between the number of white cells below the colored ones on the left and on the right diagrams. \section{Discussion and Future Plans} This work was done after the authors read \cite{Nathan} version 1, especially after the first named author talked with Nathan Williams. The concept ``balanced path diagram'' is a translation of ``equitable partition'' in \cite{Nathan}. The intermediate object ``increasing balanced path diagram'' is what we missed in our early attempts: The obvious $0$-row-count property of Dyck paths gives the necessary part of Theorem \ref{t-characterization}, but we never considered the $0$-row-count property to be sufficient until we read the paper \cite{Nathan}. Once Theorem \ref{t-characterization} is established, inverting the sweep map is reduced to searching for the corresponding increasing balanced path diagram. 
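The strict-cover update that drives this search in \texttt{StrongFindRank} is simple to state operationally. The following is a minimal sketch in Python, taking a maximum at each step so that the output is the smallest strictly increasing sequence dominating the input; the sample input is the initial rank sequence from the $(7,5)$ example of Figure \ref{fig:fourpics}, used here only as illustration.

```python
def strict_cover(ranks):
    """Minimal strictly increasing sequence dominating a weakly
    increasing input: r'_1 = r_1, r'_i = max(r_i, r'_{i-1} + 1)."""
    out = []
    for r in ranks:
        out.append(r if not out else max(r, out[-1] + 1))
    return out

print(strict_cover([0, 0, 5, 5, 5, 5, 5, 5, 5, 5, 5]))
# -> [0, 1, 5, 6, 7, 8, 9, 10, 11, 12, 13]
```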
Our algorithm is similar to the Thomas-Williams algorithm in the sense that both algorithms proceed by picking an initial candidate and then repeating an identical updating process until it terminates. In the rational Dyck path model, our updating process is natural and has more freedom than the Thomas-Williams algorithm. For instance, we can start with any weakly increasing rank sequence. In an upcoming paper, we will extend the arguments in this paper to a more general class of sweep maps. Such sweep maps have been defined in \cite{sweepmap}. Though the invertibility of these sweep maps can be deduced from Nathan's modular sweep map model \cite{Nathan}, they deserve direct proofs. Even the rational sweep map needs further study. The $(m,n)$-rational sweep map on $\mathcal D_{m,n}$ is known to take the \texttt{dinv} statistic to the \texttt{area} statistic. This result is proved combinatorially by Gorsky and Mazin in \cite{Gorsky-Mazin}, but the proof is indirect. Our view of Dyck paths leads to a visual description of the \texttt{dinv} statistic and a simple proof of the \texttt{dinv} and \texttt{area} result. See \cite{dinv-area} for detailed information and references. {\small \textbf{Acknowledgements:} The first named author is grateful to Nathan Williams for the time and effort that he spent to communicate his pioneering proof of invertibility of the general sweep map. The first named author was supported by NSF grant DMS13--62160. The second named author was supported by National Natural Science Foundation of China (11171231).
12/15/2011 1 Comment I am also a lover of European music. When I heard the European music album once before, I really went to a new world of music and it delightedly made me to live in that composing. That much influencing composing that was. Thanks for the delightful blog!
Our aim at Teaching Displays is to take some of the pressure off busy teachers by providing colourful classroom prompts that help create a stimulating and interactive learning environment. These resources can be used either on their own or alongside children’s work. We cover a wide range of topics from both the National Curriculum and revised Primary Strategy. Display Sets – easy to use, just pop out the pieces and attach to your board to create an interesting and motivational display. Large Display Sets feature larger pieces and cover a greater area of display board. Display Sets are great value and will cover most classroom boards. Learning Charts – motivational prompts for use when teaching more specific topics, with the added bonus of activity guides printed on the back. Letters – save time cutting and sticking; just pop out the individual letters and use again and again for personalising any classroom display. Borders – our themed borders are an essential part of any display. Cut-outs – handy packs of themed die-cut cards, great for adding substance and style to your display. Cut-outs come in three sizes: mini, regular and jumbo. Banners – up to 3m long, banners are great for getting important messages across to your learners. Classroom Essentials – no classroom is complete without a selection from our classroom essentials range, including presentation boards and our innovative classroom clothesline to create extra display space or for portable displays. Learning Walls – make an impact with one of our learning walls or giant maps, huge printed corrugated boards which can be coloured in. Topic Packs – great value bundles of our popular products.
Scarlet Pimpernel, The (1934) Carlton Home Entertainment Sleeve Synopsis: "Elegant, poised and witty." DAILY TELEGRAPH 'One of the most romantic and durable of all swashbucklers.' NEW YORKER THE SCARLET PIMPERNEL is one of world cinema's most entertaining historical adventure romances and is based on the much-loved novel by Baroness Orczy. Starring LESLIE HOWARD as Sir Percy Blakeney, alias The Scarlet Pimpernel, the action takes place against the dramatic backdrop of the French Revolution. As Sir Percy, he takes on the role of the most ineffectual society fop to hide his true identity as the dashing adventurer who saves French aristocrats from death by the guillotine. RAYMOND MASSEY stars as Chauvelin, the French Republic's cunning and dangerous Ambassador to England, who is bent on unmasking Sir Percy. Unfortunately, Blakeney's beautiful French-born wife, Marguerite (MERLE OBERON), is not aware of his secret and despises him for his languid airs and self-indulgences, and when Chauvelin captures her brother, Marguerite is forced to betray the Pimpernel. In the meantime, Sir Percy has left for France on another rescue mission. Marguerite learns of her husband's true identity and follows him to France to warn him. Can she reach him in time to save his life? Director: Harold Young Cast: Leslie Howard, Joan Gardner, Merle Oberon, Raymond Massey, Nigel Bruce, Bramwell Fletcher, Anthony Bushell, Walter Rilla Genre: Drama, Action and Adventure Additional Notes: Official Run Time: 93 mins. Scarlet Pimpernel, The (1934) 3007401103 Carlton Small Box - Retail Tape Or find "Scarlet Pimpernel, The (1934)" on VHS and DVD / BluRay at Amazon. Additional Images: Thank you contributors silveroldies
Top Left – Shell Pink Sequin Snowflake Christmas Sweater £17.99
Top Middle Left – Knitted Jumper £14.99
Top Middle Right – Knitted Jumper £14.99
Top Right – Cream Reindeer Fluffy Christmas Jumper £24.99
Bottom Left – Mint Green Fluffy Fairisle Christmas Jumper £27.99
Bottom Middle Left – Heidi Ho Ho Homies Jumper £18
Bottom Middle Right – Petra Polar Bear Christmas Jumper £15
Bottom Right – Cream Seasons Tweetings Robin Christmas Jumper £14.99

Top Left – Blue Pug Christmas Sweater £14.99
Top Middle – Light Grey Marl Christmas Bauble Micky Printed Sweatshirt £26
Top Right – Black Bah Humbug Donald Duck Christmas Sweater £19.99
Bottom Left – Navy Santa Glasses Christmas Jumper £14.99
Bottom Middle – Cream Reindeer Jumper £38
Bottom Right – Navy Snow Scene Knitted Jumper £19.99
Forgoing a gown, Rosamund Pike attended the 2014 Met Gala on Monday (May 5) in New York City. Also somewhat forgoing the theme, like so many others, the British actress wore a Louis Vuitton Fall 2014 croc zipper vest paired with an embroidered spiral skirt. Styling the look with Louis Vuitton ‘New Day’ pumps, a gold LV trunk necklace and Jennifer Fisher rings, this look is quite a side step for the Erdem-loving actress, who doesn’t normally toe an edgy line. Did she pull the look off for you? Credit: Style.com & Getty
Posted in Mississippi March 15, 2016 13 Perfect Places To Go In Mississippi If You’re Feeling Adventurous From state parks to hidden locales, these 13 places in Mississippi are perfect for all those adventure seekers out there. The longest of the state’s Rails to Trails’ conversions, the Tanglefoot Trail spans 43.6 miles, and takes adventure seekers on a history-filled journey through fields, forests, meadows, and wetlands, navigating the same paths as the Chickasaws and Meriwether Lewis. Located in the foothills of the Appalachian Mountains, the award-winning Tishomingo State Park is a must for outdoor enthusiasts. The park has landed spots on several national “top” lists, including the “Top 25 Canoeing Spots,” “Top 50 Hiking Trails,” and “Top 50 Fishing Spots.” Additionally, the park’s location also makes it great for rock climbing. The rocky outcropping and bluffs found in Tishomingo don’t exist anywhere else in the state. The only one of its kind in this part of the country, Flora’s Petrified Forest has been 36 million years in the making, and offers visitors the chance to view fossils that exhibit perfectly preserved details. Exploring this natural wonder is a breeze thanks to a self-guided nature trail, which includes many points of interest and informative markers. Believed to have been occupied by Native Americans several thousand years ago, the area that is now Sky Lake is actually an abandoned channel of the Mississippi River. The area is home to several ancient bald cypress trees in varying sizes, with the biggest measuring 47’ in circumference and 70’ in height – one of the tallest in the state. Visitors can take in all Sky Lake has to offer via a 1700’ boardwalk or a 2.6-mile paddling trail, both of which navigate the ancient forest. This 450-mile foot trail is known as the "Old Trace." The IMMS’ Dolphin Interaction Program is exactly the kind of thing adventure seekers’ dreams are made of. 
Those taking part in the hour-long program will have the opportunity to feed, touch, interact with, and, of course, swim with a dolphin. The one-of-a-kind experience will surely be remembered forever. For more information, click here. At over 24,000 acres, this refuge provides plenty of room for fishing, hunting, photography, hiking, and bird watching. The refuge’s diverse habitat, which consists of bottomland hardwoods, cypress swamps, upland hardwoods on the loess bluffs, and small cliffs, is home to a variety of wildlife, making for some of the best views in the state. The DeSoto County Greenways program is made up of trails, which are perfect for everything from walking and hiking to horseback riding and biking. Aside from traditional trails, the area is filled with water trails that are ideal for canoeing and kayaking. Visit the area in December. Click here for more information. Known by locals as the “Rez,” the Ross Barnett Reservoir was completed in 1965. The 105-mile shoreline provides exceptional views, especially since the reservoir is bounded on the north by the Natchez Trace. Attracting thousands of visitors annually, the 33,000-acre lake and surrounding area are ideal for an array of outdoor activities, including boating, sailing, water sports, camping, fishing, picnicking, and bird watching. About as hidden as you can get, this spot lets you thoroughly explore the hardwood bottomland canyon, which is home to a number of rare plants and animals. To learn more, visit the Nature Conservancy. Learn more by visiting Seminary Canoe Rental. The Clark Creek Nature Area encompasses 700 acres and includes about 50 waterfalls, which range in size from 10’ to more than 30’ in height. Some minimal hiking is required to access the waterfalls but it is fairly easy since there are both primitive and improved trails. The nearby Tunica Hills Campground offers RV hook-ups, cabin rentals, and primitive camping, making an overnight stay totally doable. 
What are some of your favorite places to visit in the state when you’re feeling adventurous?
Most of the holidays are family-based no matter what religion you are, and Thanksgiving is the ultimate family holiday of all. But there is one holiday that screams party: New Year’s Eve! Of course, kicking off the New Year with your children is incredibly sweet and sentimental as they matter so much to you, but when was the last time you and your husband or partner stepped out to celebrate in style, alone? Probably not in ages and for some of you, maybe even a decade! Yet all December long, moms and dads spend money and time putting together the perfect holidays for their children, so why shouldn’t a couple choose to do something nice for each other by taking the night off from parenting to spend quality time letting loose with each other? Read More: Why You and Your Partner Deserve a Kids-Free New Year’s Eve Do It!
Proposed building moratorium on parts of Highland Road concerns Growth Coalition The Baton Rouge Growth Coalition expressed concern today about a proposal by Metro Councilman Chandler Loupe to impose a 90-day building moratorium along a portion of Highland Road. Loupe wants the moratorium to take effect along the busy roadway between Siegen Lane and Perkins Road while a Planning Commission subcommittee completes a review of rural zoning codes and ordinances. The item is set to come before the council on Wednesday. In a letter, Growth Coalition Executive Director Larry Bankston says while the organization understands the need for planning and zoning studies, they can be conducted without a moratorium. “Our organization does not take positions on any specific projects before the Planning Commission or the council, but the proposed moratorium by Councilman Loupe will do great harm to projects already underway and create an impression that Baton Rouge is antidevelopment,” Executive Director Larry Bankston writes. As written, Bankston says, the moratorium leaves the city-parish open to legal challenges because of its wording. “For example, the proposed draft of the moratorium does not address what a ‘new development application’ is or what it includes,” reads the letter from Bankston, which goes on to suggest it should not include building permits on any properties, or applications that are based on previously submitted plans or a final subdivision plat for approval. Attempts to reach Loupe for comment as of this afternoon’s deadline were unsuccessful. The moratorium is the latest step in the ongoing debate about development along parts of Highland Road. Several residents of Highland and Pecue Lane asked the council last year to change the zoning of hundreds of properties in a three-mile radius from rural to REA1. Rural zoning allows for about seven lots per acre, while REA1 allows only one lot per acre. 
Residents in favor of the change have said they are trying to ensure development in the area is more controlled because the roads in that area are already overburdened and residents have concerns about flooding if more developments come to their little stretch of town, which is one of the few places left in Baton Rouge to build. Councilman Buddy Amoroso has asked the Planning Commission to conduct a study of rural zoning codes, and Loupe wants the moratorium in place while that study is being conducted. The Metro Council meets at 4 p.m. Wednesday on the ninth floor of City Hall, 222 St. Louis St. See the full agenda. —Ryan Broussard
I was born in Pittsburgh on May 6, 1989. My grandma still lives here. When I was young, we moved around a lot because my dad, Craig Heyward, played 11 years in the NFL for five different teams. When I was 6, we moved to Atlanta so that my dad could play for the Falcons, … Cameron Heyward, Gridiron Philanthropist
Karla Boos: “All the World’s a Stage”
Randy Gilson, Genius of the Human Spirit
Farnam Jahanian, President of Carnegie Mellon University
John Kasich, Government Leader
Judith Hansen O’Toole, Art Lover
John Fetterman, Public Servant
Helen Hanna Casey, Real Estate Maven
Ahmad Jamal, Jazz Master
Christina Cassotis, Allegheny County Airport Authority
Rich Fitzgerald, Allegheny County Executive
Jim Withers, M.D., Street Doctor
Clint Hurdle, Baseball Impresario
Esther L. Barazzone, Educator & Administrator
J. Kevin McMahon, Arts & Culture Executive
Ted Pappas, Impresario
Jared L. Cohon, Academician
Date/Time Date(s) - 08/06/2019 6:30 pm - 9:00 pm Location Vankleek Hill Baptist Church Come join us for Youth Group at the church – Every other Saturday at 6:30pm We’re planning lots of fun activities and a road trip to a Youth Weekend on the May Long Weekend to Pitch and Praise. On Saturdays, we do a study of God’s word, sing songs, play games and have a great time together. Will you join us? For information please email will.monterroza@gmail.com or just come join us on a Saturday night.
TITLE: A question about groups of intermediate growth, II QUESTION [3 upvotes]: This question arose in the comments of A question about groups of intermediate growth. I think it might be interesting to put it more in evidence. Let $G$ be a f.g. group with a fixed symmetric set of generators $S$ and denote by $B(n)$ the ball of radius $n$ about the identity w.r.t. the word metric induced by $S$. Fix an integer $k\geq1$ and define $\overline\zeta_k(G)=\lim\sup_n\frac{|B(nk+k)|}{|B(nk)|}$. Observe that: if $G$ has polynomial growth, then $\overline\zeta_k(G)=1$ for all $k$; if $\overline\zeta_k(G)=1$ for all $k$, then $G$ has sub-exponential growth. General question: What can we say about $\overline\zeta_k(G)$ if $G$ has intermediate growth? Martin Kassabov, in the comment to my question, suspects that it should always (or most of the time) be equal to $1$, but I cannot find even a single example of a group of intermediate growth for which it is equal to $1$. I have to say that my knowledge about groups of intermediate growth is very little and I just tried to apply Corollary 1.3 in http://arxiv.org/PS_cache/arxiv/pdf/1108/1108.0262v1.pdf, but, as already observed by Martin, it is not strong enough to give an example of groups of intermediate growth whose $\overline\zeta_k(G)=1$. Particular question: Is there an example of a group of intermediate growth for which $\overline\zeta_k(G)=1$, for all $k$? Thanks in advance, Valerio REPLY [6 votes]: It seems that if $\bar \zeta_k > 1$ for some $k$ then $\bar \zeta_k > 1$ for all $k$. Also it is clear that $\lim_k \sqrt[k]{\bar \zeta_k} = 1$. My guess is that most known groups of intermediate growth satisfy $\bar \zeta_k=1$, but proving this requires very careful estimates for the size of the balls. Notice that until recently the growth type of no group of intermediate growth had been computed, which requires just "crude" estimates of the growth type. 
You can see the recent papers of Bartholdi and Erschler: https://arxiv.org/abs/1110.3650 and https://arxiv.org/abs/1011.5266, where they have computed the growth type of many groups, but I feel that their estimates are far from what you need to get $\bar\zeta_k=1$.
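To see the polynomial-growth observation from the question numerically, here is a small sketch for $\mathbb{Z}^2$ with generators $\{\pm e_1,\pm e_2\}$, where the word-metric ball satisfies $|B(n)|=2n^2+2n+1$; for each fixed $k$ the ratios $|B(nk+k)|/|B(nk)|$ tend to $1$, consistent with $\overline\zeta_k=1$:

```python
# Ball sizes in Z^2 with standard generators: |B(n)| = 2n^2 + 2n + 1
def ball_size(n):
    return 2 * n * n + 2 * n + 1

def zeta_ratio(n, k):
    # the ratio |B(nk+k)| / |B(nk)| whose limsup over n defines zeta_k
    return ball_size(n * k + k) / ball_size(n * k)

# For fixed k the ratios decrease toward 1 as n grows.
for k in (1, 2, 5):
    print(k, [round(zeta_ratio(n, k), 4) for n in (10, 100, 1000)])
```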
With... - ...Wildlife lovers... don't miss our boardwalks, trails, swamps... alligators, wading birds, manatees, Florida panther, key deer... Where to start? Our fantastic collection of National and State Parks of course! - ...Design and Architecture lovers... go beyond the Art Deco District and check out some up and coming neighborhoods, Wynwood, the Design District... Here are some highlights of Miami's art, design and architectural scene... - ...Foodies... check out the amazing cuisine variations in the area; food tours and walks are a fantastic way to see and taste the city! So read on... Here are some ideas on what to do, how, when, where, how much... to get you started! Swim with Dolphins. Boat Along Wonderful Biscayne Bay. Take An Airboat Ride Through The Everglades. Riding Segways in Miami. Flyboarding in Miami. Scuba Diving in South Florida. - Diving from John Pennekamp Coral Reef Park - Some of our favorite spots: Paddle Boarding: Wild Mangroves, Majestic Skylines, Nature... - Paddle-Boarding in Miami (Coming Soon!) - Kayaking in Miami (Coming Soon!) Send a Tip! Got something to share? Let us know!
1 pc Iris Diaphragm (made of copper / aluminum) These are high-quality items, usually used in camera or projection lenses; they can also be used with some mount adapters for adjusting the aperture. We have different sizes, and this one is of dimension: outside diameter: approx. 37.2mm thickness: approx. 5.5mm Max. aperture diameter: approx. 25mm min. aperture diameter: approx. 1mm Adjustment lever protrudes approx. 10mm from the housing The other sizes we have are: OD (20mm, 30mm, 53mm, 70mm, etc.); if interested, please send me a message Shipping Terms Payment Terms We accept PayPal only (all credit card payments have to be done through PayPal). Payments need to be made within 3 days of sale. Thanks for looking This item has been shown 8131 times.
17-year-old student Mackenzie Wellfare from HSDC Alton in Hampshire is announced as the winner of the National Theatre’s annual playwriting competition for 14–19-year-olds. This year has seen more first drafts of scripts submitted to the competition than ever before, with Mackenzie’s play Perspective selected from over 400 final entries from 74 secondary schools and colleges across the UK. Inspired to write this play to share his own experience of autism as well as others’, Perspective by Mackenzie Wellfare explores the experiences of a teenage boy, Leo, with autism through his conversations with his best friend Shaun. Set in his bedroom, Leo’s big imagination fills the stage as he considers how the world sees him. Perspective was selected from a shortlist of nine plays by a panel of judges including NT’s Head of Play Development Nina Steiger, playwright and screenwriter Beth Steel, playwright and performer Mojisola Adebayo and Jenny Sealey, Artistic Director of Graeae Theatre Company. The play will be performed in a full production by professional actors at the National Theatre and will be streamed to participating schools across the UK to watch in July, alongside rehearsed readings of seven shortlisted plays as part of the digital festival of new writing. Following the production, Mackenzie will also take part in a live streamed Q&A about his play alongside the director. The digital festival will also showcase the work of a group of D/deaf students from Eastbury Community School’s Alternative Resource Provision. The students have taken part in playwriting workshops facilitated by Jenny Sealey, Artistic Director of Graeae Theatre and have developed scenes exploring their experiences of the world. A selection of these scenes entitled Conversation Breakdown will be directed by Jenny Sealey and performed as part of the rehearsed readings. Mackenzie Wellfare said, “I’m so excited to have won! 
To have my play performed is just unbelievable and I can’t wait to see how it turns out! I want to show a perspective of Autism that I believe hasn’t been truly shown in modern media, and yet which some people experience every day of their lives.” Jenny Sealey, Artistic Director of Graeae Theatre and member of the judging panel said, “Perspective has a matureness in its unpacking of the heart stuff. It’s an important play, beautifully simple in its mass of complexity.” This year the programme was delivered digitally through workshops with professional writers, a playwriting course and the opportunity to watch NT productions for free online, as well as a pre-recorded masterclass on writing for audio with Audible, the official Audio Partner of New Views. Students wrote their own original 30-minute plays, exploring topical issues from mental health and the pandemic to politics and relationships. Applications to take part in New Views 2021/2022 are now open. To register please visit
and humans. Animals, when adjusted for human size and weight, make the equivalent of 5-15 grams of Vitamin C a day, mostly in their livers when stress free. Production can more than double when the animal is distressed – the human equivalent being 10-30 grams of Vitamin C a day (this is 10,000-30,000mg). This makes the RDA of just 45mg look horribly inadequate.

Why Do A Flush?

A Vitamin C flush gives our system very, very high doses of vitamin C to the point where it totally saturates the system – and in the process, brings the immune system up and supports rapid healing. This is ideal anytime you might be feeling run down, are recovering from illness or trauma/surgery, or your immune system simply needs a boost. There are many factors which create a greater need for vitamin C:
- increased oestrogen (are you on the pill or HRT?)
- stress
- infection
- injury

Using a qualitative EEG, researchers have found vitamin C to have an anti-anxiety effect. They also discovered that less than 1000mg of vitamin C could result in an increase in cholesterol. A 2001 study published in The Lancet found that people with the lowest blood levels of vitamin C were two times more likely to die of heart disease as those with the highest levels. Interestingly, vitamin C also significantly slows glycation – the process by which glucose binds to proteins, interfering with their normal function.

Vitamin C Flush

The Vitamin C Flush involves taking as much Vitamin C as your body can tolerate. When you’ve reached ‘bowel tolerance’, or the point at which you can no longer absorb Vitamin C from your gut, you will experience an enema-like evacuation of liquid from your bowel. For this reason it is important you choose a day to do the flush when you can remain at home, near the bathroom.
- Begin the cleanse first thing in the morning before you eat (you can, however, eat normally throughout the day)
- Take 1000mg of vitamin C every hour – mix into half a glass of water (or diluted fruit juice) and drink/sip it over the course of a few minutes
- Record each time you take a dose – repeat this every hour, on the hour*

I like to do a Vitamin C Flush at the beginning of winter to load my immune system up ready for cold and flu viruses, but you could do this biannually if you wished, and for those needing immune support/repair you could consider doing it monthly or bimonthly (under the supervision of a Natural Healthcare Practitioner). As your health improves you may notice that, each time you do the Vitamin C Flush, you need less total Vitamin C to reach bowel tolerance. It is interesting to note that the amount of Vitamin C which can be tolerated orally without producing diarrhoea appears to increase somewhat proportionately to how unwell you are (1,2,3,4). Most people will reach bowel tolerance at around 10-15 grams but, if the same person is acutely ill with a mild cold for example, that tolerance may increase to approximately 50 grams per 24 hours. A severe cold can increase tolerance to 100 grams; an influenza, even up to 150 grams; and mononucleosis or viral pneumonias, to as much as 200 grams per 24 hours. Large doses of Vitamin C should always be given in divided doses, and at these higher amounts may need to be given as frequently as hourly.

Vitamin C Flush Tips
- I personally use this one as it is buffered, super potent (1000mg in just under 1/2 tsp) and a delicious fizzy orange flavour
- Keep your water intake up as, when bowel tolerance is reached, you will lose some fluid through the bowel
- You may get a little bloated, or even a bit gassy towards the end – keep going until you actually pass a watery stool
- If you have been or are unwell, you may consider doubling your vitamin C dose each hour i.e.
consume 2000mg of vitamin C every hour

After The Flush

Sudden discontinuation of megadoses of Vitamin C is believed to cause ‘rebound scurvy’. Although I have only seen this once myself, it is best to be safe and gradually reduce your Vitamin C intake over several days or weeks. Here’s what I recommend the day after ‘The Flush’:
- Work out your bowel tolerance total. Look at what total amount of Vitamin C you consumed on the day of the flush to reach bowel tolerance*. E.g. if you took 1000mg of Vitamin C every hour for 6 hours and reached bowel tolerance on the 6th dose – your total daily consumption to bowel tolerance was 6000mg
- Take a reduced dose. The following day you want to take 75% of the total daily dose you got to before losing bowel tolerance and take this on the day after the flush in 2-3 divided doses. Using the example above, 75% of 6000mg is 4500mg which would equate to 3 doses of 1500mg each over the course of the day after the flush
- Gradually reduce your dose. Each day take a total of 1000mg less than you did the day before. So the second day after ‘The Flush’ (using the calculations above) you would take a total of 3500mg in divided doses. The following day, a total of 2500mg and so on – until you are down to 1000mg a day (this is an ideal maintenance dose)

Warning

You can’t ‘overdose’ on vitamin C per se. Any vitamin C that is unable to be absorbed is simply passed out through the bowel. However, there are some people for whom a vitamin C flush is not advised:
- Do not do a Vitamin C flush if you have excess iron in your blood (haemochromatosis) – Vitamin C increases the uptake of iron from your gut and we don’t want this to happen
- Do not do a Vitamin C flush if you have Gilbert’s disease (you will know if you have)
- If you suffer from IBS or any inflammatory bowel disease, please speak to your healthcare practitioner about whether a Vitamin C Flush will be suitable, and what form of Vitamin C would be best

Thanks for the tips Amie!
I was just telling Tim last night that he needed to do this as he’s got an onset of the flu (or a severe cold) coming on! Perfect timing x

My pleasure darling – I hope he gets better soon! X

Hi, I have swollen lymph nodes and was wondering if it was safe to do the flush. I have them in the groin and I feel it is continuing to spread.

Hi there, I’m sorry to hear you’re unwell; have you been to see your Dr yet? There are a few things that may be causing the swelling in your lymph nodes and it is very important that you have that medically assessed as soon as possible. If the swelling is a secondary result of an infection, a vitamin C flush (amongst other support, not just on its own) will be helpful.

Thank you so much for this Amie, our entire household has been fighting a viral throat infection type thing; two of the six in the household have been on antibiotics and still haven’t gotten over it. So this weekend we will all be doing the flush x

Oh nooooo – that’s not good news Erica! I hope the flush is going well and that you all bounce back soon. I’m sure you would already have organised this but don’t forget to get straight onto some probiotics as soon as the antibiotics are finished. You might also want to consider some Zinc and Vitamin D too. Wishing you all a speedy recovery X

Thank you Amie for this article. I’ve been experimenting with high doses of Vitamin C a few times, but I’ve just found your website, and discovered that it is contraindicated with Gilbert Syndrome. Can you please explain why? I have Gilbert Syndrome and I didn’t notice anything strange while taking the Vitamin C to bowel tolerance. Also Gilbert Syndrome doesn’t give me any symptoms, just high bilirubin in a blood test. Thank you!

Hi Giuditta – the issue is only that high doses of vitamin C may give a false negative reading of urinary urobilinogen if you’re trying to diagnose it. Having already been diagnosed with Gilbert Syndrome, optimal vitamin C levels will only be useful for you!
Dear Doctor Amie, Thank you for your immense insight on the power of this molecule, Ascorbic Acid. Would you please do s

Hi Dr Hunter – thank you for your kind comment. I’m afraid your message appears to have been cut off so if there was anything further you’d like me to respond to please reply 🙂

Question re Vit C flush – can it be used (adjusted) as pre-colonoscopy prep? It has to be better tasting than the prescribed preps, which are so bad they put you off having regular checks even though they are necessary.

Hi Linda – it would be best to check with your specialist but I am almost certain it wouldn’t do the same job. There is another version of the flush which is designed to induce a laxative effect (and as such you will get bowel clearance); however it works by creating an osmotic gradient. What I mean by that is water is drawn into the bowel, creating the laxative effect. I don’t believe, however, that it would soften any hardened/trapped faecal matter (and move it along) like a prep kit does. The preparation that they use also has a stool softener and a peristaltic in it, which guarantees a complete emptying of the gut. This is absolutely necessary in order for a colonoscopy to be performed. Sorry I don’t have better news…

Hello, thanks for your article. Can the flush be done with pill-form vitamin C?

Hi Amy, great question. Given the time it would take for the tablet/capsule to dissolve and then be absorbed I don’t think it would work anywhere near as well – sorry 🙁

Thanks so much for the information! I did a flush today and then happened to get a call from my doctor with blood work. My iron levels are elevated. I don’t want to stop the vitamin C immediately, as it took me 50 grams to reach bowel tolerance. However, I don’t want to develop scurvy either. Do you have any recommendations?

Hi Chris, I can’t give you individual advice as I don’t know your case but my initial questions are:
– what did your Dr say when you asked them about it?
– what day are you at on the vitamin C flush (what daily dose are you taking)?
– how elevated are your iron levels, and has the cause been identified?
– what treatment has been proposed, if any?

My first recommendation, if you haven’t done this already, is speak to your Dr. In the meantime, you could potentially reduce your daily vitamin C levels by 2-3g/day (rather than the typical 1g) to get back to baseline quicker whilst reducing the potential for rebound scurvy. Further to that, vitamin C does not increase iron levels per se but it does increase iron absorption from iron-containing foods in the gut (by 30% according to the evidence I’ve seen), so if you were not consuming any dietary iron it would not boost your iron levels any further. Were you advised to avoid all iron-containing foods? These are worth looking at too: – – – Wishing you all the best.

Amie, I see you also say to start w/ 1000 mg which is .2 tsp. How does one measure .2 tsp? I see 1/4 tsp is 1250 mg and that is easy enough to measure, so why do most sites recommend 1000 mg?

Hi Sue
The amount of powdered vitamin C to deliver 1000mg will depend on what brand you buy. I personally use BioCeuticals Ultra Potent-C and 1 level metric teaspoon gives 4g of the powder, of which 2.45g is vitamin C (or 2450mg). For convenience sake, I just use half a teaspoon each dose knowing that I’m actually getting 1225mg each time. The key is to use a close approximation of 1000mg per hour, and to ensure each hourly dose is identical to the last. This means, for example, you may end up calculating your overall daily intake (and subsequent titration down over the following days) in half teaspoons. Hope that makes sense?

I only suspect I might have IBS – would it be okay to do this? What could I potentially expect if I do have IBS?

Hi Debbie
If you are having gut issues it is best to have that investigated and diagnosed before introducing anything out of the ordinary.
If your gut is inflamed, for whatever reason, using high dose vitamin C (especially if it contains the ascorbic acid form) may potentially have an aggravating effect.

Can one use this flush if they have an autoimmune disease?

Hi Rose
If you have an autoimmune condition, you will need to speak with your healthcare practitioner about whether the vitamin C flush is appropriate for you. They will be able to take into account your medical history, any medications, and unique physiological make-up to determine whether this is suitable for you or not.
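For anyone who wants the post-flush taper arithmetic from the "After The Flush" section spelled out, here is a minimal Python sketch of the schedule (the function name and defaults are my own; this illustrates the arithmetic only, not medical advice):

```python
def taper_schedule(total_mg, step_mg=1000, maintenance_mg=1000):
    """Day-by-day vitamin C totals after a flush.

    Day 1 is 75% of the bowel-tolerance total; each later day
    drops by `step_mg` until the `maintenance_mg` floor is hit.
    """
    doses = []
    dose = int(total_mg * 0.75)
    while dose > maintenance_mg:
        doses.append(dose)
        dose -= step_mg
    doses.append(maintenance_mg)
    return doses

# The post's example: bowel tolerance reached at 6000mg total.
print(taper_schedule(6000))  # [4500, 3500, 2500, 1500, 1000]
```

This reproduces the figures in the post: 4500mg the first day, then 3500mg, 2500mg, and so on down to the 1000mg maintenance dose.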
Pennsylvania Northeast Rural Fire Assistance Grants 2004

The U.S. Fish and Wildlife Service's Rural Fire Assistance program awarded 38 grants worth $149,401 to fire departments in Maine, New Hampshire, Massachusetts, Rhode Island, Connecticut, New Jersey, New York, Pennsylvania, Delaware, and Virginia. A total of 172 fire departments within the northeast region have been identified as eligible to participate in the RFA program.
The Electoral Commission (EC) has expressed optimism that it will be able to declare the results of this year’s elections in less than 24 hours after polls close at 5 PM on December 7. The EC had resolved to release the results as early as possible. Speaking to TV3 on Tuesday December 1, the Director of Electoral Services at the elections management body, Dr Serebour Quaicoe, explained the Commission’s position. “A well organised party can get the results,” he said. He added that “no political party can declare results. They can project their results. The media can tell us that genuinely they have gotten all the polling station results. So they can project who is winning a constituency.”

By Edem Tutu|3news.com|Ghana
TITLE: Probability of exactly one defective item in a sample of three.... QUESTION [0 upvotes]: In order to determine the quality of a shipment of 20 parts, a sample of 3 items is randomly selected without replacement from the shipment. Four of the 20 items in the shipment are actually defective. Let Y be a random variable representing the total number of defective items in the sample. Then P(Y = 1) is This is what I got so far, is this correct? $P(Y=1) = \frac{C^4_1C^{20-4}_{3-1}}{C^{20}_3} $ $ = \frac{4 \cdot 120} {1140} \approx 0.421$ REPLY [0 votes]: ...a sample of 3 items is randomly selected without replacement... This can be modeled using a hypergeometric distribution (because there is no replacement; therefore dependent) and there is only success and failure. Using this general formula: $P(X = k) = \dfrac{{a \choose k} {n - a \choose r - k}}{n \choose r}$ where there are $n$ items, $a$ of which are successes (defective). A random sample of $r$ items are taken, $k$ of which are successes. So, I say the variables correspond to the following values: n = 20 a = 4 r = 3 k = 1 $P(X = 1) = \dfrac{{4 \choose 1} {20 - 4 \choose 3 - 1}}{20 \choose 3}$ $P(X = k) = \dfrac{4 \times 120}{1140}$ $P(X = k) = \dfrac{8}{19}$
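The OP's computation is correct. As a quick sanity check, the hypergeometric formula from the reply can be evaluated exactly with a few lines of Python (this verification script is my own addition, not part of the original answer):

```python
from fractions import Fraction
from math import comb

def hypergeom_pmf(n, a, r, k):
    """P(X = k): probability of k successes when drawing r items
    without replacement from n items, a of which are 'successes'
    (here, defective parts)."""
    return Fraction(comb(a, k) * comb(n - a, r - k), comb(n, r))

p = hypergeom_pmf(n=20, a=4, r=3, k=1)
print(p)         # 8/19
print(float(p))  # 0.42105... ≈ 0.421
```

The exact value 8/19 matches both the OP's decimal approximation 0.421 and the reply's reduced fraction.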
Maidenhead by Mortimer Menpes, R.I. Watercolor. Source: The Thames, 132.

This reach at Maidenhead is one of the most popular on the river. On each side of the wide stone bridge half a mile below the lock, Taplow and Maidenhead face one another. But though popular and easy of access, being on the Great Western Railway, which runs quick trains at frequent intervals, both stations are a little distance from the river. The name Maidenhead is derived from Maiden-hithe, or wharf, as a large wharf for wood at one time stood near the bridge. The bridge itself, though a modern fabric, is of ancient lineage, for we know that in 1352 a guild was formed for the purpose of keeping it in repair. It may be remembered that bridges at that time were considered works of charity, and competed with masses and alms as a means of doing good posthumously.

Another blissed besines is brigges to make,
That there the pepul may not passe [die] after great showres.
Dole it is to drawe a deed body oute of a lake,
That was fulled in a fount-stoon, and a felow of ours.

And in Piers Plowman:

Therewith to build hospitals, helping the sick,
Or roads that are rotten full rightly repair,
Or bridges, when broken, to build up anew.

The main road between London and Bath, a well-known coaching road, runs this way, and a very good road it is. The railway bridge crosses below the road, but it is of brick with wide arches, and is by no means unsightly. Between the two is the River-side club, where a band plays on the smooth green lawn in the season, and the smartest of smart costumes are the rule. Near here also is Bond's boat-house and a willow-grown islet. There are numbers of steps and railings and landing stages, all painted white, and these give a certain lightness to the scene. Close by the bridge are several hotels, of which the oldest established is Skindle's, low-lying and creeper-covered, on the Taplow side.
Boats for hire line the banks everywhere, for many cater for the wants of the butterfly visitor, out of whom enough must be taken in the season to carry the establishments on through the winter; and the river visitor is essentially a butterfly. Few know the charms of the Thames in the winter, when, in an east and west stretch, the glowing red ball of the sun sinks behind dun banks of mist; when the trees are leafless, and the skeleton branches are outlined against a pale clear sky; when a touch of frost is in the air, and the river glides so stilly that it almost seems asleep. [132-33]
Weekend Box Office Report: October 20-22, 2017 Tyler Perry's Boo 2 scares up a win! Like last Halloween, Tyler Perry put on his wig and lipstick for audience amusement and/or terror to make TYLER PERRY'S BOO 2! A MADEA HALLOWEEN #1 this weekend with an estimated opening of $21.6 million! The spooky sequel (which marks writer-director Perry's eighth time in the dress as Madea) came out a bit lower than the $28.5 million start of last October's BOO!, which went on to a domestic total of $73.2 million. The PG-13 costume comedy came with a reported cost of $25 million. Critics didn't find the hilarity in Perry's latest theatrical haunting, giving the follow-up a dreadful 8% average on Rotten Tomatoes. Opening in second place was the new big-budget disaster movie GEOSTORM with $13.3 million. The feature directing debut of INDEPENDENCE DAY writer/producer Dean Devlin reportedly cost $120 million to create the movie's world-wrecking weather satellite system. The Gerard Butler sci-fi thriller also pulled down $49.6 million from international audiences for a $62.9 million worldwide opening weekend. Critics thought the PG-13 destruction event was a catastrophe, giving it an average of 13% on Rotten Tomatoes. You can find the JoBlo review HERE. The horror movie HAPPY DEATH DAY moved to third place with $9.3 million, a deep cut of 64% from its first place debut last week. The PG-13 thriller has a ten-day domestic total of $40.6 million and a worldwide total of $53.5 million (on a reported cost of just $4.8 million). Director Denis Villeneuve's BLADE RUNNER 2049 was in fourth with $7.1 million, bringing the R-rated sequel to a domestic total of $74 million. The follow-up to Ridley Scott's seminal 1982 sci-fi movie has a worldwide total of $194 million on a reported price tag of $155 million (it opens in China next week). In fifth place was the new fact-based drama ONLY THE BRAVE with an opening of $6 million. 
Despite talent like Josh Brolin and Jeff Bridges, the depiction of elite firefighters didn't seem to spark with moviegoers. The thriller from TRON LEGACY director Joseph Kosinski cost a reported $38 million. Critics warmed to the true story of the heroic "Granite Mountain Hotshots", giving the movie a 90% average on Rotten Tomatoes. The JoBlo review is HERE. The Jackie Chan thriller THE FOREIGNER was in sixth place with $5.4 million, dropping 58% from its opening last weekend. The R-rated action-drama from CASINO ROYALE director Martin Campbell has a ten-day domestic total of $22.8 million and a worldwide total of $111 million (on a cost of $35 million). IT was lurking in seventh place with $3.5 million. The R-rated Stephen King adaptation has grabbed a monstrous $320 million domestic and $651 million worldwide after seven weekends in theaters. Opening in eighth place was the new R-rated mystery THE SNOWMAN with $3.4 million. The Michael Fassbender thriller, which cost a reported $35 million, has made an additional $19.2 million overseas. Critics froze out the serial killer story (from LET THE RIGHT ONE IN director Tomas Alfredson) with a frigid 9% average on Rotten Tomatoes. The JoBlo review is HERE. Tom Cruise's R-rated crime-drama AMERICAN MADE and the espionage satire sequel KINGSMAN: THE GOLDEN CIRCLE closed out the list. Outside the chart, VICTORIA AND ABDUL, MY LITTLE PONY: THE MOVIE and THE LEGO NINJAGO MOVIE waved goodbye. The religious drama SAME KIND OF DIFFERENT AS ME opened on 1360 screens for $2.5 million, while in limited release, the Colin Farrell/Nicole Kidman horror indie THE KILLING OF A SACRED DEER had a decent $28k per-screen average. Next weekend revives the SAW series for Halloween season with JIGSAW, Miles Teller joins the military in THANK YOU FOR YOUR SERVICE, and Matt Damon and Julianne Moore get into mischief in the George Clooney-directed dark comedy SUBURBICON. What is your favorite disaster movie? VOTE HERE! 
The release of Cradle-7 brought a number of new options within WorkBench views. Expandable views, cross reference direction, showing multiple attributes in a cell, triggering frequently used commands and more. Earlier versions of Cradle allowed you to expand the view of a query when using Tree view. This feature has been added to both the Table view and the Document view within WorkBench. To expand and see the linked items, click the row number for the item of interest. A linked item can also be expanded to show multiple layers of traceability from the original item of interest. Once expanded the linked items show dots to represent the number of levels the view has expanded. One dot indicates that the item is directly connected. Two or more show the number of levels removed from the original item. Within a view you can now show the direction of a cross reference. This simple indicator provides a quick visual for the direction of relation without additional clicks. To setup the view to show link direction, choose the Data type of Xref Direction, provide an appropriate field name and save the view. The resulting view will contain a row with green arrows indicating the direction of the link. Customers have expressed an interest in being able to show multiple values within a single cell on a query. Select View Details on the view you wish to change. In an appropriate cell choose the Data type Multiple. This will show a button for Multiple Data Settings which is where you choose which items to include in this field. Choose the items you wish to show and the separator that will provide division of the data within the field. Provide a title for the field and save appropriately. The resulting view will contain the collected data. In order to limit the number of clicks required for regularly used actions, 3SL has added a feature which allows you to click and select these actions directly within a view. 
The example below contains commands to show item history and to create a new linked system requirement. In order to define the available commands, open View Details and select the View Properties tab. Select New and choose an available command for inclusion in the list. The available commands are: After choosing the command and appropriate sub settings you can save the view. Different views can have different sets of commands as needed to facilitate your process. The resulting view shows an arrow next to each item. Clicking the icon brings up the menu of available commands. There are a number of additional enhancements to views within WorkBench. Please see the Release Notes for additional details.
Having problems with ARDUINO DUEMILANOVE? Answered.

Talked to Sparkfun about it; even though it was a month out of warranty, they replaced it for free!!!!!!!!!!

Yes, I have; I even attempted the Java update process (get info, select 32-bit checkbox, etc.)

I just happened to click the link back to this message. When you reply to a person, you should press the reply button under their message, so they know you've responded. Had I not been bored, I would have never been back to this thread without a notifier (and that notifier would never have come, since you selected "answer" instead of Reply). Anyway... The best I can offer at this point is to either contact the people from whom you purchased the Arduino or wait for a more experienced Arduino aficionado to see your query. I know a bit about microcontrollers, but I don't have any direct experience with this product. Personally, as one who's worked professionally with a variety of microcontrollers, industrial computers, amplifiers, etc., I'd strongly urge you to contact the mfg. It's prolly something silly, but it's always best to get assistance from those who built it if possible. Best wishes.

Sorry about that. I bought it from Sparkfun Electronics... ... free day January 7th! I contacted them and they will be able to replace it if I can't get it to work. I've tried to reinstall the software on Windows Vista; it took a day to install, and the next day an error message pops up and says the device is taking too much power, the USB output will now shut off. Weird huh?

Hmm... yes... It sounds like there's a short of some kind in the Arduino or in between the Arduino and the computer's USB port, or possibly that the USB port itself has been damaged. You've tried plugging it into other ports?
And have you tried plugging other USB devices into the USB port to make sure the USB port still works?

Have you tried unplugging power from the Arduino, waiting 10-20 seconds, then repowering the device prior to transmission?
Keys to Success Reader
- List Price: $30.00
- Binding: Paperback
- Edition: 1
- Publisher: Prentice Hall
- Publish date: 04/01/1999
- ISBN: 9780130107992
- ISBN10: 0130107999

Description: This reader is based upon several areas impacting new readers/students – student success general areas, success in specific disciplines, and topics for those who are career changers. Chapters are written by graduating seniors, business professionals, student affairs professionals and faculty.
When I wrote this book, little did I know about the mixed reaction of readers I would be facing. Some loved it, some adored it and some expected more. And some called it a Bollywood IshStyle Romance. And that is exactly what I wanted to do with this novel. Who doesn't need that magical love in their life?

READER QUESTION: WHY WAS ARYAN SO CONFUSED ABOUT SHEFALI?

Have you ever fallen in love? When you know that the other person is just not right for you - yet your heart beats faster for him or her. When you know that the whole society will be against your love, yet you brave every antagonist who comes your way with all those feelings bubbling inside you. But the worst enemy of your love is you - yourself. Love is confusing - especially when your circumstances refuse to help you out. And Aryan suspected her of stealing something very valuable to his family - yet love crept into his heart. I know, I know, the next question will be how can you love a person you don't trust. Will answer that next time. If you have a question, do PM me on my author page or email me at rubinaramesh1973@gmail.com

GRAB A COPY - PAPERBACK COMING SOON

That sure is interesting. Good one Rubina Ramesh. And oh yeah, love can be quite confusing, especially to an alpha male such as Aryan. Losing one's independence could be mind blowing. Loved the book, I must say.

You are too kind :) But more than Aryan, I feel every person who falls in love - initially they are very confused about what the future might hold. No?
A homeowner in Calais was working in his yard when he found a stone urn buried there. Vermont state police say the man found a plastic bag buried in his yard that contained a number of items, including the gray stone urn. Police say they believe the remains are those of a man, though they didn't explain what led to that conclusion. They suspect the contents of the bag were buried sometime between 1998 and 2000. Anyone with information is asked to contact state police in Middlesex at 802-229-9191.

PO Box 4508 Burlington, VT 05406-4508
Primary Phone: 802-652-6300
Primary email: channel3@wcax.com
Climb on. The sun might be beating down on your back while you boulder, but the La Sportiva Trucker Stripe Hat makes the hottest days more manageable so you can continue climbing comfortably.
- Cotton
- Item #LSP001Y

La Sportiva Trucker Stripe Hat
- Familiarity: I've used it several times
- Fit: True to size
Great hat. Looks just like the picture.

Mean business at the Crag
My son had this hat on when we got to the crag. Instant credibility even when you are two.

What are the size guidelines for the small/medium vs. large/x-large options of this hat? I usually wear a 7 1/4 (or a tad larger) in fitted hats. What size should I go with in this one?

Hey Evan, I called up Sportiva to see if they had a chart for their trucker hats and they don't really have any set-in-stone numbers to follow. There is a lot of overlap between sizes and they pretty much just expect you to go by the idea of having a large or small head. If you are kinda an average schmo either would probably work for you.

Matches great with his dino sweater! What a great pic, and welcome to the Backcountry Community!
Best psychologist in Sargodha

Farwa Batool Naqvi - Psychologist - Sargodha
- MSc (Psychology), Advance Diploma in Clinical Psychology, Diploma in Clinical Psychology
- Less than 1 Year of Experience
22 minutes wait
99% of happy patients

Practice Locations
- opposite Rehamt-ul-lilalamin Park, Satellite Town, Sargodha, Punjab, Pakistan

Reviews
"The doctor was very cooperative"

About Ms. Farwa Batool Naqvi
Ms. Farwa Batool Naqvi is one of the top certified Psychologists in Sargodha, holding the degrees of MSc (Psychology), Advance Diploma in Clinical Psychology, and Diploma in Clinical Psychology. Currently, she is practising at Aleez Neuro Psychiatric Centre. Ms. Farwa Batool Naqvi is available for consultation via Healthwire.pk. You can easily book an appointment with Ms. Farwa Batool Naqvi by calling (+042 32500989) or by clicking on the “Book an Appointment” option.

Education
- Diploma in Clinical Psychology
- Advance Diploma in Clinical Psychology
- MSc (Psychology)

Services

Frequently Asked Questions
(Note: Healthwire does not charge any fee for online booking appointments)
Ms. Farwa Batool Naqvi regularly sees patients with conditions and health issues related to Psychology. You may consult Ms. Farwa Batool Naqvi after specifying your visit reasons. You can simply choose a preferred slot from Ms. Farwa Batool Naqvi’s availability and book an appointment through Healthwire.pk or call us at 042 32500989/021 37130261. Ms. Farwa Batool Naqvi is practicing at Aleez Neuro Psychiatric Centre.
0.020116
\begin{document} \title[Quasi $I-$nonexpansive mappings] {Weak and strong convergence of an implicit iterative process with errors for a finite family of asymptotically quasi $I-$nonexpansive mappings in Banach space} \author{Farrukh Mukhamedov} \address{Farrukh Mukhamedov\\ Department of Computational \& Theoretical Sciences \\ Faculty of Sciences, International Islamic University Malaysia\\ P.O. Box, 141, 25710, Kuantan\\ Pahang, Malaysia} \email{{\tt far75m@@yandex.ru}} \author{Mansoor Saburov} \address{Mansoor Saburov\\ Department of Computational \& Theoretical Sciences \\ Faculty of Science, International Islamic University Malaysia\\ P.O. Box, 141, 25710, Kuantan\\ Pahang, Malaysia} \email{{\tt msaburov@@gmail.com}} \begin{abstract} In this paper we prove the weak and strong convergence of the implicit iterative process with errors to a common fixed point of a finite family $\{T_j\}_{j=1}^N$ of asymptotically quasi $I_j-$nonexpansive mappings as well as a finite family $\{I_j\}_{j=1}^N$ of asymptotically quasi-nonexpansive mappings in the framework of Banach spaces. \vskip 0.3cm \noindent {\it Mathematics Subject Classification}: 46B20; 47H09; 47H10\\ {\it Key words}: Implicit iteration process with errors; a finite family of asymptotically quasi $I-$nonexpansive mappings; asymptotically quasi-nonexpansive mapping; a common fixed point; Banach space. \end{abstract} \maketitle \section{Introduction} Let $K$ be a nonempty subset of a real normed linear space $X$ and $T:K\to K$ be a mapping. Denote by $F(T)$ the set of fixed points of $T$, that is, $F(T) =\{x\in K: Tx = x\}$. Throughout this paper, we always assume that $F(T)\neq\emptyset$.
Now let us recall some known definitions. \begin{definition} A mapping $T:K\to K$ is said to be: \begin{itemize} \item[(i)] nonexpansive, if $\|Tx-Ty\|\le\|x-y\|$ for all $x,y\in K$; \item[(ii)] asymptotically nonexpansive, if there exists a sequence $\{\lambda_{n}\}\subset[1,\infty)$ with $\lim\limits_{n\to\infty}\lambda_{n}=1$ such that $\|T^nx-T^ny\|\le\lambda_n\|x-y\|$ for all $x,y\in K$ and $n\in\bn$; \item[(iii)] quasi-nonexpansive, if $\|Tx-p\|\le\|x-p\|$ for all $x\in K,\ p\in F(T)$; \item[(iv)] asymptotically quasi-nonexpansive, if there exists a sequence $\{\mu_n\}\subset [1,\infty)$ with $\lim\limits_{n\to\infty}\mu_n=1$ such that $\|T^nx-p\|\le\mu_n\|x-p\|$ for all $x\in K,\ p\in F(T)$ and $n\in\bn$. \end{itemize} \end{definition} Note that from the above definitions, it follows that a nonexpansive mapping must be asymptotically nonexpansive, and an asymptotically nonexpansive mapping must be asymptotically quasi-nonexpansive, but the converse does not hold (see \cite{GK2}). If $K$ is a closed nonempty subset of a Banach space and $T:K\to K$ is nonexpansive, then it is known that $T$ may not have a fixed point (unlike the case when $T$ is a strict contraction), and even when it has one, the sequence $\{x_n\}$ defined by $x_{n+1} = Tx_n $ (the so-called \emph{Picard sequence}) may fail to converge to such a fixed point. In \cite{[Browder65]}-\cite{[Browder67]} Browder studied the iterative construction of fixed points of nonexpansive mappings on closed and convex subsets of a Hilbert space. Note that for the past 30 years or so, the study of iterative processes for the approximation of fixed points of nonexpansive mappings and of some of their generalizations has been a flourishing area of research for many mathematicians (see \cite{GK2},\cite{Chidumi} for more details). In \cite{[DaizMetcalf67]} Diaz and Metcalf studied quasi-nonexpansive mappings in Banach spaces.
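To see concretely why quasi-nonexpansivity is strictly weaker than nonexpansivity, one may keep in mind the following standard illustrative mapping (an added example, not taken from the cited works): on $K=X=\mathbb{R}$ put

```latex
% A standard example: quasi-nonexpansive but not nonexpansive on K = X = \mathbb{R}.
Tx=
\begin{cases}
\dfrac{x}{2}\,\sin\dfrac{1}{x}, & x\neq 0,\\[2mm]
0, & x=0.
\end{cases}
% Here F(T)=\{0\} and |Tx-0|\le |x|/2\le |x-0|, so T is quasi-nonexpansive;
% but T'(x)=\tfrac12\sin\tfrac1x-\tfrac{1}{2x}\cos\tfrac1x is unbounded near 0,
% so T fails to be Lipschitz with constant 1, i.e. it is not nonexpansive.
```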
Ghosh and Debnath \cite{[GhoshDebnath]} established a necessary and sufficient condition for convergence of the Ishikawa iterates of a quasi-nonexpansive mapping on a closed convex subset of a Banach space. The iterative approximation problems for nonexpansive mappings, asymptotically nonexpansive mappings and asymptotically quasi-nonexpansive mappings were studied extensively by Goebel and Kirk \cite{[GeobelKrik]}, Liu \cite{liu}, Wittmann \cite{[Wittmann]}, Reich \cite{R}, Gornicki \cite{[Gornicki]}, Shu \cite{[Shu]}, Shoji and Takahashi \cite{ST}, Tan and Xu \cite{[TanXu]} et al. in the settings of Hilbert spaces and uniformly convex Banach spaces. There are many methods for approximating fixed points of a nonexpansive mapping. Xu and Ori \cite{XO} introduced an implicit iteration process to approximate a common fixed point of a finite family of nonexpansive mappings in a Hilbert space. Namely, let $H$ be a Hilbert space, $K$ be a nonempty closed convex subset of $H$ and $\{T_j\}_{j=1}^N: K\to K$ be nonexpansive mappings. Then Xu and Ori's implicit iteration process $\{x_n\}$ is defined by \begin{eqnarray}\label{xo} \left\{ \begin{array}{cc} x_0\in K,\\ x_n = (1-\a_n)x_{n-1}+ \a_nT_nx_n;\\ \end{array} \right. \ \ n\geq 1 \end{eqnarray} where $T_n = T_{n (mod N)}$, $\{\a_n\}$ is a real sequence in $(0, 1)$. They proved the weak convergence of the sequence $\{x_n\}$ defined by \eqref{xo} to a common fixed point $p\in F =\cap_{j=1}^N F(T_j)$. In 2003, Sun \cite{[Sun]} introduced the following implicit iterative sequence $\{x_n\}$ \begin{eqnarray}\label{sun} \left\{ \begin{array}{cc} x_0\in K,\\ x_n = (1-\a_n)x_{n-1}+ \a_nT_{j(n)}^{k(n)}x_n;\\ \end{array} \right. \ \ n\geq 1 \end{eqnarray} for a finite set of asymptotically quasi-nonexpansive self-mappings on a bounded closed convex subset $K$ of a Banach space $X$ with $\{\a_n\}$ a sequence in $(0, 1)$.
Here $n =(k(n)-1)N + j(n)$, $j(n)\in\{1, 2,\dots,N\}$, and Sun proved the strong convergence of the sequence $\{x_n\}$ defined by \eqref{sun} to a common fixed point $p\in F =\cap_{j=1}^N F(T_j)$. There are many papers devoted to the implicit iteration process for a finite family of asymptotically nonexpansive mappings and asymptotically quasi-nonexpansive mappings in Banach spaces (see for example \cite{CTL,CS,G,GL,liu,ZC}). On the other hand, there are many concepts which generalize the notion of a nonexpansive mapping. One of such concepts is $I$-nonexpansivity of a mapping $T$ (\cite{Shah}). Let us recall some notions. \begin{definition} Let $T:K\to K$, $I:K\to K$ be two mappings of a nonempty subset $K$ of a real normed linear space $X$. Then $T$ is said to be: \begin{itemize} \item[(i)] {\it $I-$nonexpansive}, if $\|Tx-Ty\|\le\|Ix-Iy\|$ for all $x,y\in K$; \item[(ii)] {\it asymptotically $I-$nonexpansive}, if there exists a sequence $\{\lambda_{n}\}\subset[1,\infty)$ with $\lim\limits_{n\to\infty}\lambda_{n}=1$ such that $\|T^nx-T^ny\|\le\lambda_n\|I^nx-I^ny\|$ for all $x,y\in K$ and $n\ge 1$; \item[(iii)] {\it asymptotically quasi $I-$nonexpansive}, if there exists a sequence $\{\mu_n\}\subset [1,\infty)$ with $\lim\limits_{n\to\infty}\mu_n=1$ such that $\|T^nx-p\|\le\mu_n\|I^nx-p\|$ for all $x\in K,\ p\in F(T)\cap F(I)$ and $n\ge 1.$ \end{itemize} \end{definition} \begin{remark} If $F(T)\cap F(I)\neq \emptyset$ then an asymptotically $I-$nonexpansive mapping is asymptotically quasi $I-$nonexpansive. However, there exists a nonlinear continuous asymptotically quasi $I-$nonexpansive mapping which is not asymptotically $I-$nonexpansive. \end{remark} Indeed, let us consider the following example. Let $X=\ell_2$ and $K=\{{\xb}\in\ell_2:\ \|{\xb}\|\leq 1\}$. Define the following mappings: \begin{eqnarray}\label{T} && T(x_1,x_2,\dots,x_n,\dots)=(0,x_1^4,x_2^4,\dots,x_n^4,\dots),\\[2mm] \label{I} &&I(x_1,x_2,\dots,x_n,\dots)=(0,x_1^2,x_2^2,\dots,x_n^2,\dots).
\end{eqnarray} One can see that $F(T)=F(I)=\{(0,0,\dots,0,\dots)\}$. Therefore, from $\sum\limits_{k=1}^\infty x_k^8\leq\sum\limits_{k=1}^\infty x_k^4$ whenever ${\xb}\in K$, using \eqref{T},\eqref{I} we obtain $\|T{\xb}\|\leq\|I{\xb}\|$ for every ${\xb}\in K$. So, $T$ is quasi $I$-nonexpansive. But for ${\xb}_0=(1,0,\dots,0,\dots)$ and ${\yb}_0=(1/2,0,\dots,0,\dots)$ we have $$ \|T({\xb}_0)-T({\yb}_0)\|=\frac{15}{16}, \ \ \ \|I({\xb}_0)-I({\yb}_0)\|=\frac{3}{4} $$ which means that $T$ is not $I$-nonexpansive. Note that in \cite{TG} a weak convergence theorem for $I$-asymptotically quasi-nonexpansive mappings defined on a Hilbert space was proved. Best approximation properties of $I$-nonexpansive mappings were investigated in \cite{Shah}. In \cite{KKJ} the weak convergence of a three-step Noor iterative scheme for an $I$-nonexpansive mapping in a Banach space has been established. Very recently, in \cite{T} the weak and strong convergence of an implicit iteration process to a common fixed point of a finite family of $I$-asymptotically nonexpansive mappings were studied. Let us describe the iteration scheme considered in \cite{T}. Let $K$ be a nonempty convex subset of a real Banach space $X$ and $\{T_j\}_{j=1}^N:K\to K$ be a finite family of asymptotically $I_j-$nonexpansive mappings, $\{I_j\}_{j=1}^N:K\to K$ be a finite family of asymptotically nonexpansive mappings. Then the iteration process $\{x_n\}$ has been defined by \begin{eqnarray}\label{implicitmapT} \left\{ \begin{array}{ccc} x_1\in K,\\ x_{n+1} = & (1-\alpha_n) x_{n}+\alpha_n I_{j(n)}^{k(n)}y_n,\\ y_n = & (1-\beta_n) x_n+\beta_n T_{j(n)}^{k(n)}x_n, \end{array}\right. \ \ n\geq 1 \end{eqnarray} here as before $n=(k(n)-1)N+j(n)$, $j(n)\in\{1,2,\dots,N\}$, and $\{\alpha_n\},$ $\{\beta_n\}$ are two sequences in $[0,1]$. From this formula one can easily see that the employed method is, in fact, not an implicit iterative process; it is rather a kind of modified Ishikawa iteration.
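Returning to the example \eqref{T}--\eqref{I}, one can also check (a short added verification, with $p$ the zero element of $\ell_2$) that $T$ is asymptotically quasi $I$-nonexpansive with the constant sequence $\mu_n\equiv 1$: by induction on $n$,

```latex
% n-th iterates: both maps shift by one coordinate per step and raise entries to a power
T^n({\xb})=(\underbrace{0,\dots,0}_{n},x_1^{4^n},x_2^{4^n},\dots),\qquad
I^n({\xb})=(\underbrace{0,\dots,0}_{n},x_1^{2^n},x_2^{2^n},\dots),
% hence, since x_k^2\le 1 on K gives x_k^{2\cdot 4^n}\le x_k^{2\cdot 2^n},
\|T^n{\xb}-p\|^2=\sum_{k=1}^{\infty}x_k^{2\cdot 4^n}
\le\sum_{k=1}^{\infty}x_k^{2\cdot 2^n}=\|I^n{\xb}-p\|^2 .
```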
Therefore, in this paper we shall extend the implicit iterative process with errors, defined in \cite{[Sun],fk}, to a family of $I$-asymptotically quasi-nonexpansive mappings defined on a uniformly convex Banach space. Namely, let $K$ be a nonempty convex subset of a real Banach space $X$ and $\{T_j\}_{j=1}^N:K\to K$ be a finite family of asymptotically quasi $I_j-$nonexpansive mappings, and $\{I_j\}_{j=1}^N:K\to K$ be a family of asymptotically quasi-nonexpansive mappings. We consider the following implicit iterative scheme $\{x_n\}$ with errors: \begin{eqnarray}\label{implicitmap} \left\{ \begin{array}{ccc} x_0\in K,\\ x_n = & \alpha_n x_{n-1}+\beta_n T_{j(n)}^{k(n)}y_n+\gamma_nu_n\\ y_n = & \widehat{\alpha}_n x_n+\widehat{\beta}_n I_{j(n)}^{k(n)}x_n+\widehat{\gamma}_nv_n \end{array}\right. \ \ n\geq 1 \end{eqnarray} where $\{\alpha_n\},$ $\{\beta_n\},$ $\{\gamma_n\},$ $\{\widehat{\alpha}_n\},$ $\{\widehat{\beta}_n\}$, $\{\widehat{\gamma}_n\}$ are six sequences in $[0,1]$ satisfying $\alpha_n+\beta_n+\gamma_n=\widehat{\alpha}_n+\widehat{\beta}_n+\widehat{\gamma}_n=1$ for all $n\ge1$, and $\{u_n\}$, $\{v_n\}$ are bounded sequences in $K.$ In this paper we shall prove the weak and strong convergence of the implicit iterative process \eqref{implicitmap} to a common fixed point of $\{T_j\}_{j=1}^N$ and $\{I_j\}_{j=1}^N$. All results presented here generalize and extend the corresponding main results of \cite{[Sun]}, \cite{XO}, \cite{fk}, \cite{gou}. \section{Preliminaries} Throughout this paper, we always assume that $X$ is a real Banach space. We denote by $F(T)$ and $D(T)$ the set of fixed points and the domain of a mapping $T,$ respectively.
Recall that a Banach space $X$ is said to satisfy the \emph{Opial condition} \cite{[Opial]}, if for each sequence $\{x_{n}\}$ in $X$ converging weakly to $x\in X$, one has \begin{eqnarray}\label{Opialcondition} \liminf\limits_{n\to\infty}\|x_n-x\|< \liminf\limits_{n\to\infty}\|x_n-y\| \end{eqnarray} for all $y\in X$ with $y\neq x.$ It is well known that (see \cite{[LamiDozo]}) inequality \eqref{Opialcondition} is equivalent to \begin{eqnarray*} \limsup\limits_{n\to\infty}\|x_n-x\|< \limsup\limits_{n\to\infty}\|x_n-y\| \end{eqnarray*} for all $y\in X$ with $y\neq x.$ \begin{definition} Let $K$ be a closed subset of a real Banach space $X$ and $T:K\to K$ be a mapping. \begin{itemize} \item[(i)] A mapping $T$ is said to be semi-closed (demi-closed) at zero, if for each bounded sequence $\{x_n\}$ in $K,$ the conditions $x_n$ converges weakly to $x\in K$ and $Tx_n$ converges strongly to $0$ imply $Tx=0.$ \item[(ii)] A mapping $T$ is said to be semi-compact, if for any bounded sequence $\{x_n\}$ in $K$ such that $\|x_n-Tx_n\|\to 0\ (n\to\infty),$ there exists a subsequence $\{x_{n_k}\}\subset\{x_n\}$ such that $x_{n_k}\to x^*\in K$ strongly. \item[(iii)] $T$ is called a uniformly $L-$Lipschitzian mapping, if there exists a constant $L>0$ such that $\|T^nx-T^ny\|\le L\|x-y\|$ for all $x,y\in K$ and $n\ge 1.$ \end{itemize} \end{definition} \begin{proposition}\label{commonLandlambda} Let $K$ be a nonempty subset of a real Banach space $X,$ $\{T_j\}_{j=1}^N:K\to K$ and $\{I_j\}_{j=1}^N:K\to K$ be finite families of mappings. \begin{itemize} \item[(i)] If $\{T_j\}_{j=1}^N$ is a finite family of asymptotically $I_j-$nonexpansive (resp. asymptotically quasi $I_j-$nonexpansive) mappings with sequences $\{\lambda^{(j)}_n\}\subset[1,\infty),$ then there exists a sequence $\{\lambda_n\}\subset[1,\infty)$ such that $\{T_j\}_{j=1}^N$ is a finite family of asymptotically $I_j-$nonexpansive (resp.
asymptotically quasi $I_j-$nonexpansive) mappings with a common sequence $\{\lambda_n\}\subset[1,\infty).$ \item[(ii)] If $\{T_j\}_{j=1}^N$ is a finite family of uniformly $L_j-$Lipschitzian mappings, then there exists a constant $L>0$ such that $\{T_j\}_{j=1}^N$ is a finite family of uniformly $L-$Lipschitzian mappings. \end{itemize} \end{proposition} \begin{pf} We shall prove part (i) of Proposition \ref{commonLandlambda}; part (ii) can be proved analogously. Without loss of generality, we may assume that $\{T_j\}_{j=1}^N$ is a finite family of asymptotically $I_j-$nonexpansive mappings with sequences $\{\lambda^{(j)}_n\}\subset[1,\infty).$ Then we have $$\|T^n_jx-T^n_jy\|\le\lambda^{(j)}_n\|I^n_jx-I^n_jy\|, \qquad \forall x,y\in K,\ \forall n\ge1,\ j=\overline{1,N}.$$ Let $\lambda_n=\max\limits_{j=\overline{1,N}}\lambda^{(j)}_n.$ Then $\{\lambda_n\}\subset[1,\infty)$ with $\lambda_n\to 1,$ $n\to\infty$ and $$\|T^n_jx-T^n_jy\|\le\lambda_n\|I^n_jx-I^n_jy\|, \ \ \forall n\ge1,$$ for all $x,y\in K,$ and each $j=\overline{1,N}.$ This means that $\{T_j\}_{j=1}^N$ is a finite family of asymptotically $I_j-$nonexpansive mappings with a common sequence $\{\lambda_n\}\subset[1,\infty).$ \end{pf} The following lemmas play an important role in proving our main results.
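As a side remark on the Opial condition recalled above, every Hilbert space satisfies it (a standard one-line argument, added here for the reader's convenience): if $x_n\rightharpoonup x$ and $y\neq x$, then

```latex
\|x_n-y\|^2=\|x_n-x\|^2+2\langle x_n-x,\;x-y\rangle+\|x-y\|^2,
% and the inner-product term tends to 0 by weak convergence, so
\liminf_{n\to\infty}\|x_n-y\|^2=\liminf_{n\to\infty}\|x_n-x\|^2+\|x-y\|^2
>\liminf_{n\to\infty}\|x_n-x\|^2 .
```

The same argument works in the spaces $\ell_p$ with $1<p<\infty$ via their duality mappings, whereas the condition may fail in $L_p$ spaces with $p\neq 2$.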
\begin{lemma}[see \cite{[Shu]}]\label{convexxnyn} Let $X$ be a uniformly convex Banach space and $b,c$ be two constants with $0<b<c<1.$ Suppose that $\{t_n\}$ is a sequence in $[b,c]$ and $\{x_n\},$ $\{y_n\}$ are two sequences in $X$ such that \begin{eqnarray*} \lim\limits_{n\to\infty}\|t_nx_n+(1-t_n)y_n\|=d, \ \ \ \limsup\limits_{n\to\infty}\|x_n\|\le d, \ \ \ \limsup\limits_{n\to\infty}\|y_n\|\le d, \end{eqnarray*} hold for some $d\ge 0.$ Then $\lim\limits_{n\to\infty}\|x_n-y_n\|=0.$ \end{lemma} \begin{lemma}[see \cite{[TanXu]}]\label{convergean} Let $\{a_n\},$ $\{b_n\}$ and $\{c_n\}$ be three sequences of nonnegative real numbers with $\sum\limits_{n=1}^{\infty}b_n<\infty$ and $\sum\limits_{n=1}^{\infty}c_n<\infty.$ If the following condition is satisfied $$a_{n+1}\le(1+b_n)a_n+c_n, \ \ \ n\ge 1,$$ then the limit $\lim\limits_{n\to\infty}a_n$ exists. \end{lemma} \section{Main results} In this section we shall prove our main results concerning weak and strong convergence of the sequence defined by \eqref{implicitmap}. To formulate them, we need some auxiliary results.
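To indicate how Lemma \ref{convergean} is typically used (a sketch of the standard argument behind it), iterating the inequality $a_{n+1}\le(1+b_n)a_n+c_n$ yields

```latex
a_{n+1}\;\le\;\Big(\prod_{i=1}^{n}(1+b_i)\Big)a_1+\sum_{j=1}^{n}c_j\prod_{i=j+1}^{n}(1+b_i)
\;\le\;\exp\Big(\sum_{i=1}^{\infty}b_i\Big)\Big(a_1+\sum_{j=1}^{\infty}c_j\Big)<\infty,
```

so $\{a_n\}$ is bounded, and the summability of $\{b_n\}$ and $\{c_n\}$ then forces the limit $\lim\limits_{n\to\infty}a_n$ to exist; the same mechanism reappears in the proof of Theorem \ref{criteria} below.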
\begin{lemma}\label{limexistsxnminusp} Let $X$ be a real Banach space and $K$ be a nonempty closed convex subset of $X.$ Let $\{T_j\}_{j=1}^N:K\to K$ be a finite family of asymptotically quasi $I_j-$nonexpansive mappings with a common sequence $\{\lambda_n\}\subset[1,\infty)$ and $\{I_j\}_{j=1}^N:K\to K$ be a finite family of asymptotically quasi-nonexpansive mappings with a common sequence $\{\mu_n\}\subset[1,\infty)$ such that $F=\bigcap\limits_{j=1}^N \left(F(T_j)\cap F(I_j)\right)\neq\emptyset.$ Suppose $B^{*}=\sup\limits_{n}\beta_n,$ $\Lambda=\sup\limits_{n}\lambda_n\ge1,$ $M=\sup\limits_{n}\mu_n\ge1$ and $\{\alpha_n\},$ $\{\beta_n\},$ $\{\gamma_n\},$ $\{\widehat{\alpha}_n\},$ $\{\widehat{\beta}_n\},$ $\{\widehat{\gamma}_n\}$ are six sequences in $[0,1]$ which satisfy the following conditions: \begin{itemize} \item[(i)] $\alpha_n+\beta_n+\gamma_n=\widehat{\alpha}_n+\widehat{\beta}_n+\widehat{\gamma}_n=1, \ \ \ \forall n\ge1,$ \item[(ii)] $\sum\limits_{n=1}^{\infty}(\lambda_n\mu_n-1)\beta_n<\infty,$ \item[(iii)] $B^{*}<\dfrac{1}{\Lambda^2M^2},$ \item[(iv)] $\sum\limits_{n=1}^\infty\gamma_n<\infty, \ \ \sum\limits_{n=1}^\infty\widehat{\gamma}_n<\infty.$ \end{itemize} Then for the implicit iterative sequence $\{x_n\}$ with errors defined by \eqref{implicitmap} and for each $p\in F$ the limit $\lim\limits_{n\to\infty}\|x_n-p\|$ exists. 
\end{lemma} \begin{pf} Since $F=\bigcap\limits_{j=1}^N \left(F(T_j)\cap F(I_j)\right)\neq\emptyset,$ for any given $p\in F,$ it follows from \eqref{implicitmap} that \begin{eqnarray*} \|x_n-p\| &=& \|\alpha_n(x_{n-1}-p)+\beta_n(T^{k(n)}_{j(n)}y_n-p)+\gamma_n(u_n-p)\|\\ &\le& (1-\beta_n-\gamma_n)\|x_{n-1}-p\|+\beta_n\|T^{k(n)}_{j(n)}y_n-p\|+\gamma_n\|u_n-p\|\\ &\le& (1-\beta_n-\gamma_n)\|x_{n-1}-p\|+\beta_n\lambda_{k(n)}\|I^{k(n)}_{j(n)}y_n-p\|+\gamma_n\|u_n-p\|\\ &\le& (1-\beta_n-\gamma_n)\|x_{n-1}-p\|+\beta_n\lambda_{k(n)}\mu_{k(n)}\|y_n-p\|+\gamma_n\|u_n-p\|\\ &\le& (1-\beta_n)\|x_{n-1}-p\|+\beta_n\lambda_{k(n)}\mu_{k(n)}\|y_n-p\|+\gamma_n\|u_n-p\|. \end{eqnarray*} Again using \eqref{implicitmap} we find \begin{eqnarray}\label{inequlatyforyn} \|y_n-p\| &=& \|\widehat\alpha_n(x_n-p)+\widehat\beta_n(I^{k(n)}_{j(n)}x_n-p)+\widehat\gamma_n(v_n-p)\|\nonumber\\ &\le& (1-\widehat\beta_n-\widehat\gamma_n)\|x_n-p\|+\widehat\beta_n\|I^{k(n)}_{j(n)}x_n-p\|+\widehat\gamma_n\|v_n-p\|\nonumber\\ &\le& (1-\widehat\beta_n-\widehat\gamma_n)\|x_n-p\|+\widehat\beta_n\mu_{k(n)}\|x_n-p\|+\widehat\gamma_n\|v_n-p\|\nonumber\\ &\le& (1-\widehat\beta_n)\|x_n-p\|+\widehat\beta_n\mu_{k(n)}\|x_n-p\|+\widehat\gamma_n\|v_n-p\|\nonumber\\ &\le& (1-\widehat\beta_n)\mu_{k(n)}\|x_n-p\|+\widehat\beta_n\mu_{k(n)}\|x_n-p\|+\widehat\gamma_n\|v_n-p\|\nonumber\\ &\le& \mu_{k(n)}\|x_n-p\|+\widehat\gamma_n\|v_n-p\| \nonumber\\ &\le& \lambda_{k(n)}\mu_{k(n)}\|x_n-p\|+\widehat\gamma_n\|v_n-p\|. \end{eqnarray} Then from \eqref{inequlatyforyn} we have \begin{eqnarray*} \|x_n-p\| &\le& (1-\beta_n)\|x_{n-1}-p\|+\beta_n\lambda^2_{k(n)}\mu^2_{k(n)}\|x_n-p\|\\ &&\qquad \qquad +\gamma_n\|u_n-p\|+\beta_n\lambda_{k(n)}\mu_{k(n)}\widehat\gamma_n\|v_n-p\|, \end{eqnarray*} so one gets \begin{eqnarray}\label{inequlatyforxn} (1-\beta_n\lambda^2_{k(n)}\mu^2_{k(n)})\|x_n-p\| &\le& (1-\beta_n)\|x_{n-1}-p\| \nonumber\\ &&+\gamma_n\|u_n-p\|+\beta_n\lambda_{k(n)}\mu_{k(n)}\widehat\gamma_n\|v_n-p\|. 
\end{eqnarray} By condition (iii) we obtain $\beta_n\lambda^2_{k(n)}\mu^2_{k(n)}\le B^{*}\Lambda^2 M^2<1,$ and therefore $$1-\beta_n\lambda^2_{k(n)}\mu^2_{k(n)}\ge 1-B^{*}\Lambda^2 M^2>0.$$ Hence from \eqref{inequlatyforxn} we derive \begin{eqnarray*} \|x_n-p\| &\le& \frac{1-\beta_n}{1-\beta_n\lambda^2_{k(n)}\mu^2_{k(n)}}\|x_{n-1}-p\|\\ &&\qquad\qquad+\frac{\gamma_n\|u_n-p\|+\beta_n\lambda_{k(n)}\mu_{k(n)}\widehat\gamma_n\|v_n-p\|}{1-\beta_n\lambda^2_{k(n)}\mu^2_{k(n)}}\\ &=& \left(1+\frac{(\lambda^2_{k(n)}\mu^2_{k(n)}-1)\beta_n}{1-\beta_n\lambda^2_{k(n)}\mu^2_{k(n)}}\right)\|x_{n-1}-p\|\\ &&\qquad\qquad +\frac{\gamma_n\|u_n-p\|+\beta_n\lambda_{k(n)}\mu_{k(n)}\widehat\gamma_n\|v_n-p\|}{1-\beta_n\lambda^2_{k(n)}\mu^2_{k(n)}}\\ &\le& \left(1+\frac{(\lambda^2_{k(n)}\mu^2_{k(n)}-1)\beta_n}{1-B^{*}\Lambda^2M^2}\right)\|x_{n-1}-p\|\\ &&\qquad\qquad +\frac{\gamma_n\|u_n-p\|+\beta_n\lambda_{k(n)}\mu_{k(n)}\widehat\gamma_n\|v_n-p\|}{1-B^{*}\Lambda^2M^2}. \end{eqnarray*} Let $$b_n=\dfrac{(\lambda^2_{k(n)}\mu^2_{k(n)}-1)\beta_n}{1-B^{*}\Lambda^2M^2}, \ \ \ \ \ c_n=\dfrac{\gamma_n\|u_n-p\|+\beta_n\lambda_{k(n)}\mu_{k(n)}\widehat\gamma_n\|v_n-p\|}{1-B^{*}\Lambda^2M^2}.$$ Then the last inequality has the following form \begin{eqnarray}\label{VIinequalityforxn} \|x_n-p\| &\le& \left(1+b_n\right)\|x_{n-1}-p\|+c_n. 
\end{eqnarray} From condition (ii) we find \begin{eqnarray*} \sum\limits_{n=1}^\infty b_n &=& \frac{1}{1-B^{*}\Lambda^2M^2}\sum\limits_{n=1}^{\infty}(\lambda^2_{k(n)}\mu^2_{k(n)}-1)\beta_n\\ &=&\frac{1}{1-B^{*}\Lambda^2M^2}\sum\limits_{n=1}^{\infty}(\lambda_{k(n)}\mu_{k(n)}-1)(\lambda_{k(n)}\mu_{k(n)}+1)\beta_n\\ &\le& \frac{\Lambda M+1}{1-B^{*}\Lambda^2M^2}\sum\limits_{n=1}^{\infty}(\lambda_{k(n)}\mu_{k(n)}-1)\beta_n<\infty, \end{eqnarray*} and boundedness of the sequences $\{\|u_n-p\|\},$ $\{\|v_n-p\|\}$ together with (iv) implies \begin{eqnarray*} \sum\limits_{n=1}^\infty c_n &=& \sum\limits_{n=1}^\infty\frac{\gamma_n\|u_n-p\|+\beta_n\lambda_{k(n)}\mu_{k(n)}\widehat\gamma_n\|v_n-p\|}{1-B^{*}\Lambda^2M^2}\\ &\le& \frac{1}{1-B^{*}\Lambda^2M^2}\sum\limits_{n=1}^\infty\gamma_n\|u_n-p\|+\frac{B^{*}\Lambda M}{1-B^{*}\Lambda^2M^2}\sum\limits_{n=1}^\infty\widehat\gamma_n\|v_n-p\|<\infty. \end{eqnarray*} Now taking $a_n=\|x_{n-1}-p\|$ in \eqref{VIinequalityforxn} we obtain \begin{eqnarray*} a_{n+1}\le(1+b_n)a_n+c_n, \end{eqnarray*} and according to Lemma \ref{convergean} the limit $\lim\limits_{n\to\infty}a_n$ exists. This means that the limit \begin{eqnarray}\label{limofxnminusp} \lim\limits_{n\to\infty}\|x_n-p\|=d \end{eqnarray} exists, where $d\ge0$ is a constant. This completes the proof. \end{pf} Now we are ready to prove a general criterion for strong convergence of \eqref{implicitmap}.
\begin{theorem}\label{criteria} Let $X$ be a real Banach space and $K$ be a nonempty closed convex subset of $X.$ Let $\{T_j\}_{j=1}^N:K\to K$ be a finite family of uniformly $L_1-$Lipschitzian asymptotically quasi $I_j-$nonexpansive mappings with a common sequence $\{\lambda_n\}\subset[1,\infty)$ and $\{I_j\}_{j=1}^N:K\to K$ be a finite family of uniformly $L_2-$Lipschitzian asymptotically quasi-nonexpansive mappings with a common sequence $\{\mu_n\}\subset[1,\infty)$ such that $F=\bigcap\limits_{j=1}^N \left(F(T_j)\cap F(I_j)\right)\neq\emptyset.$ Suppose $B^{*}=\sup\limits_{n}\beta_n,$ $\Lambda=\sup\limits_{n}\lambda_n\ge1,$ $M=\sup\limits_{n}\mu_n\ge1$ and $\{\alpha_n\},$ $\{\beta_n\},$ $\{\gamma_n\},$ $\{\widehat{\alpha}_n\},$ $\{\widehat{\beta}_n\},$ $\{\widehat{\gamma}_n\}$ are six sequences in $[0,1]$ which satisfy the following conditions: \begin{itemize} \item[(i)] $\alpha_n+\beta_n+\gamma_n=\widehat{\alpha}_n+\widehat{\beta}_n+\widehat{\gamma}_n=1, \ \ \ \forall n\ge1,$ \item[(ii)] $\sum\limits_{n=1}^{\infty}(\lambda_n\mu_n-1)\beta_n<\infty,$ \item[(iii)] $B^{*}<\dfrac{1}{\Lambda^2M^2},$ \item[(iv)] $\sum\limits_{n=1}^\infty\gamma_n<\infty, \ \ \sum\limits_{n=1}^\infty\widehat{\gamma}_n<\infty.$ \end{itemize} Then the implicit iterative sequence $\{x_n\}$ with errors defined by \eqref{implicitmap} converges strongly to a common fixed point in $F$ if and only if \begin{eqnarray}\label{conditionforcriteria} \liminf\limits_{n\to\infty}d(x_n,F)=0. \end{eqnarray} \end{theorem} \begin{pf} The necessity of condition \eqref{conditionforcriteria} is obvious. Let us prove the sufficiency part of the theorem. Since $T_j,I_j:K\to K$ are uniformly $L_1,L_2-$Lipschitzian mappings, respectively, $T_j$ and $I_j$ are continuous mappings, for each $j=\overline{1,N}$. Therefore, the sets $F(T_j)$ and $F(I_j)$ are closed, for each $j=\overline{1,N}$. Hence $F=\bigcap\limits_{j=1}^N \left(F(T_j)\cap F(I_j)\right)$ is a nonempty closed set.
For any given $p\in F,$ we have (see \eqref{VIinequalityforxn}) \begin{eqnarray}\label{xnminusp} \|x_n-p\| &\le& \left(1+b_n\right)\|x_{n-1}-p\|+c_n, \end{eqnarray} where $\sum\limits_{n=1}^\infty b_n<\infty$ and $\sum\limits_{n=1}^\infty c_n<\infty.$ Hence, we find \begin{eqnarray}\label{dxnF} d(x_n,F) &\le& \left(1+b_n\right)d(x_{n-1},F)+c_n \end{eqnarray} So, the inequality \eqref{dxnF} with Lemma \ref{convergean} implies the existence of the limit $\lim\limits_{n\to\infty}d(x_n,F)$. By condition \eqref{conditionforcriteria}, one gets \begin{eqnarray*} \lim\limits_{n\to\infty}d(x_n,F)&=& \liminf\limits_{n\to\infty}d(x_n,F)=0. \end{eqnarray*} Let us prove that the sequence $\{x_n\}$ converges to a common fixed point of $\{T_j\}_{j=1}^N$ and $\{I_j\}_{j=1}^N.$ In fact, due to $1+t\le \exp(t)$ for all $t>0,$ and from \eqref{xnminusp}, one finds \begin{eqnarray}\label{xnminuspandexp} \|x_n-p\| &\le& \exp(b_n)\|x_{n-1}-p\|+c_n. \end{eqnarray} Hence, for any positive integer $m,n,$ from \eqref{xnminuspandexp} we find \begin{eqnarray}\label{xnplusmp} \|x_{n+m}-p\| &\le& \exp(b_{n+m})\|x_{n+m-1}-p\|+c_{n+m}\nonumber\\ &\le& \exp(b_{n+m}+b_{n+m-1})\|x_{n+m-2}-p\|\nonumber\\ &&+c_{n+m}+c_{n+m-1}\exp(b_{n+m})\nonumber\\ &\le& \exp(b_{n+m}+b_{n+m-1}+b_{n+m-2})\|x_{n+m-3}-p\|\nonumber\\ &&+c_{n+m}+c_{n+m-1}\exp(b_{n+m})+c_{n+m-2}\exp(b_{n+m}+b_{n+m-1})\nonumber\\ &&\vdots\nonumber \\ &\le& \exp\left(\sum\limits_{i=n+1}^{n+m} b_i\right)\|x_n-p\|+c_{n+m}+\sum\limits_{j=n+1}^{n+m-1}c_{j}\exp\left(\sum\limits_{i=j+1}^{n+m} b_i\right)\nonumber\\ &\le& \exp\left(\sum\limits_{i=n+1}^{n+m} b_i\right)\|x_n-p\|\nonumber\\ &&+c_{n+m}\exp\left(\sum\limits_{i=n+1}^{n+m} b_i\right)+\sum\limits_{j=n+1}^{n+m-1}c_{j}\exp\left(\sum\limits_{i=n+1}^{n+m} b_i\right)\nonumber\\ &\le& \exp\left(\sum\limits_{i=n+1}^{n+m} b_i\right)\left(\|x_n-p\|+\sum\limits_{j=n+1}^{n+m}c_j\right)\nonumber\\ &\le& \exp\left(\sum\limits_{i=1}^{\infty} 
b_i\right)\left(\|x_n-p\|+\sum\limits_{j=n+1}^{\infty}c_j\right)\nonumber\\ &\le& W\left(\|x_n-p\|+\sum\limits_{j=n+1}^{\infty}c_j\right), \end{eqnarray} for all $p\in F$, where $W=\exp\left(\sum\limits_{i=1}^{\infty} b_i\right)<\infty.$ Since $\lim\limits_{n\to\infty}d(x_n,F)=0$ and $\sum\limits_{j=1}^{\infty} c_j<\infty,$ for any given $\varepsilon >0,$ there exists a positive integer $n_0$ such that \begin{eqnarray*}d(x_{n_0},F)<\frac{\varepsilon}{2W}, \qquad \sum\limits_{j=n_0+1}^{\infty} c_j<\frac{\varepsilon}{2W}. \end{eqnarray*} Therefore there exists $p_1\in F$ such that \begin{eqnarray*}\|x_{n_0}-p_1\|<\frac{\varepsilon}{2W},\qquad \sum\limits_{j=n_0+1}^{\infty} c_j<\frac{\varepsilon}{2W}. \end{eqnarray*} Consequently, for all $n\ge n_0$ from \eqref{xnplusmp} we have \begin{eqnarray*} \|x_{n}-p_1\| &\le& W\left(\|x_{n_0}-p_1\|+\sum\limits_{j=n_0+1}^{\infty}c_j\right)\\ &<&W\cdot\frac{\varepsilon}{2W}+W\cdot\frac{\varepsilon}{2W}\\ &=& \varepsilon, \end{eqnarray*} which means that the sequence $\{x_n\}$ converges strongly to a common fixed point $p_1$ of $\{T_j\}_{j=1}^N$ and $\{I_j\}_{j=1}^N.$ This completes the proof. \end{pf} To prove our main results we need one more auxiliary result.
\begin{proposition}\label{xnTxn&xnIxn} Let $X$ be a real uniformly convex Banach space and $K$ be a nonempty closed convex subset of $X.$ Let $\{T_j\}_{j=1}^N:K\to K$ be a finite family of uniformly $L_1-$Lipschitzian asymptotically quasi $I_j-$nonexpansive mappings with a common sequence $\{\lambda_n\}\subset[1,\infty)$ and $\{I_j\}_{j=1}^N:K\to K$ be a finite family of uniformly $L_2-$Lipschitzian asymptotically quasi-nonexpansive mappings with a common sequence $\{\mu_n\}\subset[1,\infty)$ such that $F=\bigcap\limits_{j=1}^N \left(F(T_j)\cap F(I_j)\right)\neq\emptyset.$ Suppose $B_{*}=\inf\limits_{n}\beta_n,$ $B^{*}=\sup\limits_{n}\beta_n,$ $\Lambda=\sup\limits_{n}\lambda_n\ge1,$ $M=\sup\limits_{n}\mu_n\ge1$ and $\{\alpha_n\},$ $\{\beta_n\},$ $\{\gamma_n\},$ $\{\widehat{\alpha}_n\},$ $\{\widehat{\beta}_n\},$ $\{\widehat{\gamma}_n\}$ are six sequences in $[0,1]$ which satisfy the following conditions: \begin{itemize} \item[(i)] $\alpha_n+\beta_n+\gamma_n=\widehat{\alpha}_n+\widehat{\beta}_n+\widehat{\gamma}_n=1, \ \ \ \forall n\ge1,$ \item[(ii)] $\sum\limits_{n=1}^{\infty}(\lambda_n\mu_n-1)\beta_n<\infty,$ \item[(iii)] $0<B_{*}\le B^{*}<\dfrac{1}{\Lambda^2M^2}<1,$ \item[(iv)] $0<\widehat B_{*}=\inf\limits_{n}\widehat\beta_n\le\sup\limits_{n}\widehat\beta_n=\widehat B^{*}<1,$ \item[(v)] $\sum\limits_{n=1}^\infty\gamma_n<\infty, \ \ \sum\limits_{n=1}^\infty\widehat{\gamma}_n<\infty.$ \end{itemize} Then the implicit iterative sequence $\{x_n\}$ with errors defined by \eqref{implicitmap} satisfies the following $$\lim\limits_{n\to\infty}\|x_n-T_jx_n\|=0, \qquad \lim\limits_{n\to\infty}\|x_n-I_jx_n\|=0, \qquad \forall j=\overline{1,N}.$$ \end{proposition} \begin{pf} First, we shall prove that $$\lim\limits_{n\to\infty}\|x_n-T^{k(n)}_{j(n)}x_n\|=0, \qquad \lim\limits_{n\to\infty}\|x_n-I^{k(n)}_{j(n)}x_n\|=0.$$ According to Lemma \ref{limexistsxnminusp} for any $p\in F$ we have \begin{eqnarray}\label{xnminuspequald} \lim\limits_{n\to\infty}\|x_n-p\|=d. 
\end{eqnarray} So, the sequence $\{x_n\}$ is bounded in $K$. It follows from \eqref{implicitmap} that \begin{eqnarray}\label{xnpnd} \|x_n-p\|&=& \|(1-\beta_n)(x_{n-1}-p+\gamma_n(u_n-x_{n-1}))\nonumber\\ &&\qquad \qquad+\beta_n(T^{k(n)}_{j(n)}y_n-p+\gamma_n(u_n-x_{n-1}))\|. \end{eqnarray} Due to condition (v) and boundedness of the sequences $\{u_n\}$ and $\{x_n\}$ we have \begin{eqnarray}\label{xnpgammanun} \limsup\limits_{n\to\infty}\|x_{n-1}-p+\gamma_n(u_n-x_{n-1})\| &\le& \limsup\limits_{n\to\infty}\|x_{n-1}-p\|+\limsup\limits_{n\to\infty}\gamma_n\|u_n-x_{n-1}\|=d.\qquad \end{eqnarray} By means of asymptotic quasi $I_j-$nonexpansivity of $T_j$ and asymptotic quasi-nonexpansivity of $I_j$, from \eqref{inequlatyforyn} and boundedness of $\{u_n\},$ $\{v_n\},$ $\{x_n\}$ with condition (v) we obtain \begin{eqnarray}\label{Tnynp} \limsup\limits_{n\to\infty}\|T^{k(n)}_{j(n)}y_n-p+\gamma_n(u_n-x_{n-1})\| &\le& \limsup\limits_{n\to\infty}\lambda_{k(n)}\mu_{k(n)}\|y_n-p\|+\limsup\limits_{n\to\infty}\gamma_n\|u_n-x_{n-1}\|\nonumber\\ &=& \limsup\limits_{n\to\infty}\|y_n-p\|\nonumber\\ &\le& \limsup\limits_{n\to\infty}\lambda_{k(n)}\mu_{k(n)}\|x_n-p\|+\limsup\limits_{n\to\infty}\widehat\gamma_n\|v_n-p\|=d. \end{eqnarray} Now using \eqref{xnpgammanun}, \eqref{Tnynp} and applying Lemma \ref{convexxnyn} to \eqref{xnpnd} one finds \begin{eqnarray}\label{xnTnyn} \lim\limits_{n\to\infty}\|x_{n-1}-T^{k(n)}_{j(n)}y_n\|=0. \end{eqnarray} From \eqref{implicitmap}, \eqref{xnTnyn} and condition (v) we infer that \begin{eqnarray}\label{xnxnminus1} \lim\limits_{n\to\infty}\|x_n-x_{n-1}\| &=& \lim\limits_{n\to\infty}\|\beta_n(T^{k(n)}_{j(n)}y_n-x_{n-1})+\gamma_n(u_n-x_{n-1})\|=0.\qquad \end{eqnarray} From \eqref{xnxnminus1} one can get \begin{eqnarray}\label{xnxnminusj} \lim\limits_{n\to\infty}\|x_n-x_{n+j}\| &=& 0, \qquad j=\overline{1,N}.
\end{eqnarray} On the other hand, we have \begin{eqnarray*} \|x_{n-1}-p\| &\le& \|x_{n-1}-T^{k(n)}_{j(n)}y_n\|+\|T^{k(n)}_{j(n)}y_n-p\|\\ &\le& \|x_{n-1}-T^{k(n)}_{j(n)}y_n\|+\lambda_{k(n)}\mu_{k(n)}\|y_n-p\|, \end{eqnarray*} which means \begin{eqnarray*} \|x_{n-1}-p\|-\|x_{n-1}-T^{k(n)}_{j(n)}y_n\|\le\lambda_{k(n)}\mu_{k(n)}\|y_n-p\|. \end{eqnarray*} The last inequality with \eqref{inequlatyforyn} implies that \begin{eqnarray*} \|x_{n-1}-p\|-\|x_{n-1}-T^{k(n)}_{j(n)}y_n\|&\le& \lambda_{k(n)}\mu_{k(n)}\|y_n-p\|\le\\ &\le& \lambda^2_{k(n)}\mu^2_{k(n)}\|x_n-p\|+\lambda_{k(n)}\mu_{k(n)}\widehat\gamma_n\|v_n-p\|. \end{eqnarray*} Then condition (v) and \eqref{xnTnyn}, \eqref{xnminuspequald} with the Squeeze theorem yield \begin{eqnarray}\label{ynpd} \lim\limits_{n\to\infty}\|y_n-p\|=d \end{eqnarray} Again from \eqref{implicitmap} we can see that \begin{eqnarray}\label{ynptod} \|y_n-p\|=\|(1-\widehat\beta_n)(x_n-p&+&\widehat\gamma_n(v_n-x_n))\nonumber\\ &+&\widehat\beta_n(I^{k(n)}_{j(n)}x_n-p+\widehat\gamma_n(v_n-x_n))\|. \end{eqnarray} From \eqref{xnminuspequald} with condition (v) one finds \begin{eqnarray*} \limsup\limits_{n\to\infty}\|x_n&-&p+\widehat\gamma_n(v_n-x_n)\|\le \nonumber \\ &\le& \limsup\limits_{n\to\infty}\|x_n-p\|+\limsup\limits_{n\to\infty}\widehat\gamma_n\|v_n-x_n\|=d.\qquad\quad \end{eqnarray*} and \begin{eqnarray*} \limsup\limits_{n\to\infty}\|I^{k(n)}_{j(n)}x_n&-&p+\widehat\gamma_n(v_n-x_n)\|\nonumber\\ &\le& \limsup\limits_{n\to\infty}\mu_{k(n)}\|x_n-p\|+\limsup\limits_{n\to\infty}\widehat\gamma_n\|v_n-x_{n}\|\nonumber\\ &\le& \limsup\limits_{n\to\infty}\|x_n-p\|=d. \end{eqnarray*} Now applying Lemma \ref{convexxnyn} to \eqref{ynptod} we obtain \begin{eqnarray}\label{xnInxn} \lim\limits_{n\to\infty}\|x_n-I^{k(n)}_{j(n)}x_n\|=0. 
\end{eqnarray} Consider \begin{eqnarray*} \|x_n-T^{k(n)}_{j(n)}x_n\| &\le& \|x_n-x_{n-1}\|+\|x_{n-1}-T^{k(n)}_{j(n)}y_n\|+\|T^{k(n)}_{j(n)}y_n-T^{k(n)}_{j(n)}x_n\|\\ &\le& \|x_n-x_{n-1}\|+\|x_{n-1}-T^{k(n)}_{j(n)}y_n\|+L_1\|y_n-x_n\|\\ &=& \|x_n-x_{n-1}\|+\|x_{n-1}-T^{k(n)}_{j(n)}y_n\|\\ &&+L_1\|\widehat\beta_n(I^{k(n)}_{j(n)}x_n-x_n)+\widehat\gamma_n(v_n-x_n)\|\\ &\le& \|x_n-x_{n-1}\|+\|x_{n-1}-T^{k(n)}_{j(n)}y_n\|\\ &&+L_1\widehat\beta_n\|I^{k(n)}_{j(n)}x_n-x_n\|+L_1\widehat\gamma_n\|v_n-x_n\|. \end{eqnarray*} Then from \eqref{xnTnyn}, \eqref{xnxnminus1}, \eqref{xnInxn} and condition (v) we get \begin{eqnarray}\label{xnTnxn} \lim\limits_{n\to\infty}\|x_n-T^{k(n)}_{j(n)}x_n\|=0. \end{eqnarray} Now we prove that $$\lim\limits_{n\to\infty}\|x_n-T_jx_n\|=0, \qquad \lim\limits_{n\to\infty}\|x_n-I_jx_n\|=0, \qquad \forall j=\overline{1,N}.$$ For each $n>N$ we have $n\equiv n-N({\rm{mod}} N)$ and $n=(k(n)-1)N+j(n),$ hence $n-N=((k(n)-1)-1)N+j(n)=(k(n-N)-1)N+j(n-N),$ i.e. $$k(n-N)=k(n)-1, \qquad j(n-N)=j(n).$$ So letting $T_n :=T_{j(n)({\rm{mod}} N)}$ we obtain \begin{eqnarray*} \|x_n-T_nx_n\| &\le& \|x_n-T^{k(n)}_{j(n)}x_n\|+\|T^{k(n)}_{j(n)}x_n-T_{j(n)}x_n\|\\ &\le& \|x_n-T^{k(n)}_{j(n)}x_n\| +L_1\|T^{{k(n)}-1}_{j(n)}x_n-x_n\|\\ &\le& \|x_n-T^{k(n)}_{j(n)}x_n\|+ L_1\|T^{{k(n)}-1}_{j(n)}x_n-T^{{k(n)}-1}_{j(n-N)}x_{n-N}\|\\ && +L_1\|T^{{k(n)}-1}_{j(n-N)}x_{n-N}-x_{n-N}\|+L_1\|x_{n-N}-x_n\|\\ &\le& \|x_n-T^{k(n)}_{j(n)}x_n\| +L^2_1\|x_n-x_{n-N}\| \\ && +L_1\|T^{{k(n-N)}}_{j(n-N)}x_{n-N}-x_{n-N}\|+L_1\|x_{n-N}-x_n\|\\ &\le& \|x_n-T^{k(n)}_{j(n)}x_n\| +L_1(L_1+1)\|x_n-x_{n-N}\|\\ &&+L_1\|T^{{k(n-N)}}_{j(n-N)}x_{n-N}-x_{n-N}\| \end{eqnarray*} which with \eqref{xnxnminusj}, \eqref{xnTnxn} implies \begin{eqnarray}\label{xnTxn} \lim\limits_{n\to\infty}\|x_n-T_nx_n\|=0. \end{eqnarray} Analogously, one has \begin{eqnarray*} \|x_n-I_nx_n\| &\le& \|x_n-I^{k(n)}_{j(n)}x_n\| +L_2(L_2+1)\|x_n-x_{n-N}\|\\ &&+L_2\|I^{{k(n-N)}}_{j(n-N)}x_{n-N}-x_{n-N}\|,
\end{eqnarray*} which with \eqref{xnxnminusj}, \eqref{xnInxn} implies \begin{eqnarray}\label{xnIxn} \lim\limits_{n\to\infty}\|x_n-I_nx_n\|=0. \end{eqnarray} For any $j=\overline{1,N},$ from \eqref{xnxnminusj} and \eqref{xnTxn} we have \begin{eqnarray*} \|x_n-T_{n+j}x_n\|&\le& \|x_n-x_{n+j}\|+\|x_{n+j}-T_{n+j}x_{n+j}\|+\|T_{n+j}x_{n+j}-T_{n+j}x_n\|\\ &\le& (1+L_1)\|x_n-x_{n+j}\|+\|x_{n+j}-T_{n+j}x_{n+j}\|\to 0 \ \ (n\to\infty), \end{eqnarray*} which implies that the sequence \begin{eqnarray}\label{xnTnjxn} \bigcup\limits_{j=1}^N\bigl\{\|x_n-T_{n+j}x_n\|\bigr\}_{n=1}^\infty\to 0 \ \ (n\to\infty). \end{eqnarray} Analogously we have \begin{eqnarray*} \|x_n-I_{n+j}x_n\| &\le& (1+L_2)\|x_n-x_{n+j}\|+\|x_{n+j}-I_{n+j}x_{n+j}\|\to 0 \ \ (n\to\infty), \end{eqnarray*} and \begin{eqnarray}\label{xnInjxn} \bigcup\limits_{j=1}^N\bigl\{\|x_n-I_{n+j}x_n\|\bigr\}_{n=1}^\infty\to 0 \ \ (n\to\infty). \end{eqnarray} In view of \begin{eqnarray*} \bigl\{\|x_n-T_jx_n\|\bigr\}_{n=1}^\infty &=& \bigl\{\|x_n-T_{n+(j-n)}x_n\|\bigr\}_{n=1}^\infty\\ &=& \bigl\{\|x_n-T_{n+j_n}x_n\|\bigr\}_{n=1}^\infty\subset \bigcup\limits_{l=1}^N\bigl\{\|x_n-T_{n+l}x_n\|\bigr\}_{n=1}^\infty, \end{eqnarray*} and \begin{eqnarray*} \bigl\{\|x_n-I_jx_n\|\bigr\}_{n=1}^\infty &=& \bigl\{\|x_n-I_{n+(j-n)}x_n\|\bigr\}_{n=1}^\infty\\ &=& \bigl\{\|x_n-I_{n+j_n}x_n\|\bigr\}_{n=1}^\infty\subset \bigcup\limits_{l=1}^N\bigl\{\|x_n-I_{n+l}x_n\|\bigr\}_{n=1}^\infty, \end{eqnarray*} where $j-n\equiv j_n({\rm{mod}} N),$ $j_n\in \{1,2,\cdots,N\},$ from \eqref{xnTnjxn}, \eqref{xnInjxn} we find \begin{eqnarray}\label{xnTjxnxnIjxn} \lim\limits_{n\to\infty}\|x_n-T_jx_n\|=0, \qquad \lim\limits_{n\to\infty}\|x_n-I_jx_n\|=0, \qquad \forall j=\overline{1,N}. \end{eqnarray} \end{pf} Now we are ready to formulate one of the main results, concerning weak convergence of the sequence $\{x_n\}$.
\begin{theorem}\label{weakconvergence} Let $X$ be a real uniformly convex Banach space satisfying the Opial condition and $K$ be a nonempty closed convex subset of $X.$ Let $E:X\to X$ be the identity mapping, $\{T_j\}_{j=1}^N:K\to K$ be a finite family of uniformly $L_1-$Lipschitzian asymptotically quasi $I_j-$nonexpansive mappings with a common sequence $\{\lambda_n\}\subset[1,\infty)$ and $\{I_j\}_{j=1}^N:K\to K$ be a finite family of uniformly $L_2-$Lipschitzian asymptotically quasi-nonexpansive mappings with a common sequence $\{\mu_n\}\subset[1,\infty)$ such that $F=\bigcap\limits_{j=1}^N \left(F(T_j)\cap F(I_j)\right)\neq\emptyset.$ Suppose $B_{*}=\inf\limits_{n}\beta_n,$ $B^{*}=\sup\limits_{n}\beta_n,$ $\Lambda=\sup\limits_{n}\lambda_n\ge1,$ $M=\sup\limits_{n}\mu_n\ge1$ and $\{\alpha_n\},$ $\{\beta_n\},$ $\{\gamma_n\},$ $\{\widehat{\alpha}_n\},$ $\{\widehat{\beta}_n\},$ $\{\widehat{\gamma}_n\}$ are six sequences in $[0,1]$ which satisfy the following conditions: \begin{itemize} \item[(i)] $\alpha_n+\beta_n+\gamma_n=\widehat{\alpha}_n+\widehat{\beta}_n+\widehat{\gamma}_n=1, \ \ \ \forall n\ge1,$ \item[(ii)] $\sum\limits_{n=1}^{\infty}(\lambda_n\mu_n-1)\beta_n<\infty,$ \item[(iii)] $0<B_{*}\le B^{*}<\dfrac{1}{\Lambda^2M^2}<1,$ \item[(iv)] $0<\widehat B_{*}=\inf\limits_{n}\widehat\beta_n\le\sup\limits_{n}\widehat\beta_n=\widehat B^{*}<1,$ \item[(v)] $\sum\limits_{n=1}^\infty\gamma_n<\infty, \ \ \sum\limits_{n=1}^\infty\widehat{\gamma}_n<\infty.$ \end{itemize} If the mappings $E-T_j$ and $E-I_j$ are semi-closed at zero for every $j=\overline{1,N},$ then the implicit iterative sequence $\{x_n\}$ with errors defined by \eqref{implicitmap} converges weakly to a common fixed point of finite families of asymptotically quasi $I_j-$nonexpansive mappings $\{T_j\}_{j=1}^N$ and asymptotically quasi-nonexpansive mappings $\{I_j\}_{j=1}^N.$ \end{theorem} \begin{pf} Let $p\in F.$ Then, according to Lemma \ref{limexistsxnminusp}, the sequence $\{\|x_n-p\|\}$ converges.
Hence $\{x_n\}$ is bounded. Since $X$ is uniformly convex, it is reflexive; hence every bounded sequence in $X$ has a weakly convergent subsequence. From the boundedness of $\{x_n\}$ in $K,$ we can find a subsequence $\{x_{n_k}\}\subset\{x_n\}$ such that $\{x_{n_k}\}$ converges weakly to some $q\in K.$ Hence from \eqref{xnTjxnxnIjxn}, it follows that $$\lim\limits_{n_k\to\infty}\|x_{n_k}-T_jx_{n_k}\|=0, \ \lim\limits_{n_k\to\infty}\|x_{n_k}-I_jx_{n_k}\|=0, \qquad \forall j=\overline{1,N}.$$ Since the mappings $E-T_j$ and $E-I_j$ are semi-closed at zero, we have $T_jq=q$ and $I_jq=q$ for all $j=\overline{1,N},$ which means $q\in F.$ Finally, we prove that $\{x_n\}$ converges weakly to $q.$ Indeed, suppose the contrary; then there exists a subsequence $\{x_{n_j}\}\subset\{x_n\}$ such that $\{x_{n_j}\}$ converges weakly to $q_1\in K$ and $q_1\neq q$. By the same method as given above, we can also prove that $q_1\in F.$ Taking $p=q$ and $p=q_1$ and using the same argument given in the proof of \eqref{limofxnminusp}, we can prove that the limits $\lim\limits_{n\to\infty}\|x_n-q\|$ and $\lim\limits_{n\to\infty}\|x_n-q_1\|$ exist, and we have $$\lim\limits_{n\to\infty}\|x_n-q\|=d, \ \ \ \lim\limits_{n\to\infty}\|x_n-q_1\|=d_1,$$ where $d, d_1$ are two nonnegative numbers. By virtue of the Opial condition of $X$, one finds \begin{eqnarray*} d &=& \limsup\limits_{n_k\to\infty}\|x_{n_k}-q\| < \limsup\limits_{n_k\to\infty}\|x_{n_k}-q_1\|=\\ &=& \limsup\limits_{n_j\to\infty}\|x_{n_j}-q_1\| < \limsup\limits_{n_j\to\infty}\|x_{n_j}-q\| = d. \end{eqnarray*} This is a contradiction. Hence $q_1=q,$ which implies that $\{x_n\}$ converges weakly to $q.$ This completes the proof of Theorem \ref{weakconvergence}.
\end{pf} Next, we prove a strong convergence theorem. \begin{theorem}\label{strongconvergence} Let $X$ be a real uniformly convex Banach space and $K$ be a nonempty closed convex subset of $X.$ Let $\{T_j\}_{j=1}^N:K\to K$ be a finite family of uniformly $L_1-$Lipschitzian asymptotically quasi $I_j-$nonexpansive mappings with a common sequence $\{\lambda_n\}\subset[1,\infty)$ and $\{I_j\}_{j=1}^N:K\to K$ be a finite family of uniformly $L_2-$Lipschitzian asymptotically quasi-nonexpansive mappings with a common sequence $\{\mu_n\}\subset[1,\infty)$ such that $F=\bigcap\limits_{j=1}^N \left(F(T_j)\cap F(I_j)\right)\neq\emptyset.$ Suppose $B_{*}=\inf\limits_{n}\beta_n,$ $B^{*}=\sup\limits_{n}\beta_n,$ $\Lambda=\sup\limits_{n}\lambda_n\ge1,$ $M=\sup\limits_{n}\mu_n\ge1$ and $\{\alpha_n\},$ $\{\beta_n\},$ $\{\gamma_n\},$ $\{\widehat{\alpha}_n\},$ $\{\widehat{\beta}_n\},$ $\{\widehat{\gamma}_n\}$ are six sequences in $[0,1]$ which satisfy the following conditions: \begin{itemize} \item[(i)] $\alpha_n+\beta_n+\gamma_n=\widehat{\alpha}_n+\widehat{\beta}_n+\widehat{\gamma}_n=1, \ \ \ \forall n\ge1,$ \item[(ii)] $\sum\limits_{n=1}^{\infty}(\lambda_n\mu_n-1)\beta_n<\infty,$ \item[(iii)] $0<B_{*}\le B^{*}<\dfrac{1}{\Lambda^2M^2}<1,$ \item[(iv)] $0<\widehat B_{*}=\inf\limits_{n}\widehat\beta_n\le\sup\limits_{n}\widehat\beta_n=\widehat B^{*}<1,$ \item[(v)] $\sum\limits_{n=1}^\infty\gamma_n<\infty, \ \ \sum\limits_{n=1}^\infty\widehat{\gamma}_n<\infty.$ \end{itemize} If at least one of the mappings $\{T_j,I_j\}_{j=1}^N$ is semi-compact, then the implicit iterative sequence $\{x_n\}$ with errors defined by \eqref{implicitmap} converges strongly to a common fixed point of finite families of asymptotically quasi $I_j-$nonexpansive mappings $\{T_j\}_{j=1}^N$ and asymptotically quasi-nonexpansive mappings $\{I_j\}_{j=1}^N.$ \end{theorem} \begin{pf} Without loss of generality, we may assume that $T_1$ is semi-compact.
This with \eqref{xnTjxnxnIjxn} means that there exists a subsequence $\{x_{n_k}\}\subset\{x_n\}$ such that $x_{n_k}\to x^{*}$ strongly and $x^{*}\in K.$ Since $T_j,I_j$ are continuous, from \eqref{xnTjxnxnIjxn}, for all $j=\overline{1,N}$ we find $$\|x^{*}-T_jx^{*}\|=\lim\limits_{n_k\to\infty}\|x_{n_k}-T_jx_{n_k}\|=0, \ \ \ \ \|x^{*}-I_jx^{*}\|=\lim\limits_{n_k\to\infty}\|x_{n_k}-I_jx_{n_k}\|=0.$$ This shows that $x^{*}\in F.$ According to Lemma \ref{limexistsxnminusp} the limit $\lim\limits_{n\to\infty}\|x_n-x^{*}\|$ exists. Then $$\lim\limits_{n\to\infty}\|x_n-x^{*}\|=\lim\limits_{n_k\to\infty}\|x_{n_k}-x^{*}\|=0,$$ which means that $\{x_n\}$ converges to $x^{*}\in F.$ This completes the proof. \end{pf} \begin{remark} If we take $\gamma_n = 0$ for all $n\in\bn$ and $I_j=E$, $j=1,2,\dots,N$, then the above theorem becomes Theorem 3.3 due to Sun \cite{[Sun]}. If $I_j=E$, $j=1,2,\dots,N$ and $\{T_j\}_{j=1}^N$ are asymptotically nonexpansive, then we obtain the main results of \cite{CTL,gou}. \end{remark} \section*{Acknowledgments} The authors acknowledge the MOSTI grant 01-01-08-SF0079.
April 1998 Comic Book Sales to Comics Shops

Estimated Comics Preordered by North American Comics Shops, Based on Reports from Diamond Comic Distributors

April 1998 was a lackluster month in all regards, with comics preorders in the Top 300 off a full 25% from the previous April. Retailer reports to Comics Retailer magazine found high levels of sell-through for Superman Forever, Battle Chasers, and Spirit: The New Adventures, which would be the last hit for Kitchen Sink before its closure. Most notably, Diamond, reflecting the increased importance of trade paperbacks, began listing indexed sales for its Top 25.

This list includes all items on Diamond's Top 300 chart. Items marked (Res) were offered in an earlier month and resolicited; disregard any previously reported figure, as those earlier orders were canceled. Diamond did not provide dollar rankings for items after 250th place; "999" is entered for sorting purposes. The links lead to current listings for each issue on eBay. You can also find the books at your comics shop.

April 1998 Graphic Novel Sales to Comics Shops

Estimated Graphic Novels & Trade Paperbacks Preordered by North American Comics Shops, Based on Reports from Diamond Comic Distributors

This list includes all items on Diamond's Top 25 charts. If you don't see a book, Diamond released no data for it. The links lead to details about each title on Amazon. You can also find the books at your comics shop.

The first Top 25 Trade Paperback list was led by an adult title — and one which wasn't really a trade paperback, but rather a higher-priced comic, Verotik Illustrated #3. Often, comics over a certain cover price wound up in the graphic novel charts; this was the case as well, early on, with off variants like the $4 Vampi ashcan edition below.
\begin{document} \begin{center} \uppercase{\bf Previous Player's Positions of Impartial Three-Dimensional Chocolate-Bar Games } \vskip 20pt {\bf Ryohei Miyadera }\\ {\smallit Keimei Gakuin Junior and High School, Kobe City, Japan}. \\ {\tt runnerskg@gmail.com} \vskip 10pt {\bf Hikaru Manabe}. \\ {\smallit Keimei Gakuin Junior and High School, Kobe City, Japan}. \\ {\tt urakihebanam@gmail.com} \vskip 10pt {\bf Shunsuke Nakamura}. \\ {\smallit Independent Researcher, Tokyo, Japan }. \\ {\tt nakamura.stat@gmail.com} \end{center} \vskip 20pt \centerline{\smallit Received: , Revised: , Accepted: , Published: } \vskip 30pt \pagestyle{myheadings} \markright{\smalltt INTEGERS: 19 (2019)\hfill} \thispagestyle{empty} \baselineskip=12.875pt \vskip 30pt \centerline{\bf Abstract} \noindent In this study, we investigate three-dimensional chocolate bar games, which are variants of the game of Chomp. A three-dimensional chocolate bar is a three-dimensional array of cubes in which a bitter cubic box is present in some part of the bar. Two players take turns and cut the bar horizontally or vertically along the grooves. The player who manages to leave the opponent with a single bitter block is the winner. We consider the $\mathcal{P}$-positions of this game, where the $\mathcal{P}$-positions are positions of the game from which the previous player (the player who will play after the next player) can force a win, as long as they play correctly at every stage. We present sufficient conditions for the case when the position $\{p,q,r\}$ is a $\mathcal{P}$-position if and only if $(p-1) \oplus (q-1) \oplus (r-1)=0$, where $p, q$, and $r$ are the length, height, and width of the chocolate bar, respectively. \pagestyle{myheadings} \markright{\smalltt \hfill} \thispagestyle{empty} \baselineskip=12.875pt \vskip 30pt \section{Introduction}\label{introductionsection} Chocolate bar games are variants of the Chomp game presented in \cite{gale}.
A two-dimensional chocolate bar is a two-dimensional array of squares in which a bitter square printed in black is present in some part of the bar. See the chocolate bars in Figure \ref{two2dchoco}. A three-dimensional chocolate bar is a three-dimensional array of cubes in which a bitter cubic box printed in black is present in some parts of the bar. Figure \ref{two3dchoco} displays examples of three-dimensional chocolate bars. Games involving these chocolate bars may be defined as follows. \begin{defn}\label{definitionofchoco} (i) Two-dimensional chocolate bar game: Each player in turn breaks the bar in a straight line along the grooves and eats the broken piece. The player who manages to leave the opponent with a single bitter block (black block) is the winner. \\ (ii) Three-dimensional chocolate game: The rules are the same as in (i), except that the chocolate is cut horizontally or vertically along the grooves. Examples of cutting three-dimensional chocolate bars are shown in Figure \ref{3dcut}. \end{defn} \begin{pict} \begin{tabular}{cc} \begin{minipage}{.33\textwidth} \centering \includegraphics[height=1.4cm]{twochoco.eps} \label{two2dchoco} \end{minipage} \begin{minipage}{.33\textwidth} \centering \includegraphics[height=1.8cm]{two3d.eps} \label{two3dchoco} \end{minipage} \end{tabular} \end{pict} \begin{exam} Three methods of cutting a three-dimensional chocolate bar.\\ \begin{pict} \includegraphics[height=2.7cm]{3choco.eps} \label{3dcut} \end{pict} \noindent \end{exam} For completeness, we briefly review some of the necessary concepts of combinatorial game theory; refer to \cite{lesson} for greater detail. Let $Z_{\ge 0}$ and $N$ be sets of non-negative integers and natural numbers, respectively. \begin{defn}\label{definitionfonimsum11} Let $x$ and $y$ be non-negative integers. Expressing them in Base 2 yields $x = \sum_{i=0}^n x_i 2^i$ and $y = \sum_{i=0}^n y_i 2^i$ with $x_i,y_i \in \{0,1\}$. 
We define nim-sum $x \oplus y$ as: \begin{equation} x \oplus y = \sum\limits_{i = 0}^n {{w_i}} {2^i}. \end{equation} where $w_{i}=x_{i}+y_{i} \ (\bmod\ 2)$. \end{defn} As chocolate bar games are impartial games without draws, only two outcome classes are possible. \begin{defn}\label{NPpositions} $(a)$ A position is called a $\mathcal{P}$-\textit{position}, if it is a winning position for the previous player (the player who just moved), as long as he/she plays correctly at every stage.\\ $(b)$ A position is called an $\mathcal{N}$-\textit{position}, if it is a winning position for the next player, as long as he/she plays correctly at every stage. \end{defn} \begin{defn}\label{defofmexgrundy2} $(i)$ For any position $\mathbf{p}$ of game $\mathbf{G}$, there is a set of positions that can be reached by precisely one move in $\mathbf{G}$, which we denote as \textit{move}$(\mathbf{p})$. \\ $(ii)$ The \textit{minimum excluded value} ($\textit{mex}$) of a set $S$ of non-negative integers is the least non-negative integer that is not in S. \\ $(iii)$ Each position $\mathbf{p}$ of an impartial game has an associated Grundy number, and we denote this as $\mathcal{G}(\mathbf{p})$.\\ The Grundy number is recursively defined by $G(\mathbf{p}) = \textit{mex}\{G(\mathbf{h}): \mathbf{h} \in move(\mathbf{p})\}.$ \end{defn} \begin{theorem}\label{theoremofsumg2} For any position $\mathbf{p}$ of the game, $\mathcal{G}(\mathbf{p}) =0$ if and only if $\mathbf{p}$ is a $\mathcal{P}$-position. \end{theorem} The original two-dimensional chocolate bar introduced by Robin \cite{robin} is the chocolate shown on the left-hand side in Figure \ref{two2dchoco}. Because the horizontal and vertical grooves are independent, an $m \times n$ rectangular chocolate bar is equivalent to the game of Nim, which includes heaps of $m-1$ and $n-1$ stones, respectively. 
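As a quick computational check of the nim-sum definition and the Nim equivalence above, note that the nim-sum is simply the bitwise XOR of the operands, since XOR adds base-2 digits modulo 2 without carries. The following Python sketch (the function name \verb|nim_sum| is ours) verifies the Grundy values quoted here.

```python
# Nim-sum of non-negative integers: digit-wise addition modulo 2
# in base 2, realized as a bitwise XOR.
def nim_sum(*heaps):
    result = 0
    for h in heaps:
        result ^= h  # XOR adds the binary digits mod 2, with no carries
    return result

# The 6 x 4 rectangular bar is Nim with heaps of 5 and 3 stones,
# so its Grundy number is the nim-sum of 5 and 3:
print(nim_sum(5, 3))      # 6 (nonzero: an N-position)

# A three-heap Nim position is a P-position exactly when its
# nim-sum vanishes:
print(nim_sum(9, 3, 10))  # 0 (a P-position)
```

By the theorem above, a position is a $\mathcal{P}$-position if and only if its Grundy number is $0$, which for Nim means a vanishing nim-sum of the heap sizes.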
Therefore, the chocolate $6 \times 4$ bar game shown on the left-hand side of Figure \ref{two2dchoco} is mathematically the same as Nim, which includes heaps of $5$ and $3$ stones, respectively. It is well known that the Grundy number of the Nim game with heaps of $m-1$ stones and $n-1$ stones is $(m-1) \oplus (n-1)$; therefore, the Grundy number of the $m \times n$ rectangular bar is $(m-1) \oplus (n-1)$. Robin \cite{robin} also presented a cubic chocolate bar, as shown on the left-hand side of Figure \ref{two3dchoco}. It can be easily determined that this $5 \times 5 \times 5$ three-dimensional chocolate bar is mathematically the same as Nim with heaps of $4$, $4$, and $4$ stones, and the Grundy number of this cuboid bar is $4 \oplus 4 \oplus 4$. It is then natural to ask the following question. \vspace{0.5cm} \noindent \bf{Question 1. \ }\normalfont \textit{What is the necessary and sufficient condition whereby a three-dimensional chocolate bar may have a Grundy number $(p-1) \oplus (q-1) \oplus (r-1)$, where $p, q$, and $r$ are the length, height, and width of the bar, respectively?} \normalfont \vspace{0.5cm} Although the authors answered this question for two-dimensional chocolate bars in \cite{jgame} and for the three-dimensional case in \cite{ns3d}, the results of these studies are omitted here. When the Grundy number of a chocolate bar with $p, q$, and $r$ as the length, height, and width, respectively, is $(p-1) \oplus (q-1) \oplus (r-1)$, the position is a $\mathcal{P}$-position if and only if $(p-1) \oplus (q-1) \oplus (r-1)=0$ for the chocolate bar. Therefore, it is natural to ask the following question. \vspace{0.5cm} \noindent \bf{Question 2. 
\ }\normalfont \textit{Under what condition may a three dimensional chocolate bar with $p, q$, and $r$ as the length, height, and width, respectively, have a $\mathcal{P}$-position if and only if $(p-1) \oplus (q-1) \oplus (r-1)=0$?} \normalfont \vspace{0.5cm} In the remainder of this paper, we present a sufficient condition for which Question 2 may be answered. Determining the necessary and sufficient conditions for this question is a very difficult unsolved problem considered by the authors. We suppose that the difficulty of presenting the necessary and sufficient conditions arises from the fact that there are many kinds of sufficient conditions. For more information, see Theorems \ref{theoremforoddk} in Section \ref{sub4mone} and Conjecture \ref{theoremmanabe} in Section \ref{others}. We now define a three-dimensional chocolate bar. \begin{defn}\label{definitionoffunctionf3d} Suppose that $f(u,v)\in Z_{\geq0}$ for $u,v \in Z_{\geq0}$. $f$ is said to monotonically increase if $f(u,v) \leq f(x,z)$ for $x,z,u,v \in Z_{\geq0}$ with $u \leq x$ and $v \leq z$. \end{defn} \begin{defn}\label{defofbarwithfunc3d} Let $f$ be the monotonically increasing function in Definition \ref{definitionoffunctionf3d}.\\ Let $x,y,z \in Z_{\geq0}$. The three-dimensional chocolate bar comprises a set of $1 \times 1 \times 1$ boxes. For $u,w \in Z_{\geq0}$ such that $u \leq x$ and $w \leq z$, the height of the column at position $(u,w)$ is $ \min (f(u,w),y) +1$. There is a bitter box at position $(0,0)$. We denote this chocolate bar as $CB(f,x,y,z)$. Note that $x+1, y+1$, and $z+1$ are the length, height, and width of the bar, respectively. \end{defn} \begin{exam} Here, we let $f(x,z)$ $= \lfloor \frac{x+z}{3}\rfloor$, where $ \lfloor \ \rfloor$ is the floor function, and we present several examples of $CB(f,x,y,z)$. 
\begin{pict} \centering \includegraphics[height=3cm]{coordinate3d.eps} \label{coordinate3d} \end{pict} \begin{pict} \begin{tabular}{cc} \begin{minipage}{.33\textwidth} \centering \includegraphics[height=2.5cm]{f14310.eps} $CB(f,14,3,10)$ \label{f14310} \end{minipage} \begin{minipage}{.33\textwidth} \centering \includegraphics[height=2.5cm]{f9310.eps} $CB(f,9,3,10)$ \label{f9310} \end{minipage} \end{tabular} \end{pict} \begin{pict} \begin{tabular}{cc} \begin{minipage}{.33\textwidth} \centering \includegraphics[height=3cm]{f1367.eps} $CB(f,13,6,7)$ \label{f1367} \end{minipage} \begin{minipage}{.33\textwidth} \centering \includegraphics[height=2.2cm]{f437.eps} $CB(f,4,3,7)$ \label{f437} \end{minipage} \end{tabular} \end{pict} \end{exam} Next, we define $move_f(\{x, y, z\})$ in Definition \ref{movefor3dimension}. $move_f(\{x, y, z\})$ is a set that contains all of the positions that can be reached from position $\{x, y, z\}$ in one step (directly). \begin{defn}\label{movefor3dimension} For $x,y,z \in Z_{\ge 0}$, we define \begin{align} & move_f(\{x,y,z\})=\{\{u,\min(f(u,z),y),z \}:u<x \} \cup \{\{x,v,z \}:v<y \} \nonumber \\ & \cup \{ \{x,\min(y, f(x,w) ),w \}:w<z \}, \text{ where \ } u,v,w \in Z_{\ge 0}.\nonumber \end{align} \end{defn} \begin{rem} Definition \ref{movefor3dimension} shows how to reduce the coordinates of the chocolate bar by cutting, and in Example \ref{chococute}, we provide concrete examples of reducing the coordinates. \end{rem} \section{When $f(x,z) = \lfloor \frac{x+z}{k}\rfloor$ for $k =4m+3$}\label{sub4mone} Let $k = 4m + 3$ for some $m \in Z_{\geq 0}$. Let $x = \sum_{i=0}^n x_i 2^i$, $y = \sum_{i=0}^n y_i 2^i$ and $z = \sum_{i=0}^n z_i 2^i$ for some $n \in Z_{\ge 0}$ and $x_i,y_i,z_i \in \{0,1\}$. Throughout this section, we assume that \begin{equation} f(x,z) = \lfloor \frac{x+z}{k}\rfloor. \end{equation} Before we prove several lemmas, we first consider the procedures provided in Example \ref{chococute}. 
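Before working through the example that follows, it may help to enumerate $move_f$ computationally. The following Python sketch (the names \verb|f| and \verb|moves| are ours) implements Definition \ref{movefor3dimension} for $f(x,z)=\lfloor (x+z)/k \rfloor$.

```python
# A sketch of the move set move_f of the bar CB(f, x, y, z),
# assuming f(x, z) = floor((x + z) / k).
def moves(x, y, z, k=3):
    f = lambda u, w: (u + w) // k
    out = set()
    for u in range(x):                  # cut down the first coordinate
        out.add((u, min(f(u, z), y), z))
    for v in range(y):                  # cut down the height
        out.add((x, v, z))
    for w in range(z):                  # cut down the third coordinate
        out.add((x, min(y, f(x, w)), w))
    return out

# {9,3,10} is reachable from {14,3,10}: the height stays 3
# because min(f(9,10), 3) = min(6, 3) = 3.
print((9, 3, 10) in moves(14, 3, 10))  # True

# {4,3,7} is reachable from {13,6,7}: the height drops to
# min(f(4,7), 6) = min(3, 6) = 3.
print((4, 3, 7) in moves(13, 6, 7))    # True
```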
Although this example is lengthy, the proofs of the lemmas are difficult to understand without first considering this example. \begin{exam}\label{chococute} Let $f(x,z)$ $= \lfloor \frac{x+z}{3}\rfloor$. \\ $(i)$ We begin with the chocolate bar shown in Figure \ref{f14310}. If the first coordinate $x = 14$ is reduced to $u=9$ by cutting the chocolate bar $CB(f,14,3,10)$ shown in Figure \ref{f14310}, by Definition \ref{movefor3dimension} the second coordinate will be $\min(f(u,z),y)$ $= \min(f(9,10),3)$ $= \min(\lfloor \frac{19}{3}\rfloor,3)$ $= \min(6,3)$ = 3. Therefore, we can reduce $x = 14$ to $u=9$ without affecting the second coordinate $3$, which is the height of the chocolate bar, and we obtain the chocolate bar $CB(f,9,3,10)$ shown in Figure \ref{f9310} (i.e., $\{9,3,10\} \in move_f(\{14,3,10\})$). \begin{table} \begin{tabular}{cc} \begin{minipage}{.5\textwidth} \centering \begin{tabular}{|c|c|c|c|} \hline \text{ \ } & $x=14$ & $y=3$ & $z=10$ \\ \hline $2^3=8$ & $x_3=1$ & $y_3=0$ & $z_3=1$ \\ \hline $2^2=4$ & $x_2=1$ & $y_2=0$ & $z_2=0$ \\ \hline $2^1=2$ & $x_1=1$ & $y_1=1$ & $z_1=1$ \\ \hline $2^0=1$ & $x_0=0$ & $y_0=1$ & $z_0=0$ \\ \hline \end{tabular} \caption{$CB(f,14,3,10)$ }\label{3D1981} \end{minipage} \begin{minipage}{.5\textwidth} \centering \begin{tabular}{|c|c|c|c|} \hline \text{ \ } & $u=9$ & $y=3$ & $z=10$ \\ \hline $2^3=8$ & $u_3=1$ & $y_3=0$ & $z_3=1$ \\ \hline $2^2=4$ & $u_2=0$ & $y_2=0$ & $z_2=0$ \\ \hline $2^1=2$ & $u_1=0$ & $y_1=1$ & $z_1=1$ \\ \hline $2^0=1$ & $u_0=1$ & $y_0= 1$ &$z_0=0$ \\ \hline \end{tabular} \caption{$CB(9,3,10)$ }\label{3D19} \end{minipage} \end{tabular} \end{table} \noindent $(ii)$ We begin with the chocolate bar in Figure \ref{f1367}. If the first coordinate $x = 13$ is reduced to $u=4$ by cutting the chocolate bar $CB(f,13,6,7)$ in Figure \ref{f1367} by Definition \ref{movefor3dimension}, the second coordinate will be $\min(f(u,z),y)$ $= \min(f(4,7),6)$ $= \min(\lfloor \frac{11}{3}\rfloor,6) =\min(3,6) =3$. 
Therefore, the second coordinate, $6$, which is the height of the chocolate bar, will be reduced to $3$. Then, we obtain the chocolate bar shown in Figure \ref{f437} (i.e., $\{4,3,7\} \in move_f(\{13,6,7\})$). \begin{table} \begin{tabular}{cc} \begin{minipage}{.5\textwidth} \centering \begin{tabular}{|c|c|c|c|} \hline \text{ \ } & $x=13$ & $y=6$ & $z=7$ \\ \hline $2^3=8$ & $x_3=1$ & $y_3=0$ & $z_3=0$ \\ \hline $2^2=4$ & $x_2=1$ & $y_2=1$ & $z_2=1$ \\ \hline $2^1=2$ & $x_1=0$ & $y_1=1$ & $z_1=1$ \\ \hline $2^0=1$ & $x_0=1$ & $y_0=0$ & $z_0=1$ \\ \hline \end{tabular} \caption{$CB(f,13,6,7)$ }\label{3D1982} \end{minipage} \begin{minipage}{.5\textwidth} \centering \begin{tabular}{|c|c|c|c|} \hline \text{ \ } & $x=4$ & $y=3$ & $z=7$ \\ \hline $2^3=8$ & $u_3=0$ & $v_3=0$ & $z_3=0$ \\ \hline $2^2=4$ & $u_2=1$ & $v_2=0$ & $z_2=1$ \\ \hline $2^1=2$ & $u_1=0$ & $v_1=1$ & $z_1=1$ \\ \hline $2^0=1$ & $u_0=0$ & $v_0=1$ & $z_0=1$ \\ \hline \end{tabular} \caption{$CB(f,4,3,7)$ }\label{3D19b} \end{minipage} \end{tabular} \end{table} \noindent $(iii)$ The procedures presented in $(i)$ and $(ii)$ are good examples of moving to a position whose nim-sum is 0 from a position whose nim-sum is not 0. In $(i)$, $14 \oplus 3 \oplus 10 = 7$, and suppose that the player wants to move to a position whose nim-sum is 0. First, let $u_3= x_3 = 1$. Next, reduce $x_2 =1$ to $u_2 = 0$. Note that \begin{equation}\label{xbigeru0} x = \sum_{i=0}^3 x_i 2^i = 2^3 + 2^2 + 2 > 2^3 + 0 \times 2^2 + u_1 \times 2 + u_0 = \sum_{i=0}^3 u_i 2^i =u \end{equation} regardless of the values of $u_1, u_0$. Then, reduce $x_1$ to $u_1=0$ and increase $x_0 = 0$ to $u_0 =1$. Note that by considering (\ref{xbigeru0}), one can choose any value for $u_1, u_0$. Then, we obtain the position $\{9,3,10\}$ such that $9 \oplus 3 \oplus 10 = 0$. In $(ii)$, $13 \oplus 6 \oplus 7 = 12$, and suppose that it is desired to move to a position whose nim-sum is 0. First, we have to reduce $x_3=1$ to $u_3=0$. 
Because $\{x_2,y_2,z_2\} = \{1,1,1\}$ and $1 \oplus 1 \oplus 1 \ne 0 \ (\mod 2)$, we may let $\{u_2,y_2,z_2\} = \{0,1,1\}$ or $\{u_2,v_2,z_2\} = \{1,0,1\}$ by reducing $y$ to $v$. Note that once we reduce $x$, we cannot reduce $z$. If \begin{equation}\label{firstcase} \{u_2,y_2,z_2\} = \{0,1,1\}, \end{equation} we have \begin{align} & f( u, z )\\ \nonumber & = f( \sum_{i=0}^3 u_i 2^i, \sum_{i=0}^3 z_i 2^i )\\ \nonumber & = f(0 \times 2^3 + 0 \times 2^2 + u_1 2^1 + u_0 2^0, 7) \\ \nonumber & = \lfloor \frac{7+u_1 2^1 + u_0 2^0}{3}\rfloor \leq \lfloor \frac{10}{3}\rfloor=3, \label{xbigeru} \end{align} regardless of the values of $u_1, u_0$. We then have $f(u,z) < 4 = y_2 2^2 \leq y$. Therefore, by Definition \ref{movefor3dimension} (\ref{firstcase}) leads to a contradiction. We should then let \begin{equation}\label{secondcase} \{u_2,v_2,z_2\} = \{1,0,1\}, \end{equation} by simultaneously reducing $x$ and $y$. Next, we let $\{u_1,v_1,z_1\} = \{0,1,1\}$ or $\{u_1,v_1,z_1\} = \{1,0,1\}$. If $\{u_1,v_1,z_1\} = \{1,0,1\}$, by (\ref{secondcase}) \begin{align} & f(u, z)\\ \nonumber & = f( \sum_{i=0}^3 u_i 2^i, \sum_{i=0}^3 z_i 2^i )\\ \nonumber & = f( 0 \times 2^3 + 2^2 + 2^1 + u_0 2^0, 7) \\ \nonumber & = \lfloor \frac{13 + u_0 2^0}{3}\rfloor \geq \lfloor \frac{13}{3}\rfloor \\ \nonumber & =4 > 1 \geq \sum_{i=0}^3 v_i 2^i = 0 \times 2^3 + 0 \times 2^2 + 0 \times 2 + v_0 =v, \label{xbigeru} \end{align} and we obtain \begin{equation}\label{ysmallth} f(u,z) > v. \end{equation} When we reduce $x$ to $u$ and $y$ to $v$, by definition \ref{movefor3dimension} we have \begin{equation} v = \min(f(u,z),y), \nonumber \end{equation} and this contradicts (\ref{ysmallth}). Therefore, let $\{u_1,v_1,z_1\} = \{0,1,1\}$. Using similar reasoning, we let $\{u_0,v_0,z_0\} = \{0,1,1\}$. 
We then obtain the position $\{4,3,7\}$ such that $4 \oplus 3 \oplus 7 = 0$ and $v = 3 = \lfloor \frac{4+7}{3}\rfloor = f(u,z).$ \end{exam} We define \begin{equation} S_t = \sum_{i=n-t}^n (x_i + z_i - ky_i) 2^i, \label{defofs}, \end{equation} for $t = 0,1, \cdots, n$. \begin{lemma}\label{lemmaforf} We have the following relationships between $f(x,z)$ and $S_n$.\\ $(a)$ \begin{equation} y = f(x,z) \nonumber \end{equation} if and only if $0 \leq S_n < k$.\\ $(b)$ \begin{equation} y > f(x,z) \nonumber \end{equation} if and only if $S_n < 0$.\\ $(c)$ \begin{equation} y < f(x,z) \nonumber \end{equation} if and only if $S_n \geq k$. \end{lemma} \begin{proof} First, note that $S_n = \sum_{i=0}^n (x_i + z_i - ky_i) 2^i = x + z - ky$.\\ $(a)$ \begin{equation} y = f(x,z) = \lfloor \frac{x+z}{k}\rfloor \nonumber \end{equation} if and only if $y \leq \frac{x+z}{k} < y+1$ if and only if $0 \leq S_{n}= x+z-ky < k$.\\ $(b)$ \begin{equation} y > f(x,z) = \lfloor \frac{x+z}{k}\rfloor \nonumber \end{equation} if and only if $\frac{x+z}{k} < y$, which occurs if and only if $ S_{n}= x+z-ky < 0$.\\ We then obtain $(c)$ via $(a)$ and $(b)$. \end{proof} \begin{lemma}\label{lemma01} Let $t \in Z_{\geq 0}$. Suppose that for $i = n,n-1, \cdots, n-t$ \begin{equation} x_i \oplus y_i \oplus z_i=0. \label{oplus01} \end{equation} There then exists an even number $a$ such that \begin{equation} S_t = a 2^{n-t}.\label{evenconditiont} \end{equation} \end{lemma} \begin{proof} Because $k$ is odd, by (\ref{oplus01}) $x_{i} + z_{i}-k y_{i}$ is even for $i = n, n-1, \cdots, n-t$, and therefore we have (\ref{evenconditiont}). \end{proof} \begin{lemma}\label{lemma1} Let $t \in Z_{\geq 0}$. 
Suppose that for $i = n,n-1, \cdots, n-t$ \begin{equation} x_i \oplus y_i \oplus z_i=0 \label{oplus1} \end{equation} and \begin{equation} S_t < 0.\label{negativeconditiont} \end{equation} Then, for any natural number $j$ such that $j > t$, \begin{equation} S_j < 0.\nonumber \end{equation} \end{lemma} \begin{proof} By Lemma \ref{lemma01}, (\ref{oplus1}), and (\ref{negativeconditiont}), \begin{equation} S_t = a2^{n-t} \label{evencondition} \end{equation} for some even number $a$ such that $a \leq -2$. Then, by (\ref{evencondition}), for any natural number $j$ such that $j > t$ the following holds: \begin{align} S_j = & S_t + \sum_{i=n-j}^{n-t-1} (x_i + z_i - ky_i) 2^i \nonumber \\ \leq & S_t + 2 \times \sum_{i=0}^{n-t-1} 2^i \nonumber \\ \leq & (-2)2^{n-t} + 2 \times (2^{n-t} -1)= -2 < 0. \nonumber \end{align} \end{proof} \begin{lemma}\label{lemma1b} Let $t \in Z_{\geq 0}$. Suppose that for $i = 0,1, \cdots, n-t$ \begin{equation} x_i \oplus y_i \oplus z_i=0 \label{oplus1b} \end{equation} and \begin{equation} y \leq f(x,z).\label{ysmallerthanxz} \end{equation} Then, \begin{equation} S_t \geq 0. \label{bigthan0} \end{equation} \end{lemma} \begin{proof} If \begin{equation} S_t < 0, \label{smalthan0} \end{equation} by (\ref{oplus1b}) and Lemma \ref{lemma1}, we have \begin{equation} S_n< 0. \nonumber \end{equation} Then by $(b)$ of Lemma \ref{lemmaforf}, we have \begin{equation} y > f(x,z), \nonumber \end{equation} and this contradicts (\ref{ysmallerthanxz}). Therefore, (\ref{smalthan0}) is not true, and we have (\ref{bigthan0}). \end{proof} \begin{lemma}\label{lemma2} Let $t \in Z_{\geq 0}$. If \begin{equation} S_t \geq k2^{n-t},\label{greaterthank} \end{equation} then, for any natural number $j$ such that $j > t$, \begin{equation} S_j \geq k2^{n-j}.
\label{greaterthanj} \end{equation} \end{lemma} \begin{proof} $S_j$ will be smallest when $\{x_i,y_i,z_i\}$ = $\{0,1,0\}$ for $i = n-t-1,n-t-2, \cdots, n-j.$ Therefore, it is sufficient to prove (\ref{greaterthanj}) for this case. By (\ref{greaterthank}), for any natural number $j$ such that $j > t$, \begin{align} S_j = & S_t + \sum_{i=n-j}^{n-t-1} (x_i + z_i - ky_i) 2^i \nonumber \\ \geq & S_t - k(2^{n-t-1}+2^{n-t-2} + \cdots +2^{n-j}) \nonumber \\ \geq & k2^{n-t} - k(2^{n-t}-2^{n-j}) = k2^{n-j}. \nonumber \end{align} \end{proof} \begin{lemma}\label{lemma3} Let $t \in Z_{\geq 0}$. Suppose that \begin{equation} 0 \leq S_t \leq 2m \times 2^{n-t}.\nonumber \end{equation} Then, we have the following cases $(a)$ and $(b)$.\\ $(a)$ If $\{x_{n-t-1},y_{n-t-1},z_{n-t-1}\} = \{1,1,0\}$ or $\{0,1,1\}$, then \begin{equation} S_{t+1}< 0.\nonumber \end{equation} $(b)$ If $\{x_{n-t-1},y_{n-t-1},z_{n-t-1}\} = \{1,0,1\}$ or $\{0,0,0\}$, then \begin{equation} 0 \leq S_{t+1} < k \times 2^{n-t-1}.\nonumber \end{equation} \end{lemma} \begin{proof} $(a)$ If $\{x_{n-t-1},y_{n-t-1},z_{n-t-1}\} = \{1,1,0\}$ or $\{0,1,1\}$, \begin{align} S_{t+1} = & S_t +2^{n-t-1}-k \times 2^{n-t-1} \nonumber \\ \leq & 2m \times 2^{n-t} + 2^{n-t-1}-(4m+3)2^{n-t-1} \nonumber \\ = & -2 \times 2^{n-t-1} <0.\nonumber \end{align} $(b)$ If $\{x_{n-t-1},y_{n-t-1},z_{n-t-1}\} = \{0,0,0\}$, \begin{align} 0 & \leq S_{t+1} \nonumber \\ & = S_t \leq 4m \times 2^{n-t-1} \nonumber \\ & < k \times 2^{n-t-1}. \nonumber \end{align} If \begin{equation} \{x_{n-t-1},y_{n-t-1},z_{n-t-1}\} = \{1,0,1\},\nonumber \end{equation} \begin{align} 0 & \leq S_{t+1} \nonumber \\ & = S_t + 2 \times 2^{n-t-1} \nonumber \\ & \leq (4m+2)2^{n-t-1} \nonumber \\ & < k \times 2^{n-t-1}. \nonumber \end{align} \end{proof} \begin{lemma}\label{lemma4} Let $t \in Z_{\geq 0}$.
Suppose that \begin{equation} (2m+2) \times 2^{n-t} \leq S_t < k \times 2^{n-t} \label{greaterthank22b} \end{equation} and \begin{equation} x_i \oplus y_i \oplus z_i=0 \label{oplus001} \end{equation} for $i = n,n-1, \cdots, n-t$. Then, we have the following cases $(a)$ and $(b)$.\\ $(a)$ If $\{x_{n-t-1},y_{n-t-1},z_{n-t-1}\} = \{1,1,0\}$ or $\{0,1,1\}$, then, \begin{equation} 0 \leq S_{t+1} < k \times 2^{n-t-1}.\nonumber \end{equation} $(b)$ If $\{x_{n-t-1},y_{n-t-1},z_{n-t-1}\}= \{1,0,1\}$ or $\{0,0,0\}$, then, \begin{equation} S_{t+1} \geq k \times 2^{n-t-1}.\nonumber \end{equation} \end{lemma} \begin{proof} \noindent $(a)$ If $\{x_{n-t-1},y_{n-t-1},z_{n-t-1}\}= \{1,1,0\}$ or $\{0,1,1\}$, then \begin{align} S_{t+1} & = S_t +2^{n-t-1}-k2^{n-t-1} \nonumber \\ & = S_t - (4m+2)2^{n-t-1}. \nonumber \end{align} By (\ref{greaterthank22b}) \begin{align} & (2m+2)\times 2^{n-t} - (4m+2)2^{n-t-1} \nonumber \\ & \leq S_{t+1} = S_t -(4m+2)2^{n-t-1} \nonumber \\ & < (4m+3)2^{n-t} -(4m+2)2^{n-t-1}. \nonumber \end{align} Therefore, \begin{equation} 2 \times 2^{n-t-1} \leq S_{t+1} < (4m+4)2^{n-t-1}. \label{eqa1} \end{equation} By (\ref{oplus001}) and Lemma \ref{lemma01}, $S_{t+1} = a2^{n-t-1}$ for some even integer $a$, and therefore by (\ref{eqa1}) we have \begin{equation} 0< S_{t+1} \leq (4m+2)2^{n-t-1} < k\times 2^{n-t-1}. \nonumber \end{equation} $(b)$ If $\{x_{n-t-1},y_{n-t-1},z_{n-t-1}\} = \{1,0,1\}$ or $\{0,0,0\}$, then by (\ref{greaterthank22b}) \begin{equation} k \times 2^{n-t-1} < (4m+4)2^{n-t-1} \leq S_t \leq S_{t+1}. \nonumber \end{equation} \end{proof} \begin{lemma}\label{fromNtoPforh} We assume that \begin{equation} x \oplus y \oplus z \neq 0 \label{nimsumno0} \end{equation} and \begin{equation}\label{inequalityyz} y \leq f(x,z).
\end{equation} Then, at least one of the following statements is true:\\ $(1)$ $u\oplus y \oplus z= 0$ for some $u\in Z_{\geq 0}$ such that $u<x$;\\ $(2)$ $u \oplus v \oplus z= 0$ for some $u, v \in Z_{\geq 0}$ such that $u < x,v < y$ and $v=f(u,z)$;\\ $(3)$ $x\oplus v\oplus z= 0$ for some $v\in Z_{\geq 0}$ such that $v<y$;\\ $(4)$ $x\oplus y\oplus w= 0$ for some $w\in Z_{\geq 0}$ such that $w<z$ and $y \leq f(x,w)$;\\ $(5)$ $x\oplus v\oplus w= 0$ for some $v,w \in Z_{\geq 0}$ such that $v<y,w <z$ and $v=f(x,w)$.\\ \end{lemma} \begin{proof} Let $x= \sum\limits_{i = 0}^n {{x_i}} {2^i}$, $y= \sum\limits_{i = 0}^n {{y_i}} {2^i}$, and $z= \sum\limits_{i = 0}^n {{z_i}} {2^i}$. If $n=0$, we have $x,z \leq 1$, and $y \leq f(x,z) = \lfloor \frac{x+z}{k}\rfloor = 0$. Then, by (\ref{nimsumno0}), we have $\{x,y,z\} = \{1,0,0\}$ or $\{0,0,1\}$. In this case we obtain $(1)$ for $\{u,y,z\} = \{0,0,0\}$ or $(4)$ for $\{x,y,w\} = \{0,0,0\}$ by reducing $x=1$ to $u=0$ or reducing $z=1$ to $w =0$. Next, we assume that $n \geq 1$ and that there exists a non-negative integer $t$ such that \begin{equation}\label{zerotonminusut1} x_i \oplus y_i \oplus z_i = 0 \end{equation} for $i = n,n-1,...,n-t+1$ and \begin{equation}\label{xyznminush} x_{n-t} \oplus y_{n-t} \oplus z_{n-t} \neq 0. \end{equation} Let $S_j = \sum_{i=n-j}^n (x_i + z_i - ky_i) 2^i$ for $j=0,1,\cdots ,t-1$, and we may then define $S_j$ for $j=t,t+1,\cdots ,n$. By (\ref{zerotonminusut1}), (\ref{inequalityyz}), and Lemma \ref{lemma1b}, we have \begin{equation}\label{xyznminush22} S_{t-1} \geq 0. \end{equation} We then have three cases.\\ \underline{Case $(1)$} Suppose that $\{x_{n-t},y_{n-t},z_{n-t}\}=\{1,0,0\}$. We reduce $x$ to $u$ and for $i = 0,1, \cdots, t-1$ let \begin{equation} u_{n-i} = x_{n-i}, \nonumber \end{equation} and we define $u_{n-i}$ for $i = t,t+1, \cdots, n$ using an inductive method with the following steps $[I]$ and $[II]$.\\ Step $[I]$ Let $u_{n-t}=0$.
Then, \begin{equation} u= \sum\limits_{i = 0}^n {{u_i}} {2^i}< \sum\limits_{i = 0}^n {{x_i}} {2^i}=x.\nonumber \end{equation} Because $u_{n-t}=0$ and $y_{n-t}=z_{n-t}=0$, by (\ref{xyznminush22}) we have \begin{equation}\label{xyznminush22b} S_{t} = S_{t-1}+(u_{n-t}+z_{n-t}-ky_{n-t})2^{n-t} = S_{t-1} \geq 0.\nonumber \end{equation} We then consider two subcases $(1.1)$ and $(1.2)$ according to the value of $S_{t}$.\\ \underline{Subcase $(1.1)$} We suppose that \begin{equation} 0 \leq S_{t} \leq 2m \times 2^{n-t}.\label{from0to2t} \end{equation} We then have two subsubcases for two possible values of $z_{n-t-1}$. \\ \underline{Subsubcase $(1.1.1)$} Suppose that $z_{n-t-1} = 0$. We have two subsubsubcases for two possible values of $y_{n-t-1}$. \\ \underline{Subsubsubcase $(1.1.1.1)$} If $y_{n-t-1} = 0$, let \begin{equation} \{u_{n-t-1}, y_{n-t-1},z_{n-t-1}\}=\{0,0,0\} \nonumber \end{equation} and \begin{equation} S_{t+1}= S_{t}+(u_{n-t-1}+z_{n-t-1}-ky_{n-t-1})2^{n-t-1}. \nonumber \end{equation} Then, by Lemma \ref{lemma3} and (\ref{from0to2t}) \begin{equation}\label{condition01} 0 \leq S_{t+1} < k2^{n-t-1}. \end{equation} Then, we begin Step $[II]$ with (\ref{condition01}) while knowing that $y$ has not been reduced.\\ \underline{Subsubsubcase $(1.1.1.2)$} If $y_{n-t-1} = 1$, let $v_{n-t-1}=0<y_{n-t-1}$. Then, we have \begin{equation} \sum\limits_{i = 0}^n {{v_i}} {2^i} < \sum\limits_{i = 0}^n {{y_i}} {2^i} \nonumber \end{equation} for any values of $v_{i}$ for $i = 0,1, \cdots, n-t-1$. In this subsubsubcase, we reduce $y$ to $v$ by reducing $x$ to $u$. For a concrete example of reducing $y$ to $v$ by reducing $x$ to $u$, see $(ii)$ and $(iii)$ in Example \ref{chococute}.
Let \begin{equation} \{u_{n-t-1}, v_{n-t-1},z_{n-t-1}\}=\{0,0,0\} \nonumber \end{equation} and \begin{equation} S_{t+1}= S_{t}+(u_{n-t-1}+z_{n-t-1}-kv_{n-t-1})2^{n-t-1},\nonumber \end{equation} then by Lemma \ref{lemma3} and (\ref{from0to2t}) \begin{equation} 0 \leq S_{t+1} < k2^{n-t-1}.\label{condition02} \end{equation} Then, we begin Step $[II]$ with (\ref{condition02}) while knowing that we have reduced $y$ to $v$.\\ \underline{Subsubcase $(1.1.2)$} Suppose that $z_{n-t-1} = 1$. We have two subsubsubcases for two possible values of $y_{n-t-1}$. \\ \underline{Subsubsubcase $(1.1.2.1)$} If $y_{n-t-1} = 0$, let \begin{equation} \{u_{n-t-1}, y_{n-t-1},z_{n-t-1}\}=\{1,0,1\} \nonumber \end{equation} and \begin{equation} S_{t+1}= S_{t}+(u_{n-t-1}+z_{n-t-1}-ky_{n-t-1})2^{n-t-1}, \nonumber \end{equation} then, by Lemma \ref{lemma3} and (\ref{from0to2t}) \begin{equation} 0 \leq S_{t+1} < k 2^{n-t-1}.\label{condition03} \end{equation} Then, we begin Step $[II]$ with (\ref{condition03}) while knowing that $y$ has not been reduced.\\ \underline{Subsubsubcase $(1.1.2.2)$} If $y_{n-t-1} = 1$, let $v_{n-t-1}=0<y_{n-t-1}$. Then, we have \begin{equation} v=\sum\limits_{i = 0}^n {{v_i}} {2^i} < \sum\limits_{i = 0}^n {{y_i}} {2^i}=y \nonumber \end{equation} for any values of $v_{i}$ for $i = 0,1, \cdots, n-t-1$. In this subsubsubcase, we reduce $y$ to $v$ by reducing $x$ to $u$. For a concrete example of reducing $y$ to $v$ by reducing $x$ to $u$, see $(ii)$ and $(iii)$ in Example \ref{chococute}. Then, let \begin{equation} \{u_{n-t-1}, v_{n-t-1},z_{n-t-1}\}=\{1,0,1\} \nonumber \end{equation} and \begin{equation} S_{t+1}= S_{t}+(u_{n-t-1}+z_{n-t-1}-kv_{n-t-1})2^{n-t-1}, \nonumber \end{equation} then, by Lemma \ref{lemma3} and (\ref{from0to2t}) \begin{equation} 0 \leq S_{t+1} < k 2^{n-t-1}. 
\label{condition04} \end{equation} Then, we begin Step $[II]$ with (\ref{condition04}) while knowing that we have reduced $y$ to $v$.\\ \underline{Subcase $(1.2)$} We suppose that \begin{equation} (2m+2)2^{n-t} \leq S_{t} < k2^{n-t}.\label{biggerth2m2} \end{equation} We have two subsubcases for two possible values of $z_{n-t-1}$. \\ \underline{Subsubcase $(1.2.1)$} Suppose that $z_{n-t-1} = 0$. We have two subsubsubcases for two possible values of $y_{n-t-1}$. \\ \underline{Subsubsubcase $(1.2.1.1)$} If $y_{n-t-1} = 1$, let \begin{equation} \{u_{n-t-1}, y_{n-t-1},z_{n-t-1}\}=\{1,1,0\} \nonumber \end{equation} and \begin{equation} S_{t+1}= S_{t}+(u_{n-t-1}+z_{n-t-1}-ky_{n-t-1})2^{n-t-1},\nonumber \end{equation} then, by Lemma \ref{lemma4} and (\ref{biggerth2m2}) \begin{equation} 0 \leq S_{t+1} < k2^{n-t-1}.\label{condition05} \end{equation} Then, we begin Step $[II]$ with (\ref{condition05}) while knowing that $y$ has not been reduced.\\ \underline{Subsubsubcase $(1.2.1.2)$} If $y_{n-t-1} = 0$, let \begin{equation} \{u_{n-t-1}, y_{n-t-1},z_{n-t-1}\}=\{0,0,0\}.\nonumber \end{equation} By Lemma \ref{lemma4} and (\ref{biggerth2m2}) \begin{equation} S_{t+1} \geq k2^{n-t-1}.\label{condition06} \end{equation} Then, we begin Step $[II]$ with (\ref{condition06}) while knowing that $y$ has not been reduced.\\ \underline{Subsubcase $(1.2.2)$} Suppose that $z_{n-t-1} = 1$. We have two subsubsubcases for two possible values of $y_{n-t-1}$. 
\\\\ \underline{Subsubsubcase $(1.2.2.1)$} If $y_{n-t-1} = 1$, let \begin{equation} \{u_{n-t-1}, y_{n-t-1},z_{n-t-1}\}=\{0,1,1\}\nonumber \end{equation} and \begin{equation} S_{t+1}= S_{t}+(u_{n-t-1}+z_{n-t-1}-ky_{n-t-1})2^{n-t-1},\nonumber \end{equation} then, by Lemma \ref{lemma4} and (\ref{biggerth2m2}) \begin{equation} 0 \leq S_{t+1} < k2^{n-t-1}.\label{condition07} \end{equation} Then, we begin Step $[II]$ with (\ref{condition07}) while knowing that $y$ has not been reduced.\\ \underline{Subsubsubcase $(1.2.2.2)$} If $y_{n-t-1} = 0$, let \begin{equation} \{u_{n-t-1}, y_{n-t-1},z_{n-t-1}\}=\{1,0,1\}.\nonumber \end{equation} By Lemma \ref{lemma4} and (\ref{biggerth2m2}) \begin{equation} S_{t+1} \geq k 2^{n-t-1}.\label{condition08} \end{equation} Then, we begin Step $[II]$ with (\ref{condition08}) while knowing that $y$ has not been reduced.\\ \underline{Case $(2)$} We suppose that $\{x_{n-t},y_{n-t},z_{n-t}\}=\{0,0,1\}$. Then, we can use the same method used for Case $(1)$.\\ \underline{Case $(3)$} We suppose that $y_{n-t}=1$. Let $v_{n-t}=0 < y_{n-t}$ and $v_{i}=x_{i}+z_{i}$ (mod 2) for $i=n-t-1, \cdots, 0$. Then, we have $x\oplus v\oplus z= 0$ and $v < y \leq f(x,z)$, and we have $(3)$ of this lemma. In this case, we do not need Step $[II]$.\\ Step $[II]$. We have two cases.\\ \underline{Case $(1)$} This is a sequel to Case $(1)$ of Step $[I]$. Here, the procedure consists of three subcases.\\ \underline{Subcase $(1.1)$} Suppose that $S_{t+1} \geq k 2^{n-t-1}$. In this case, $y$ is not reduced to $v$ in the last procedure, i.e., Step $[I]$. By Lemma \ref{lemma2}, we let $u_{i}=y_i + z_i \ (\mod 2)$ for $i=n-t-2, \cdots, 0$ without affecting the values of $y_{i}$ for $i=n-t-2, \cdots, 0$. Then, we have $(1)$ for this lemma.\\ \underline{Subcase $(1.2)$} Suppose that $0 \leq S_{t+1} < k 2^{n-t-1}$ and $y$ was reduced to $v$ in Step $[I]$.
Then, we choose the values of $u_{n-i}, v_{n-i}$ for $i = t+2, t+3, \cdots, n$ such that $0 \leq S_{i} < k 2^{n-i}$ by the following $(a)$ or $(b)$.\\ $(a)$ For $i \geq t+1$, if $0 \leq S_{i} \leq 2m \times 2^{n-i}$, then we let $\{u_{n-i-1},v_{n-i-1},z_{n-i-1} \}$ $= \{0,0,0\}$ or $\{1,0,1\}$ when $z_{n-i-1}=0$ or $z_{n-i-1}=1$, respectively. Then, by Lemma \ref{lemma3}, we have $0 \leq S_{i+1} < k 2^{n-i-1}$.\\ $(b)$ For $i \geq t+1$, if $ S_{i} \geq (2m+2) \times 2^{n-i}$, then we let $\{u_{n-i-1},v_{n-i-1},z_{n-i-1} \}$ $= \{1,1,0\}$ or $\{0,1,1\}$ when $z_{n-i-1}=0$ or $z_{n-i-1}=1$, respectively. Then, by Lemma \ref{lemma4}, we have $0 \leq S_{i+1} < k 2^{n-i-1}$. Therefore, for $i = t+2, \cdots, n$, we have $0 \leq S_{i} < k 2^{n-i}$, and finally we have $0 \leq S_{n} < k 2^{n-n}=k$. We then have $v = f(u,z)$ and $u \oplus v \oplus z = 0$; therefore, we have $(2)$ of this lemma.\\ \underline{Subcase $(1.3)$} Suppose that $0 \leq S_{t+1} < k 2^{n-t-1}$ and $y$ was not reduced to $v$ during the last procedure. In this case, we use the same method as in Step $[I]$. \\ \underline{Case $(2)$} This is a sequel to Case $(2)$ of Step $[I]$. Then, we can use the same method used for Case $(1)$ of Step $[II]$. \end{proof} \begin{lemma}\label{fromPtoNforh} We assume that $x \oplus y \oplus z = 0$ and \begin{equation}\label{inequalityyz2} y \leq f(x,z). \end{equation} Then, the following hold:\\ $(1)$ $u\oplus y \oplus z \neq 0$ for any $u\in Z_{\geq 0}$ such that $u<x$;\\ $(2)$ $u \oplus v \oplus z \neq 0$ for any $u, v \in Z_{\geq 0}$ such that $u < x,v < y$, and $v=f(u,z)$;\\ $(3)$ $x\oplus v\oplus z \neq 0$ for any $v\in Z_{\geq 0}$ such that $v<y$;\\ $(4)$ $x\oplus y\oplus w \neq 0$ for any $w\in Z_{\geq 0}$ such that $w<z$ and $y \leq f(x,w)$;\\ $(5)$ $x\oplus v\oplus w \neq 0$ for any $v,w \in Z_{\geq 0}$ such that $v<y,w <z$, and $v=f(x,w)$.
\end{lemma} \begin{proof} If $x \oplus y \oplus z = 0$, a positive value of the nim-sum is obtained by changing the value of one of $x,y,z$. Therefore, we have $(1)$, $(3)$, and $(4)$. Next, we prove $(2)$. The only way to have $u \oplus v \oplus z = 0$ for some $u, v \in Z_{\geq 0}$ such that $u < x,v < y$ and \begin{equation}\label{equalityfor} v=f(u,z) \end{equation} is to reduce $\{x_{n-t},y_{n-t},z_{n-t}\} = \{1,1,0\}$ to $\{u_{n-t},v_{n-t},z_{n-t}\} = \{0,0,0\}$. We consider two cases.\\ \underline{Case $(1)$} Suppose that $0 \leq S_{t-1} \leq 2m \times 2^{n-t+1}$. Then, for $\{x_{n-t},y_{n-t},z_{n-t}\} = \{1,1,0\}$ by Lemma \ref{lemma3}, we have $S_{t}< 0$. Then, by Lemmas \ref{lemma1} and \ref{lemmaforf}, we have $y > f(x,z)$. This contradicts (\ref{inequalityyz2}).\\ \underline{Case $(2)$} Suppose that $ S_{t-1} \geq (2m+2) \times 2^{n-t+1}$. For $\{u_{n-t},v_{n-t},z_{n-t}\} = \{0,0,0\}$ by Lemma \ref{lemma4}, $S_{t} \geq k \times 2^{n-t}$; therefore, by Lemma \ref{lemma2}, $S_n \geq k$. Using Lemma \ref{lemmaforf}, we then have $v < f(u,z)$. This contradicts (\ref{equalityfor}). Similarly we can prove $(5)$. \end{proof} \begin{theorem}\label{theoremforoddk} Let $f(x,z) = \lfloor \frac{x+z}{k}\rfloor$ for $k = 4m+3$. Then, the chocolate bar $CB(f,x,y,z)$ is a $\mathcal{P}$-position if and only if \begin{equation} x \oplus y \oplus z=0. \label{nscondtionfornim0} \end{equation} \end{theorem} \begin{proof} Let $A_k=\{\{x,y,z\}:x\oplus y \oplus z = 0\}$ and $B_k =\{\{x,y,z\}: x\oplus y \oplus z \neq 0\}$. If we begin the game with a position $\{x,y,z\}\in A_{k}$, then using Lemma \ref{fromPtoNforh}, any option leads to a position $\{p,q,r\} \in B_k$. From this position $\{p,q,r\}$ by Lemma \ref{fromNtoPforh}, our opponent can choose an appropriate option that leads to a position in $A_k$. Note that any option reduces some of the numbers in the coordinates.
In this way, our opponent can always reach a position in $A_k$, and finally, they win by reaching $\{0,0,0\}\in A_{k}$. Therefore, $A_k$ is the set of $\mathcal{P}$-positions. If we begin the game with a position $\{x,y,z\}\in B_{k}$, then by Lemma \ref{fromNtoPforh}, we can choose an appropriate option that leads to a position $\{p,q,r\}$ in $A_k$. From $\{p,q,r\}$, any option chosen by our opponent leads to a position in $B_k$. In this way, we win the game by reaching $\{0, 0, 0\}$. Therefore, $B_k$ is the set of $\mathcal{N}$-positions.\\ \end{proof} \section{Other cases}\label{others} The result shown in Section \ref{sub4mone} depends on the assumption that $k$ is odd, but it seems that a similar result can be proven for an even number $k$ with a restriction on the size of $x,z$. The authors discovered the following conjecture via calculations using the computer algebra system Mathematica, but they have not managed to prove it. \begin{conjecture}\label{theoremmanabe} Let $f(x,z) = \lfloor \frac{x+z}{k}\rfloor$ for $k = 2^{a+2}m+2^{a+1}$ and $x,z \leq (2^{2a+2}-2^{a+1})m+2^{2a+1}-1$, where $a,m \in Z_{\ge 0}$. Then, the chocolate bar $CB(f,x,y,z)$ is a $\mathcal{P}$-position if and only if \begin{equation} x \oplus y \oplus z=0. \nonumber \end{equation} \end{conjecture} \begin{rem} If we compare Theorem \ref{theoremforoddk} and Conjecture \ref{theoremmanabe}, it seems very difficult to obtain the necessary and sufficient condition for Question 2. \end{rem} The authors also have the following conjecture, which is likewise supported by calculations using the computer algebra system Mathematica. \begin{conjecture} Let $f(x,z) = \lfloor \frac{x+z}{k}\rfloor$ for $k = 4m + 1$. Then, the chocolate bar $CB(f,x,y,z)$ is a $\mathcal{P}$-position if and only if \begin{equation} (x+1) \oplus y \oplus (z+1)=0. \nonumber \end{equation} \end{conjecture} \section*{Acknowledgements} We are indebted to Shouei Takahasi and Taishi Aono.
Although they are not the primary authors, their contributions were significant. We would like to thank Editage (www.editage.com) for English language editing. This work was supported by Grant-in-Aid for Scientific Research of Japan.
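Theorem \ref{theoremforoddk} can also be checked by brute force on a small board. The Python sketch below is not part of the paper; it assumes the standard chocolate-bar move rule (a move reduces one of $x$, $z$, clipping $y$ down to $f$ of the new pair when necessary, or reduces $y$ directly), which corresponds to the five option types in Lemma \ref{fromNtoPforh}.

```python
from functools import lru_cache

K = 7  # k = 4m + 3 with m = 1

def f(x, z):
    return (x + z) // K

def moves(x, y, z):
    # cut on the x-side: y is clipped down to f(u, z) when needed
    for u in range(x):
        yield (u, min(y, f(u, z)), z)
    # cut the middle part: reduce y alone
    for v in range(y):
        yield (x, v, z)
    # cut on the z-side: y is clipped down to f(x, w) when needed
    for w in range(z):
        yield (x, min(y, f(x, w)), w)

@lru_cache(maxsize=None)
def is_p_position(pos):
    # a position is a P-position iff every option leads to an N-position
    return all(not is_p_position(q) for q in moves(*pos))

# check: P-positions are exactly the legal positions with x XOR y XOR z == 0
LIMIT = 12
for x in range(LIMIT):
    for z in range(LIMIT):
        for y in range(f(x, z) + 1):
            assert is_p_position((x, y, z)) == (x ^ y ^ z == 0)
print("theorem verified for k = 7 with coordinates up to", LIMIT - 1)
```

The search is exhaustive over all legal positions with $x, z < 12$, so a failing assertion would be a counterexample to the theorem for $k = 7$.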
Q: What should I ride? A: Your bike. Your ride should be fully functional and utilitarian. Eschew proprietary spokes and 1300 gram wheelsets. Think Open Pro's laced to Ultegra's in a 3X pattern. Q: Can my family come? A: Sure, there is lots of fun stuff to do in the area, but we ask that they not be driving the course and that you not take feed or water from a vehicle unless they are providing it to everyone. Q: What am I going to eat? A: Food. There will be breakfast and dinner available for purchase at all of the locations that we will be at. I would encourage you to support all of the local businesses in this area. During the day it's going to be up to you. Plan to have at least 700 calories on your person at all times. Q: How will I know where to go each day? A: Each morning you will get a cue sheet, and I will make .gpx files and Garmin coordinates available once registration is confirmed. I actually encourage you to use GPS devices, but you should also seriously consider having a copy of the maps for this region as a backup. Q: How will my gear get to the end of the stage each night? A: When you cross the line in the evening, your gear will be at hand. Please be reasonable with what you bring. WE WILL NOT HAUL SPARE BIKES OR WHEELS. Q: What should I have with me during each stage? A: Whatever you need to ride/survive for 100 miles each day. Seriously, this is not a beginner's event. If you are not prepared to cope with broken spokes, flats, broken chains, etc., it might be time to reevaluate how you are spending that weekend. There is no mandatory gear that you have to carry, but I would suggest the following: - Extra spokes and nipples (and the acumen to utilize them on the side of the road) - Extra derailleur hanger - Extra half links - Schrader to Presta adapter - At least 2 tubes (with long valve stems!) and a real glue patch kit - A hand pump that doesn't suck. I carry CO2 also, but never rely on it solely. - A multi-tool that does everything.
- The capacity to carry at least 100 ounces of fluid. - Potable Aqua or some sort of water treatment tablets. You might need to get water out of a lake. Amoebic dysentery will enable you to shit through a screen door from 10 feet away, but the novelty of this fades quickly. - Food. - A blinky light. - A good disposition and a willingness to help others. If I might be so bold as to quote Hurl, "don't be a dick". Q: What happens if I can't finish? A: A lot of things could happen. We really hope that you are able to get to the finish under your own power, but if not you should call the director to let us know what happened. We will not come get you, but will make an effort to make sure that you are not eaten or that your remains make it to your next of kin. Q: Why don't you charge an entry fee to help underwrite this? A: Under US Forest Service "free" permitting, fees are not allowed to be solicited from participants, and participation is limited to 75 or fewer. Thus, we have to rely on corporate sponsors to help out with incidental costs and expenses. Plus if I start charging you money you will start having expectations, wanting t-shirts, prizes, and it becomes a glorious headache (and nobody wants that). You are up here to have fun, meet people, and ride bikes (as far and as fast as you can).
TITLE: Does forcing preserve the least undefinable ordinal from a model of ZFC? QUESTION [6 upvotes]: Let $M$ be a transitive model of ZFC. For convenience let us assume that $M$ is countable. Now let us consider the least undefinable ordinal $\vartheta_M$, that is, the least ordinal which is not definable from elements of $M$ and $M$ itself as parameters. For example, if $\alpha$ is the height of $M$ then $\alpha+\alpha$ and $\alpha^2$ are definable from $M$, so they are (strictly) less than $\vartheta_M$. Since there are only countably many possible formulas and $|M|$ possible parameters, we only have $|M|$ definable ordinals from $M$, so $\vartheta_M<|M|^+$. My question is whether forcing preserves the size of $\vartheta_M$. That is, if $M[G]$ is a generic extension of $M$, then is $\vartheta_{M[G]} = \vartheta_M$? I guess that $\vartheta_M$ only depends on the height of $M$. Thanks for any help. I should provide a more precise notion of definability. The definition of $\vartheta_M$ I imagine is: for a transitive model $M$ and a definable class $C\subseteq M$ (that is, we have a formula $\ulcorner\phi\urcorner$ and parameters $a_1,\cdots, a_n\in M$ such that $x\in C\iff M\models \ulcorner\phi\urcorner(x, a_1,\cdots, a_n)$), we call a definable ordering $\prec$ over $C$ well-ordered if for every definable $X\subseteq C$ either $X$ is empty or $X$ has the $\prec$-minimum. $(C, \prec)$ might not be well-ordered in $V$. However, if it is well-ordered then we can find the ordinal isomorphic to $(C, \prec)$, and we consider the least ordinal $\vartheta_M$ not isomorphic to any $(C, \prec)$. In that sense I can argue that $\vartheta_M < |M|^+$. REPLY [3 votes]: To clarify, I think that what the OP is asking is: For $M\in V$ a countable transitive model of ZFC, let $\alpha(M)$ be the supremum of the ordinals that are definable in $V$ by a first-order formula with parameters from $M\cup\{M\}$.
(Note that I write this slightly differently from the OP: the OP asks for the least "undefinable" ordinal, but I think they are tacitly assuming that the "definable" ordinals are closed downwards, which is not at all obvious to me.) Then the question is, if (according to $V$) $N$ is a generic extension of $M$, is $\alpha(N)=\alpha(M)$? (Note that since $M\not\in M[G]$, it's not even obvious that $\alpha(M)$ is "increasing" in $M$! In fact, by modifying the below argument I think we can show that it's not.) If I'm interpreting the question correctly, the answer is no: forcing can definitely change what ordinals are definable in this sense. For example, for $N$ a set model of ZFC (inside $V$), let $\alpha_N$ be the minimum of the ordinals $\alpha$ such that the continuum pattern $$\{i: 2^{\aleph_{\omega_\alpha+i}}=\aleph_{\omega_\alpha+i+1}\}$$ is in $N$. Then there's no reason we can't have a model $M$ such that $\alpha_M=0$, but a forcing extension $M[G]\in V$ such that $\alpha_{M[G]}=\theta_M$ (maybe the continuum patterns in $V$ look Cohen over $M$, and we happen to pick $G$ to match the relevant pattern exactly). Of course, the obvious way to do this involves a terrible $V$, but there's no reason it can't happen. Note that it is crucial in this argument that $G$ be generic over $M$, but not $V$. Indeed, it's not hard to show the following: Let $M$ be a countable transitive model in $V$, $\mathbb{P}\in M$ a forcing notion, and $G$ $\mathbb{P}$-generic over $V$. Then $\alpha^{V[G]}(M[G])=\alpha^V(M)$. Of course, we have to be a bit careful defining "$\alpha^{V[G]}$," but it's not hard.
Junior Quantity Surveyor – PQS - Kingsley - £30,000 – £40,000 Salary: Circa £28,000 - £32,000 + Benefits Location: Manchester Web: Contact Name: James Wilson Telephone: 0161 393 9889 About the Company A niche multi-property consultancy has a requirement for a Junior Quantity Surveyor, currently progressing with their APC diary, to join their growing team. The consultancy works across a number of public/private sectors and in recent years has been receiving more instructions from clients within the Higher Education sector, with which you will be heavily involved. You will work closely with universities across the region in developing exciting new-build developments along with refurbishments & alterations. About the Role A PQS role with clear career progression on offer. You will be supported through your APC diary with regular review meetings and an internal APC supervisor, ensuring you have all the required support in order for you to become Chartered and develop your career. The projects you’ll be working on will be varied, covering both pre- & post-contract duties for your clients. Above all else, you will be joining a growing team who work hard to deliver exceptional results for their clients whilst working in a relaxed environment and get along well with everyone in the office. Kingsley is a niche leading recruitment consultancy, specialising in the Property, Legal & Engineering Sectors. We recruit across the UK, so if you are looking for your next career move, call us for a confidential chat. Please see our website for more jobs.
Jennifer Garner's Back in Action in 'Peppermint' - Lauren's Review Going into Peppermint, I figured I was going to see another tone-deaf gun violence movie similar to Bruce Willis's Death Wish. While I do think the movie took a bit of an ignorant approach to the world we live in today, there was more to the story than I expected. Jennifer Garner stars as Riley North, a typical West Coast mom who awakens from a coma to remember that she witnessed the brutal drive-by murder of her husband and daughter at the hands of a ruthless drug gang. After being unable to get justice through a crooked court system, she disappears for 5 years. Suddenly, on the 5-year anniversary of her family's murders, the 3 men who carried out the drive-by turn up dead, hanged from the Ferris wheel at the carnival her family was at…subtle. Jennifer Garner shows back up and is taking names from the people who wronged her. While she solidified her status as a certified bad-ass again, we really don't learn a lot about how this happened. Apart from the FBI playing a blurry YouTube video of Riley North cage-fighting abroad, we're supposed to just assume this mainline mom somehow remade herself as a gun-wielding vigilante. While we get a bit more in-depth into Riley's motives and understand some of the internal struggle that she faces, there really wasn't a lot of originality to set this apart from Death Wish or Taken other than casting a female lead. The action scenes were fun, but nothing new. Jennifer Garner established herself as an action hero with her hit show Alias but fell away to the typical rom-com or mom-type roles after doing movies like 13 Going on 30 and Juno. She fought her way back in Peppermint, and I'd love to see her take on grittier action roles like this, but hopefully next time with a more original plot.
So if you’re looking for the typical gun-shooting vigilante, action-packed movie where the good guy goes bad but still has morals, then look no further than Peppermint. Rating: 3.4 out of 5 Lauren (Contributor) was born and raised in South Jersey. When she isn’t yelling at Philly sports teams on the TV, she enjoys seeing the latest action films and true crime documentaries. Posted: September 07, 2018 Labels: LaurenGallagher18, Reviews18
TITLE: Negative integral on intervals implies negative function? QUESTION [3 upvotes]: Let $f \in L^1([0,1])$ be such that for all $t \geq s$, $\displaystyle \int_s^t f(u)du \leq 0$. Is it true that $f\leq 0$ almost everywhere? REPLY [1 votes]: We will show that if $f$ is integrable and $\int_{(s,t)}f(u)du\geq 0$ for all $s<t$, then $f\geq 0$ almost everywhere; applying this to $-f$ gives the statement in the question. Let $A\subset [0,1]$ be Borel measurable and $\varepsilon>0$. As $f$ is integrable, we can find $\delta>0$ such that if $B\subset [0,1]$ is measurable and $\lambda(B)\leq \delta$ then $\left|\int_Bf(u)du\right|\leq \varepsilon$. By regularity of $\lambda$, let $O$ be open such that $A\subset O$ and $\lambda(O\setminus A)\leq\delta$. We can write $O=\bigsqcup_{j\geq 1}I_{j}$, where $I_j$ are pairwise disjoint intervals. This gives $$\int_A fd\lambda=\int_Ofd\lambda-\int_{O\setminus A}fd\lambda\geq \sum_{j\geq 1} \int_{I_j}fd\lambda-\varepsilon\geq -\varepsilon.$$ As $\varepsilon$ is arbitrary, we have for all $A$ Borel measurable that $\int_A fd\lambda\geq 0$. Now we apply this to $A=\{x : g(x)<-1/n\}$ for $g$ representing $f$: since $0 \leq \int_A g\,d\lambda \leq -\lambda(A)/n$, we get $\lambda(A)=0$ for every $n$, and hence $g\geq 0$ almost everywhere.
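A quick numeric illustration of the contrapositive (not a proof): if $f$ is positive on a set of positive measure, then some interval integral $\int_s^t f$ is already positive. The grid size, the test function, and the bump location below are arbitrary choices made for the demonstration.

```python
import numpy as np

# f is -1 everywhere except for a positive bump near x = 0.7
x = np.linspace(0.0, 1.0, 10_001)
f = -1.0 + 3.0 * np.exp(-((x - 0.7) / 0.02) ** 2)

# cumulative integral F(t) = int_0^t f(u) du via the trapezoid rule,
# so that int_s^t f = F(t) - F(s)
dx = x[1] - x[0]
F = np.concatenate(([0.0], np.cumsum((f[1:] + f[:-1]) / 2.0 * dx)))

# best interval integral: max over s <= t of F(t) - F(s)
gains = F - np.minimum.accumulate(F)
print(gains.max() > 0)  # some interval has positive integral
```

Here the bump contributes roughly $0.1$ of positive mass against the $-1$ background, so a short interval around $x = 0.7$ witnesses the failure of the hypothesis $\int_s^t f \leq 0$.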
TITLE: Factorizing the doubly stochastic matrix where all entries are equal such that the factors are all convex combinations of few permutation matrices QUESTION [1 upvotes]: Let $N_{n}=(1/n)_{i=1,j=1}^{n}$ be the $n\times n$-matrix where all the entries are equal. Suppose $n>0$. Let $\delta_{n}$ be the least natural number such that $N_{n}$ can be factored as $N_{n}=A_{1}\dots A_{k}$ for some $k$ and $A_{1},\dots,A_{k}$ such that if $1\leq i\leq k$, then $A_{i}$ is the convex combination of at most $\delta_{n}$ many permutation matrices. Is $\delta_{n}=2$ for all $n>1$? If not, then what are the constants $\delta_{n}$? If $\delta_{n}$ is difficult to calculate exactly, what are some upper or lower bounds on the constants $\delta_{n}$? Observe that $\delta_{n}\leq n$ since $N_{n}$ is the convex combination of $n$ permutation matrices. Lemma: $\delta_{a_{1}\dots a_{r}}\leq\max(\delta_{a_{1}},\dots,\delta_{a_{r}})$. Proof: Suppose that $N_{a_{i}}=A_{i,1}\dots A_{i,k}$ where each $A_{i,j}$ is the convex combination of at most $\delta_{a_{i}}$ many permutation matrices. $$N_{n}=N_{a_{1}}\otimes\dots\otimes N_{a_{r}}$$ $$=(N_{a_{1}}\otimes I_{a_{2}\dots a_{r}})(I_{a_{1}}\otimes N_{a_{2}}\otimes I_{a_{3}\dots a_{r}})\dots(I_{a_{1}\dots a_{r-1}}\otimes N_{a_{r}}).$$ However, for $1\leq i\leq r$, we have $$I_{a_{1}\dots a_{i-1}}\otimes N_{a_{i}}\otimes I_{a_{i+1}\dots a_{r}}= (I_{a_{1}\dots a_{i-1}}\otimes A_{i,1}\otimes I_{a_{i+1}\dots a_{r}})\dots (I_{a_{1}\dots a_{i-1}}\otimes A_{i,k}\otimes I_{a_{i+1}\dots a_{r}}).$$ Q.E.D. As a consequence, we conclude that $\delta_{n}\leq p$ whenever $n>1$ and $p$ is the largest prime factor of $n$. However, I have shown that $\delta_{3}=2$ experimentally, so the reverse inequality does not always hold. This is a continuation of the series of questions including this question and this question. REPLY [0 votes]: I claim that $\delta_{n}=2$ for all $n$.
Lemma: For each $n$, and each vector $[x_{1},\dots,x_{n}]^{T}$ with $x_{1}+\dots+x_{n}=0$, there are matrices $A_{1},\dots,A_{k}$ where each $A_{i}$ is the convex combination of $2$ permutation matrices and where $A_{1}\dots A_{k}[x_{1},\dots,x_{n}]^{T}=\mathbf{0}$. Proof: Let $r$ be the number of non-zero entries in the vector $[x_{1},\dots,x_{n}]^{T}$. Then we shall prove this result by induction on $r$. The result is clear when $r=0,r=1$, so assume $r>1$. Then there is a permutation matrix $A$ such that $A[x_{1},\dots,x_{n}]^{T}=[y_{1},\dots,y_{r-1},y_{r},0,\dots,0]^{T}$ such that $y_{r-1}>0,y_{r}<0$. Let $t=\frac{y_{r-1}}{y_{r-1}-y_{r}}$. Then $$(t\cdot I_{n}+(1-t)\cdot\rho_{(r-1,r)})[y_{1},\dots,y_{r-1},y_{r},0,\dots,0]^{T}$$ $$=t[y_{1},\dots,y_{r-1},y_{r},0,\dots,0]^{T}+(1-t)[y_{1},\dots,y_{r-2},y_{r},y_{r-1},0,\dots,0]^{T}=[y_{1},\dots,y_{r-2},y_{r-1}+y_{r},0,\dots,0]^{T}.$$ Therefore, there are matrices $A_{1},\dots,A_{k}$ where $$A_{1}\dots A_{k}[y_{1},\dots,y_{r-2},y_{r-1}+y_{r},0,\dots,0]^{T}=\mathbf{0}$$ and $A_{i}$ is the convex combination of 2 permutation matrices for $1\leq i\leq k$. Therefore, if we set $A_{k+1}=t\cdot I_{n}+(1-t)\cdot\rho_{(r-1,r)}$ and $A_{k+2}=A$, then $A_{1}\dots A_{k+2}[x_{1},\dots,x_{n}]^{T}=\mathbf{0}$ and $A_{i}$ is the convex combination of 2 permutation matrices whenever $1\leq i\leq k+2$. Q.E.D. Theorem: For all $n>1$, we have $N_{n}=A_{1}\dots A_{k}$ for some $k$ and matrices $A_{1},\dots,A_{k}$ such that $A_{i}$ is the convex combination of $2$ permutation matrices for $1\leq i\leq k$. Proof: Observe that if $A$ is a doubly stochastic matrix, then $A=N_{n}$ if and only if $A(x_{1},\dots,x_{n})^{T}=\mathbf{0}$ whenever $x_{1},\dots,x_{n}$ are real numbers with $x_{1}+\dots+x_{n}=0$. Let $V$ be the space of all column vectors $[x_{1},\dots,x_{n}]^{T}$ with $x_{1}+\dots+x_{n}=0$. 
Claim: For all $r$ with $0\leq r<n$, there are matrices $A_{1},\dots,A_{k}$ where for $1\leq i\leq k$, the matrix $A_{i}$ is the convex combination of two permutation matrices and where $\text{dim}(V\cap\text{null}(A))\geq r$ where $A=A_{1}\dots A_{k}$. Proof: We prove this claim by induction on $r$. The case when $r=0$ is trivial. Suppose now that $A=A_{1}\dots A_{k}$ where $\text{dim}(V\cap\text{null}(A))\geq r$ and where $A_{i}$ is the convex combination of two $n\times n$ permutation matrices for $1\leq i\leq k$. Now, if $\dim(V\cap\text{null}(A))=n-1$, then the induction step is complete. If $\dim(V\cap\text{null}(A))<n-1$, then let $[x_{1},\dots,x_{n}]^{T}\in V\setminus\text{null}(A)$, and let $[y_{1},\dots,y_{n}]^{T}=A[x_{1},\dots,x_{n}]^{T}$. Then there are matrices $A_{-r}\dots A_{0}$ such that $A_{-r}\dots A_{0}[y_{1},\dots,y_{n}]^{T}=\mathbf{0}$ and $A_{i}$ is the convex combination of two permutation matrices for $-r\leq i\leq 0$. Therefore, if $B=A_{-r}\dots A_{0}A_{1}\dots A_{k}$, then $\dim(V\cap\text{null}(B))\geq r+1$. Our claim has been proven. The result follows from our claim in the case that $r=n-1$. Q.E.D. We are in fact able to produce the following explicit factorization of $N_{n}$. Let $$C_{n,r}=\frac{1}{r+1}(I_{n}+r\rho_{(r,r+1)}).$$ Let $$D_{n,r}=C_{n,1}\dots C_{n,r-1}.$$ Theorem: Suppose $n>0$. Then $N_{n}=D_{n,n}\dots D_{n,2}.$ Proof outline: We use induction on $n$. Using the induction hypothesis, we have $$D_{n,n-1}\dots D_{n,2}=\begin{bmatrix}N_{n-1} & \mathbf{0}\\ \mathbf{0}&I_{1} \end{bmatrix}.$$ Set $A=D_{n,n-1}\dots D_{n,2}$. Then $\dim(\ker(A)\cap V)=n-2$. Now, $[-1,\dots,-1,n-1]^{T}\in V\setminus\ker(A)$, however, $$D_{n,n}A[-1,\dots,-1,n-1]^{T}=D_{n,n}[-1,\dots,-1,n-1]^{T}=\mathbf{0},$$ so $\dim(\ker(D_{n,n}A)\cap V)>n-2$, but this is only possible if $V\subseteq\ker(D_{n,n}A)$. This implies that $D_{n,n}A=N_{n}$. Q.E.D. There are other decompositions as well. Theorem: Suppose $n>0$. Then $D_{n,n}^{n-1}=N_{n}$.
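The explicit factorization in the theorem can be checked numerically. Here is a small numpy sketch (the function names are mine) that builds $C_{n,r}$, $D_{n,r}$ and the product $D_{n,n}\dots D_{n,2}$:

```python
import numpy as np

def perm_swap(n, r):
    """Permutation matrix rho_{(r, r+1)} swapping coordinates r, r+1 (1-indexed)."""
    P = np.eye(n)
    P[[r - 1, r]] = P[[r, r - 1]]
    return P

def C(n, r):
    # C_{n,r} = (1/(r+1)) (I_n + r * rho_{(r,r+1)}): a convex combination
    # of exactly two permutation matrices, with weights 1/(r+1) and r/(r+1).
    return (np.eye(n) + r * perm_swap(n, r)) / (r + 1)

def D(n, r):
    # D_{n,r} = C_{n,1} C_{n,2} ... C_{n,r-1}
    M = np.eye(n)
    for j in range(1, r):
        M = M @ C(n, j)
    return M

def factored_N(n):
    # The theorem's product D_{n,n} D_{n,n-1} ... D_{n,2}
    M = np.eye(n)
    for r in range(n, 1, -1):
        M = M @ D(n, r)
    return M

for n in range(2, 8):
    assert np.allclose(factored_N(n), np.full((n, n), 1.0 / n))
```

Since each $C_{n,r}$ is by construction a convex combination of exactly two permutation matrices, this also confirms $\delta_n=2$ for the tested values of $n$.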
Customer Reviews Fantastic Show!!! I was at this show and it was one of their best sets I have ever seen. I wish that iTunes could have included Eddie's preshow song, which was Springsteen's "No Surrender". For those wondering why there are two "Animal" songs: at the beginning of the song, the hard drive that they use to record the show crashed right as Eddie was starting to sing. They stopped the show for about five minutes until they fixed it. If you listen closely to the first five songs, you will notice a distinct difference in sound quality. That is because they lost those songs to the hard drive crashing, so you only get the analog recording. Once the second "Animal" starts, you can hear the stereo recording pick up. Great show with awesome energy. This was their last show of the first leg of this tour, so they pulled out all the stops. It was great to see them finally pull "Leash" out of the archives for this tour. That was one they would only play occasionally. Also, "Rats" came out during this tour...very surprising. A reworked "Garden" was another highlight. After almost four years, and 17 shows later for me, this one still stands out as one of their best sets. Enjoy!!!!!! Biography Formed: 1990 in Seattle, WA Genre: Rock Years Active: '90s, '00s, '10s Released: Jun 03, 2006 ℗ 2006 Monkeywrench
TITLE: Local nontriviality of genus-one curves over extensions of degree dividing $6^n$ QUESTION [2 upvotes]: Suppose $p\geq 5$ is a prime, and $C$ a genus-one curve, defined over $\mathbf{Q}$. Is there always an extension $K/\mathbf{Q}_{p}$ whose degree divides a power of $6$, so that $C(K)$ is not empty? (I posted that question before, but it was badly formulated. I hope there is no mistake this time.) REPLY [5 votes]: I am just posting my comment above as an answer. As JSE points out, surely there are more methodical approaches to the cohomology groups $H^1_{ \acute{e}t }(\text{Spec}(\mathbb{Q}_p),E)$. However, already the Brauer group of a field gives many nontrivial elements of these cohomology groups. Let $r$ be an integer. Let $Q$ be a field that contains a primitive $r^\text{th}$ root of unity $\zeta$. Let $a,b\in Q^\times\setminus (Q^\times)^r$ be units. There is a symbol algebra that is the quotient of the free associative $Q$-algebra generated by symbols $u$ and $v$ by the two-sided ideal generated by the following relations, $$ u^r = a, \ v^r = b, \ vu = \zeta uv.$$ When $r$ equals $2$, this is the usual quaternion algebra. When $Q$ does not contain a primitive $r^\text{th}$ root of unity, there is a generalization of this notion called a cyclic algebra that serves a similar role (and this should extend the examples below to primes $p$ that are not congruent to $1$ modulo $r$). The symbol algebra $A$ is a $Q$-vector space of dimension $r^2$ with monomial basis $u^mv^n$ for $0\leq m,n < r$. For the symbol algebra $A$, there is a $Q$-scheme $P$ whose Yoneda functor on the category of affine $Q$-schemes $\text{Spec}(R)$ is naturally isomorphic to the functor of left ideals in $A\otimes_Q R$ that are locally direct summands of $A\otimes_Q R$ of constant rank $r$. 
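As a sanity check on the defining relations (not part of the argument), here is a rough floating-point model of the symbol algebra; the values of $r$, $a$, $b$ are arbitrary illustrative choices:

```python
import cmath

# A floating-point model of the symbol algebra: basis u^m v^n for
# 0 <= m, n < r, with relations u^r = a, v^r = b, v u = zeta u v.
r, a, b = 3, 2.0, 5.0
zeta = cmath.exp(2j * cmath.pi / r)

def mul(x, y):
    """Multiply two elements given as dicts {(m, n): coefficient}."""
    out = {}
    for (m, n), cx in x.items():
        for (p, q), cy in y.items():
            # Move v^n past u^p: v^n u^p = zeta^(n p) u^p v^n, then reduce
            # exponents; since inputs have exponents < r, one subtraction
            # of r suffices.
            coeff = cx * cy * zeta ** (n * p)
            M, N = m + p, n + q
            if M >= r:
                M -= r
                coeff *= a       # u^r = a
            if N >= r:
                N -= r
                coeff *= b       # v^r = b
            out[(M, N)] = out.get((M, N), 0) + coeff
    return out

u = {(1, 0): 1.0}
v = {(0, 1): 1.0}

# Check the commutation relation: v u = zeta u v.
lhs = mul(v, u)
rhs = {k: zeta * c for k, c in mul(u, v).items()}
assert all(abs(lhs[k] - rhs[k]) < 1e-12 for k in lhs)

# Check u^r = a as an element of the algebra.
ur = u
for _ in range(r - 1):
    ur = mul(ur, u)
assert abs(ur[(0, 0)] - a) < 1e-12
```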
For every commutative $Q$-algebra $R$ such that $A\otimes_Q R \cong \text{End}_R(V)$ for a locally free $R$-module $V$ of rank $r$, the base change $P\times_{\text{Spec}(Q)}\text{Spec}(R)$ is isomorphic to the projective space $\mathbb{P}_R(V) = \text{Proj} \text{Sym}^\bullet_R V^\vee$ via the natural transformation that associates to every locally direct summand $L\subset V$ of rank $1$ the annihilator of $L$ as a left ideal of $\text{End}_R(V)$. In particular, the Picard group of $P\otimes_Q Q^{\text{sep}}$ is $\mathbb{Z}$ with trivial Galois action. The obstruction to finding an element in $\text{Pic}(P)$ whose base change is a generator of $\mathbb{Z}$ is an element in the Brauer group $\text{Br}(Q)$ that equals (up to a sign) the Brauer class of the algebra $A$. In particular, assuming $\overline{a}$ has order $r$ in $Q^\times/(Q^\times)^r$, then $K=Q[\alpha]/\langle \alpha^r - a\rangle$ is a field extension of $Q$ of degree $r$ such that $A\otimes_Q K$ is isomorphic to a matrix algebra. Thus, there exists a $Q$-morphism $u:\text{Spec}(K) \to P$ whose base change to $P\otimes_{\text{Spec}(Q)}\text{Spec}(K)\cong \mathbb{P}^{r-1}_K$ consists of $r$ distinct, linearly independent $K$-points that are cyclically permuted by the action of the Galois group $\text{Aut}(K/Q) \cong \mu_r(Q)$. Having chosen a generator $\zeta$ of the group of $r^\text{th}$ roots of unity, there is a unique curve in $P\otimes_{\text{Spec}(Q)}\text{Spec}(K)$ whose image in $\mathbb{P}^{r-1}_K$ is the union of the $r$ lines obtained as the spans of consecutive pairs from among the $r$ cyclically ordered points. This is a projective, geometrically connected, nodal curve of arithmetic genus $1$. Since it is unique, it satisfies the descent condition to equal the base change $C'_Q\otimes_{\text{Spec}(Q)}\text{Spec}(K)$ of a curve $C'_Q \subset P$. 
Finally, the Hilbert scheme over $Q$ parameterizing such curves in $P$ is smooth at this $Q$-point, and the unique irreducible component of the Hilbert scheme parameterizing this $Q$-point has a dense open subset $U$ that parameterizes smooth, projective, geometrically connected curves in $P$ of genus $1$. In the special case that $Q$ equals $\mathbb{Q}_p$, then $Q$ is what is called a "big" or "large" or "ample" field by various authors such as Florian Pop, János Kollár, etc. The point is, because of Hensel's lemma applied to an integral model of the Hilbert scheme over $\text{Spec}(\mathbb{Z}_p)$, the open set $U$ has $\mathbb{Q}_p$-points. Each such $\mathbb{Q}_p$-point gives a smooth, projective, connected curve $C_Q\subset P$ that is of arithmetic genus $1$. For every field extension $L/\mathbb{Q}_p$ such that $C_Q(L)$ is nonempty, in particular $P(L)$ is nonempty. The field extensions that "split" the symbol algebra $A$ all have degree divisible by the order $r$ of $[A]$ in the Brauer group. Finally, by local class field theory (or Grothendieck's exposes on the Brauer group), $\text{Br}(\mathbb{Q}_p)[r]$ is a cyclic group of order $r$ generated by the class of a symbol algebra. Thus, there exists a smooth, projective, geometrically connected, genus $1$ curve over $\text{Spec}(\mathbb{Q}_p)$ such that for every finite field extension $K/\mathbb{Q}_p$ with $C(K)$ nonempty, $r$ divides the degree of $K/\mathbb{Q}_p$.
Trina Solar Co., Ltd. ("Trina Solar" or the "Company"), a leading global PV and smart energy total solution provider, has recently published its 2019-20 report on corporate social responsibility. The report, covering two years, is comprehensive, prepared in line with the GRI Sustainability Reporting Standards 2018 issued by the Global Sustainability Standards Board. It details the company’s practices and achievements in CSR and sustainability, such as corporate governance, technology leadership, product innovation, green sustainable development, contributing to society and responding to COVID-19. Corporate transparency and regular information disclosure On June 10 last year Trina Solar issued its first A-shares on the Shanghai Sci-Tech Innovation Board, also known as the STAR Market, becoming the first PV company listed there. During the disclosure period of the CSR report, counting from the listing, the company made 108 interim declarations and issued 3 regular reports. This efficient, transparent and regularized information disclosure system, with its complete and accurate declarations, has contributed to the company’s strong reputation in the market for scrupulousness. Technology leadership From 2010 to 2020 the State Key Laboratory of PV Science and Technology operated by Trina Solar invested about 10 billion RMB in R&D funding, becoming a world-class technical innovation platform. Last year the company’s spending on R&D rose 22.29% over that of 2019, when such spending rose 23.73%. At the end of June 2021 the company and the State Key Laboratory of PV Science and Technology had broken 21 world records in solar cell conversion efficiency and PV module output power. 210 Vertex ultra-high power modules leading industry development Since 2019 Trina Solar has taken the lead in promoting the research and development of 210mm modules. 
Last year Trina Solar prepared the global launch of the Vertex ultra-high power modules and took the lead in realizing their industrialization, formally leading the industry into the PV 600W+ ultra-high power era. Trina Solar’s 210mm ultra-high power modules and system-integrated new technology platform have pointed the way ahead for the PV industry. Excellent performance in sustainability Against its 2020 Sustainable Development Goals, with 2015 as the base year, Trina Solar set a target of a 10% reduction in consolidated energy consumption per MW of modules produced; the report discloses an actual reduction of 29.5% in 2020 compared with 2015. Per MW of modules produced, the company also pledged to reduce electricity consumption by 15% and water consumption by 10% by 2020. The report discloses that last year these targets were well and truly met, electricity consumption falling 59.7% and water consumption 50.6% compared with 2015. In 2019, greenhouse gas emissions in domestic China were 46% below the 2015 level, and last year the performance was even better, a reduction of 68.6% compared with 2015. By the first quarter of 2021, Trina Solar had shipped more than 70 GW of modules, which can generate about 90 billion kWh of clean-energy power and reduce CO2 emissions by 94.22 million tons per year, equivalent to planting 5.1 billion trees. In 2019 the clean-energy generation of solar power plants owned by Trina Solar in China exceeded the power consumption of all the company’s domestic manufacturing plants and R&D centers by 14 million kWh. Last year the corresponding figure was almost 30 times as much, 412 million kWh. Trina Solar has joined the global Science Based Targets initiative and signed the Business Ambition for 1.5°C pledge. 
This once again underlines the company's commitment to helping limit the rise in the global temperature to 1.5°C through its own actions to cut emissions. Taking tough measures to combat COVID-19 while minimizing the impact on business operations After the outbreak of COVID-19 last year Gao Jifan, chairman of Trina Solar, took immediate action to buy anti-pandemic supplies in China and elsewhere, and donated to pandemic outbreak centers. Trina Solar also set up an internal emergency response mechanism as well as employee care programs to ensure both control and prevention of the pandemic while adhering to normal business operations as practically and safely as possible. Speaking of the company’s performance and the opportunities and challenges ahead, Gao said, “We believe that an era of high growth in new energy is upon us. Trina Solar will remain steadfast in fulfilling its mission, committed to greater responsibility, and always giving back to society.” As a world-leading PV and smart energy total solution provider, Trina Solar has always prided itself on its diligence in fulfilling its corporate social responsibility. Since 2011 it has rigorously abided by CSR reporting standards, publishing reports disclosing Trina Solar's strategies, practices and performance in the field of CSR.
Feb 22, 2012 – 4:18 PM ET The rating agency, DBRS, released its 2011 Structured Finance Review and 2012 outlook this week and painted a mixed picture of the market, with overall issuance for the year “expected to remain relatively consistent with 2011 level.” In its view, “managing consumer obligations with private sector growth will be key to a healthy securitization market” in 2012. DBRS argues that consumer consumption, which has driven a large part of the past growth in asset-backed securities, asset-backed commercial paper and covered bonds, won’t be a dominant force. “With the amount of household leverage at an all-time high, issuers continue to tighten their underwriting policies while improving their collection processes to mitigate losses in a potential downturn,” it wrote in a 36-page report. “The continued transition from a stimulus-led recovery to private-sector growth will play an important part as federal and provincial governments try to rein in large public deficits, ensuring long-term sustainability within the capital markets,” added the report. On a positive note, DBRS expects “modest growth in securitization transactions in 2012.” It also expects that “auto loan securitization will continue to dominate auto-backed transactions, while equipment floorplan financing will continue to be influenced by the health of the agricultural sector.” It also expects that covered bond issuance will continue to be strong. So far four banks – Scotia, BMO, National and CIBC – have all raised capital in this form, with CIBC proving there’s a market for such issues in Swiss francs. All the others issued in US$. The year just passed was dominated by a few themes: • A record level of new issuance of Canadian covered bonds last year. In 2011, Canadian financial institutions borrowed US$25.7-billion in covered bonds – bringing the outstandings to more than US$50-billion. The market was opened up in 2007. 
During the year, two issuers, National Bank of Canada and Caisse centrale Desjardins du Quebec, entered the market for the first time, with each raising US$1-billion. Most of that issuance was in the U.S. market because U.S. investors favoured Canadian placements and because most of the mortgages pledged as collateral are CMHC insured. Two issuers, CIBC and BNS, sold covered bonds in Australia. • U.S. investors also played a role in the asset backed market: RBC issued a US$825-million note through Golden Credit Card Trust while CIBC issued a US$1-billion note through CARDS II. • In the ABS market, issuance for both ABS and ABCP, at $23.4-billion, returned to 2008 levels. In 2011, $6.8-billion was raised from issues backed by credit cards – more than the $5.8-billion that came due for repayment in the same year. At the end of 2011, outstandings in the local ABS market were $91.3-billion, of which almost half ($40-billion) was in term ABS, while ABCP stood at $27.4-billion. At $91.3-billion, the overall market was down by 4.1% from 2010.
Cheap Flights from Greater Peoria to New York LaGuardia (PIA - LGA) Top last minute flight deals Feeling spontaneous? It’s never too late to book a trip. Here’s our pick of the best last minute flights to New York... - Wed 1 Jul PIA-LGA / Wed 8 Jul LGA-PIA, American Airlines, indirect, from £194 - Fri 28 Feb PIA-LGA / Fri 6 Mar LGA-PIA, Delta, indirect, from £195 - Sat 15 Feb PIA-LGA / Mon 17 Feb LGA-PIA, United, indirect, from £229 Flight information Greater Peoria to New York LaGuardia Explore New York Places to stay in New York Frequently asked questions Compare flights from Peoria to New York. Best of all, Skyscanner is free to use! When is the best time to book from Peoria to New York? Make sure you’re getting the best price for your flight tickets. Find the best time to book from Peoria to New York. Check the travel information panel above to get exact information about: - Distance from Peoria to New York - Flight time from Peoria to New York - Which airlines provide the cheapest tickets from Peoria to New York - Which airlines fly direct to New York from Peoria. Know your cabin luggage size and weight restrictions thanks to our guide to hand baggage. Track your flight status by checking arrival and departure times at Peoria.
Springfield teachers face tough situations In response to Lael E. H. Chester and Nancy Murray’s Viewpoint in The Sunday Republican criticizing the high number of student arrests, I have to disagree. I have taught in the Springfield Public School System for 18 years and have never seen a student arrested for swearing or banging on lockers. If that were the case I would have witnessed hundreds of students arrested every week. How ridiculous. The only students I have ever seen arrested and taken out of the building by police have been those who committed violent acts against fellow students and teachers. A “call home” or a “time out?” These are ludicrous solutions to a very serious problem. This is just a radical, liberal attempt to incorporate more social programs into an educational system. We are teachers, not therapists, counselors, psychologists or social workers. I would ask the authors of this viewpoint to spend some time in our classrooms to experience a real-life view of the problems facing teachers in the Springfield school system. - JAMES A. CASS Use of Lesley Stahl in Brown ad is odd Just a little something to consider. I always thought that journalists should remain neutral regarding opinions about political candidates. I’m sure that editors would not be pleased to see some of their star reporters in a political advertisement. I am curious if 60 Minutes gave the Scott Brown campaign permission to use Lesley Stahl in their recent ad. Did Ms. Stahl sign something? Was she paid for the use of her image and quote? Was the apparent “endorsement” taken out of context? Wouldn’t it be wonderful if Brian Williams could appear in an Elizabeth Warren ad? – NANCY MCLAUGHLIN, Agawam U.S. on the path to socialist society Secretary of State Hillary Clinton lauded the new socialist president of France, Francois Hollande, and said that his election represents a new opportunity for “growth” in the economy of Europe. 
President Obama at the G-8 meetings stated that “growth,” not austerity, is needed for the world economy and that of the United States. At the same time Germany has been criticized for austerity, the antithesis of “growth.” Thus the new word promulgated by the Obama administration and throughout the media in reference to the economy is “growth.” Sounds good, huh? We should definitely do it. Grow the economy. Yet we fought a Cold War for many years against the evils of socialism... now we are embracing it? What is all this talk about “growth” and “economic stimulus”? Do these words make it seem more attractive by purposely using terms that appear positive? Socialism NEVER brings growth and stimulus, just the opposite. Socialism brings repression and dependency upon government. This has been proven by history and the failures of many countries that have embarked upon it. So the plan when you are deeply in debt, even bankrupt, is to spend more? Spend more vs. being economically responsible and spending what you can afford? What school of economics teaches this? The real truth is that irresponsible, out-of-control spending leads to economic disaster, and worse. People are pushed down to a lower form of existence and become totally dependent upon the government for survival. In this state most of their freedoms are lost. Eventually they become dissatisfied with their situation, and in order to maintain control the government becomes more repressive and totalitarian. Massive debt and out-of-control, irresponsible spending will bankrupt America and all countries that pursue this policy. - AL DiLASCIA, Chicopee George Will column paints scary image While I have never considered George Will to be a fear monger or an alarmist, his account of “United States of America vs. 434 Main Street, Tewksbury, Massachusetts” is chilling. Of course it is frightening to read that the government has seized Pat and Russ’ property, depriving them of income and retirement security. 
It is even more shocking to read that a federal agent in this area is reading through public records to look for targets! So, the answer to municipal budget shortfalls is “equitable sharing” or grabbing property with a certain amount of equity if you’re the hapless property owner. Even George Orwell might be aghast at such violations of the Constitution, and civil rights. – CAMILLE CASTRO, Chicopee
Buy Local Holiday Marketplace WHEN: Sunday, November 25, 2018 (7th annual) – 11AM to 4PM The Buy Local Holiday Marketplace is an annual event usually held the Sunday after Thanksgiving. WHERE: Scranton Cultural Center at the Masonic Temple, 420 North Washington Avenue, Scranton PA 18503 (Lackawanna County Pennsylvania Holiday Shops, Vendor Fairs, Craft Shows.) DESCRIPTION: Do you like to support small business and shop locally? Then this event is just for you! Just in time for the holiday season, the Scranton Cultural Center at the Masonic Temple will proudly present its Annual Buy Local Holiday Marketplace. Each year this event gets bigger and better than the last! Take a walk through our magnificent building and browse the wares of some of our area’s finest local businesses, artists and crafters! From apparel and jewelry to candles and home goods, from wine and chocolates to fine art and handmade clothing, there will truly be something for everyone! Admission is $2. Food and drinks will be available for purchase. Free reusable shopping bag to the 1st 1,000 shoppers courtesy of FNCB Bank. For full details visit sccmt.org/buylocal Vendors Needed! Arts and crafts vendors wanted. Please Note: We will not accept commercial food vendors. In addition, the following items will be prohibited from sale: animals, weaponry, sexually explicit material, personal services, pirated CDs and DVDs and unauthorized ‘knock-offs’ of any kind. EVENT CONTACT: Rachael Fronduti at buylocal@sccmt.org or 570-346-7369 WEBSITE: 2018 EVENT FACEBOOK:
TITLE: How random are unit lattices in number fields? QUESTION [24 upvotes]: I was wondering how random unit lattices in number fields are. To make this more precise: If $K$ is a number field with embeddings $\sigma_1, \dots, \sigma_n, \overline{\sigma_{r+1}}, \dots, \overline{\sigma_n} \to \mathbb{C}$ (so we have $r$ real embeddings and $2 (n - r)$ complex embeddings), let $\mathcal{O}_K$ be the ring of integers and $\Lambda_K := \{ (\log |\sigma_1(\varepsilon)|^{d_1}, \dots, \log |\sigma_n(\varepsilon)|^{d_n}) \mid \varepsilon \in \mathcal{O}_K^\ast \}$ be the unit lattice, where $d_i = 1$ if $\sigma_i(K) \subseteq \mathbb{R}$ and $d_i = 2$ otherwise. Then $\Lambda_K$ is always contained in $H := \{ (x_1, \dots, x_n) \in \mathbb{R}^n \mid \sum_{i=1}^n x_i = 0 \}$, and $\det \Lambda_K$ is the regulator $R_K$ of $K$. Let us normalize $\Lambda_K$ by $\hat{\Lambda}_K := \frac{1}{\sqrt[n]{R_K}} \Lambda_K$; then $\det \hat{\Lambda}_K = 1$. Now my question is: can we say something on how random the lattices $\hat{\Lambda}_K$ are among all lattices in $H$ of determinant 1? (For example, for fixed signature $(r, n-r)$ of $K$.) Since these lattices are not completely random (they consist of vectors of logarithms of algebraic numbers), it is maybe better to ask something like this: Given $\varepsilon > 0$ and a lattice $\Lambda \subseteq H$ with determinant 1, does there exist a number field $K$ of signature $(r, n - r)$ such that there are a basis $(v_i)_i$ of $\hat{\Lambda}_K$ and a basis $(w_i)_i$ of $\Lambda$ with $\|v_i - w_i\| < \varepsilon$ for all $i$? And if this exists, can one bound the discriminant of $K$ (or any other invariant of $K$) in terms of $\varepsilon$? (Of course, this question is only interesting when $n > 2$.) I assume that this is a very hard problem, so I'd be happy about any hint on whether something about this is known, whether someone is working on this, how one could prove such things, etc. 
REPLY [17 votes]: This question was certainly discussed over the past years, with no proven results though (as far as I am aware). I learned it from M. Gromov about 15 years ago (probably after he discussed it with G. Margulis). Here is how I would formulate it: Let us fix the signature $(r, n-r)$ (for example, $(3,0)$ for totally real cubic fields -- the simplest non-trivial example, which I researched numerically to some extent). Let $F_{(r,n-r)}$ be the set of all such number fields. For a field $K\in F_{(r,n-r)}$, consider the unit lattice $\mathcal{O}_K^*$, and its normalized (by the volume) logarithmic embedding $\hat{\Lambda}_K\subset \mathbb{R}^n$ as above. Hence we obtain for each field of fixed signature, a unimodular lattice in $ \mathbb{R}^{n-1}$. We are interested in the type of such a lattice. It is reasonable to consider lattices up to isometries of the ambient $\mathbb{R}^{n-1}$. Hence we obtain for each field $K\in F_{(r,n-r)}$, a point $x_K$ in the moduli space of unimodular lattices in $ \mathbb{R}^{n-1}$ up to an isometry. I call this the conformal type of the unit lattice of the field. This moduli space is the familiar space $X_{n-1}=SL(n-1,\mathbb{Z})\setminus SL(n-1,\mathbb{R})/SO(n-1)$. For $n=3$ this is the modular curve. The space $X_{n-1}$ has a distinctive probability measure $\mu$ (coming from the right invariant measure on $SL(n-1,\mathbb{Z})\setminus SL(n-1,\mathbb{R})$; for $n=3$ this is the (volume one) hyperbolic measure on the modular curve). The natural question would be what is the behavior of the set of points $x_K\in X_{n-1}$, $K\in F_{(r,n-r)}$ with respect to the measure $\mu$, or the geometry of $X_{n-1}$ (recall that the space $X_{n-1}$ is not compact, and there are cusps). To formulate conjectures/questions, one needs to introduce an order on the set of fields of fixed signature. I am aware of $3$ (probably inequivalent) natural orderings: Arithmetic order: order the set $F_{(r,n-r)}$ by the discriminant $d_K$ of the field. 
Geometric order: order the set $F_{(r,n-r)}$ by the regulator $R_K$ of the field. Dynamical order: order the set $F_{(r,n-r)}$ by the shortest unit $\epsilon_K$ of the field. Numerical experiments (with tables provided by PARI) suggest the following conjecture: Conjecture: The set of points $x_K$, $K\in F_{(r,n-r)}$ becomes equidistributed in $X_{n-1}$ with respect to $\mu$ when $F_{(r,n-r)}$ is ordered arithmetically. As far as I understand, Margulis expects that when ordered dynamically, points in $F_{(r,n-r)}$ escape to infinity (i.e., to the cusp) with probability $1$ (certainly some points may stay low, e.g., coming from Galois fields). Probably one should expect the same with respect to the geometric ordering (i.e., by the regulator). It is very difficult to have numerical data for these orderings. The question about density of points $x_K$ in $X_{n-1}$ would follow from equidistribution, but of course is somewhat separate. In particular, even if Margulis is right, this would not mean that the $x_K$'s are not dense (it's quite possible that this is still true). I do not know about (effective) approximation of a lattice by unit lattices. I also would like to mention that the additive analog of this question for totally real cubic fields (ordered by the discriminant) was solved by D. Terr (PhD, Berkeley, 1997, unpublished).
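To make the objects concrete, here is a numerical sketch (all names are mine, and the field is chosen for convenience) of the pipeline log embedding → Gram matrix → reduction to the fundamental domain, for the cyclic cubic $\mathbb{Q}[x]/(x^3-3x-1)$. Caveat: the two conjugate units used below generate only a finite-index sublattice of the full unit lattice, so this illustrates the construction of a conformal type rather than computing the exact $x_K$.

```python
import numpy as np

# The three real roots of x^3 - 3x - 1; the constant term is -1, so each
# root is a unit in the ring of integers of this (cyclic, totally real)
# cubic field.
roots = np.sort(np.roots([1.0, 0.0, -3.0, -1.0]).real)
logs = np.log(np.abs(roots))           # logarithmic embedding of one unit
assert abs(logs.sum()) < 1e-9          # |norm| = 1, so the logs sum to 0

# Log vector of a Galois conjugate: the cyclic Galois group permutes the
# three real embeddings cyclically, so a conjugate's log vector is a
# cyclic shift of logs (up to choosing sigma versus sigma^2).
v1, v2 = logs, np.roll(logs, 1)

# Gram matrix of the rank-2 sublattice generated by the two conjugate
# units (a finite-index sublattice of the full unit lattice, in general).
G = np.array([[v1 @ v1, v1 @ v2],
              [v1 @ v2, v2 @ v2]])
assert np.linalg.det(G) > 0            # nonzero (sub)regulator

# Conformal type: the point tau in the upper half plane attached to the
# basis (v1, v2), reduced to the standard fundamental domain of SL(2,Z).
tau = complex(G[0, 1] / G[0, 0], np.sqrt(np.linalg.det(G)) / G[0, 0])
for _ in range(100):
    tau -= round(tau.real)             # translate: |Re tau| <= 1/2
    if abs(tau) >= 1:
        break
    tau = -1 / tau                     # invert
# Because the three conjugate log vectors have equal length and sum to
# zero, this sublattice is hexagonal: tau lands at a corner point with
# |tau| = 1 and |Re tau| = 1/2 -- consistent with the remark that Galois
# fields may give special points.
assert abs(abs(tau) - 1.0) < 1e-6 and abs(abs(tau.real) - 0.5) < 1e-6
```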
TITLE: Show $\sum_{\mbox{cyc}}\frac{3a^2+ac-c^2}{a^3+c^3}(a-c)^2 \geq \sum_{\mbox{cyc}}\frac{3a^2+ab-b^2}{a^3+b^3}(a-b)^2 \geq 0$ QUESTION [0 upvotes]: For positive $a,b,c$ with $a\geq b \geq c$, show $$\sum_{\mbox{cyc}}\frac{3a^2+ac-c^2}{a^3+c^3}(a-c)^2 \geq \sum_{\mbox{cyc}}\frac{3a^2+ab-b^2}{a^3+b^3}(a-b)^2 \geq 0$$ Note that the two cyclic sums are not equivalent (even though they look to be identical with $b \leftrightarrow c$) because we have the convention that $$ \sum_{\mbox{cyc}} f(a,b,c) \equiv f(a,b,c) + f(b,c,a) + f(c,a,b)$$ which is not, in this case, the same as $f(a,c,b) + f(c,b,a) + f(b,a,c)$. Obviously, this is equivalent to saying that if $c\geq b \geq a >0$, then $$\sum_{\mbox{cyc}}\frac{3a^2+ab-b^2}{a^3+b^3}(a-b)^2 \geq \sum_{\mbox{cyc}}\frac{3a^2+ac-c^2}{a^3+c^3}(a-c)^2 \geq 0$$ The $\sum_{\mbox{cyc}}\frac{3a^2+ac-c^2}{a^3+c^3}(a-c)^2 \geq 0$ part of this problem cropped up in the context of proving a slightly tougher inequality, and an unclear or invalid proof is offered on https://artofproblemsolving.com/community/c6h22937p427220 (at least, I don't see how the proof is valid). I can prove either of the $\geq 0$ statements by letting $a = b+x = c+x+y$ with non-negative $x$ and $y$, writing out the sum, and clearing the denominators: the resulting degree $10$ multinomial ends up with no negative coefficients. 
I'm looking for a cleaner proof, and a proof that the ordering of the two sums is tied to whether the ordering of $a,b,c$ is an odd or even permutation of $a\geq b\geq c$. REPLY [1 votes]: We need to prove that $$\sum_{cyc}\frac{(3b^2+ab-a^2)(a-b)^2}{a^3+b^3}\geq\sum_{cyc}\frac{(3a^2+ab-b^2)(a-b)^2}{a^3+b^3}$$ or $$\sum_{cyc}\frac{(4b^2-4a^2)(a-b)^2}{a^3+b^3}\geq0$$ or $$\sum_{cyc}\frac{(a-b)^3}{a^2-ab+b^2}\leq0$$ or $$\sum_{cyc}\frac{(a-b)(a^2-ab+b^2-ab)}{a^2-ab+b^2}\leq0$$ or, using $\sum_{cyc}(a-b)=0$, $$\sum_{cyc}\frac{ab(a-b)}{a^2-ab+b^2}\geq0$$ or, after clearing the positive denominators, $$\sum_{cyc}(a^4b^3-a^4c^3-a^4b^2c+a^4c^2b)\geq0$$ or $$(a-b)(a-c)(b-c)(a^2b^2+a^2c^2+b^2c^2)\geq0,$$ which is true for $a\geq b\geq c$.
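A quick numerical sanity check of the claimed chain and of the final factorization, using exact rational arithmetic (the helper names below are ours, not from the problem):

```python
from fractions import Fraction as F

def g(x, y):
    # One summand: (3x^2 + xy - y^2)(x - y)^2 / (x^3 + y^3)
    return F(3*x*x + x*y - y*y, x**3 + y**3) * (x - y) ** 2

def S_ac(a, b, c):
    # sum_cyc (3a^2+ac-c^2)(a-c)^2/(a^3+c^3), i.e. f(a,b,c)+f(b,c,a)+f(c,a,b)
    return g(a, c) + g(b, a) + g(c, b)

def S_ab(a, b, c):
    # sum_cyc (3a^2+ab-b^2)(a-b)^2/(a^3+b^3)
    return g(a, b) + g(b, c) + g(c, a)

def D(x, y):
    # the denominator x^2 - xy + y^2, positive for positive x, y
    return x*x - x*y + y*y

for (a, b, c) in [(3, 2, 1), (5, 2, 1), (4, 4, 1), (7, 3, 2)]:
    # the claimed chain for a >= b >= c > 0
    assert S_ac(a, b, c) >= S_ab(a, b, c) >= 0
    # the cleared-denominator identity behind the last step:
    # sum_cyc ab(a-b) D(b,c) D(c,a) = (a-b)(a-c)(b-c)(a^2b^2 + a^2c^2 + b^2c^2)
    lhs = (a*b*(a - b)*D(b, c)*D(c, a)
           + b*c*(b - c)*D(c, a)*D(a, b)
           + c*a*(c - a)*D(a, b)*D(b, c))
    rhs = (a - b)*(a - c)*(b - c)*(a*a*b*b + a*a*c*c + b*b*c*c)
    assert lhs == rhs
```

This is only a spot check on a few ordered triples, of course, not a proof; it is useful for catching sign errors when reproducing the reduction above.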
Legal Question in Real Estate Law in India Consent Witness. Dear Sir/Madam: Subject: Consent Witness. There is a property which is ancestral. Father is not there anymore. Legal heirs are myself, mother, brother and sister. Mother and brother want sister to sign as a consenting witness and not as a vendor, as they have spoken to her about giving her money at a later date. But the purchaser is saying that the sister signing as a consenting witness is not possible, because tomorrow she may create some problem. Is it possible to create some problem after signing as a consenting witness? And, if so, what is the solution? 2 Answers from Attorneys Re: Consent Witness. What is meant by consenting witness? There is no such terminology used under the T.P. Act or the Registration Act. Re: Consent Witness. There is no such thing in law as a "consenting witness". The purchaser is right. She may need to sign as one of the vendors; better to settle her share at this stage and make her also a vendor.
Catholic Schools of Broome County Wins Rep. Brindisi's (NY-22) 2019 Congressional App Challenge Rep. Anthony Brindisi has named a Binghamton student as the winner of the Congressional App Challenge in New York's 22nd district. Seton Catholic Central's Luke Redmore submitted Catholic Schools of Broome County, an app allowing students and families to view schedules, menus, grades, and much more, along with the ability to receive important alerts from principals. When asked why he was passionate about creating the app, the student replied, “Most school districts in my area now have their own app, and I felt that we were missing out on the ease that they bring to families. The project started just as an idea and personal challenge for myself. Once I realized its potential, I ran away with it and created something that I'm proud to say my peers greatly appreciate.”
\begin{document} \title{Duality for Topological Modular Forms} \author{Vesna Stojanoska} \address{ Vesna Stojanoska, Massachusetts Institute of Technology, Cambridge MA 02139} \email{vstojanoska@math.mit.edu} \begin{abstract} It has been observed that certain localizations of the spectrum of topological modular forms are self-dual (Mahowald-Rezk, Gross-Hopkins). We provide an integral explanation of these results that is internal to the geometry of the (compactified) moduli stack of elliptic curves $ \M $, yet is only true in the derived setting. When $ 2 $ is inverted, a choice of level $ 2 $ structure for an elliptic curve provides a geometrically well-behaved cover of $ \M $, which allows one to consider $ Tmf $ as the homotopy fixed points of $ Tmf(2) $, topological modular forms with level $ 2 $ structure, under a natural action by $ GL_2(\Z/2) $. As a result of Grothendieck-Serre duality, we obtain that $ Tmf(2) $ is self-dual. The vanishing of the associated Tate spectrum then makes $ Tmf $ itself Anderson self-dual. \end{abstract} \maketitle \section{Introduction} There are several notions of duality in homotopy theory, and a family of such originates with Brown and Comenetz. In \cite{\BrownComenetz}, they introduced the spectrum $ \IQZ $ which represents the cohomology theory that assigns to a spectrum $ X $ the Pontryagin duals $ \Hom_{\Z}(\pi_{-*}X,\Q/\Z) $ of its homotopy groups. Brown-Comenetz duality is difficult to tackle directly, so Mahowald and Rezk \cite{\MahowaldRezk} studied a tower approximating it, such that at each stage the self-dual objects turn out to have interesting chromatic properties. In particular, they show that self-duality is detected on cohomology as a module over the Steenrod algebra. Consequently, a version of the spectrum $ tmf $ of topological modular forms possesses self-dual properties. The question thus arises whether this self-duality of $ tmf $ is already inherent in its geometric construction. 
Indeed, the advent of derived algebraic geometry not only allows for the construction of an object such as $ tmf $, but also for bringing in geometric notions of duality to homotopy theory, most notably Grothendieck-Serre duality. However, it is rarely possible to identify the abstractly constructed (derived) dualizing sheaves with a concrete and computable object. This stands in contrast to ordinary algebraic geometry, where a few smallness assumptions guarantee that the sheaf of differentials is a dualizing sheaf (eg. \cite[III.7]{\HartshorneAG}). Nevertheless, the case of the moduli stack of elliptic curves\footnote{The moduli stack of elliptic curves is sometimes also denoted $ \M_{ell} $ or $ \M_{1,1} $; we chose the notation from \cite{\DeligneRapoport}.} $ \M^0 $ and topological modular forms allows for a hands-on approach to Serre duality in a derived setting. Even if the underlying ordinary stack does not admit a traditional Serre duality pairing, the derived version of $ \M^0 $ is considerably better behaved. The purpose of this paper is to show how the input from homotopy theory brings duality back into the picture. Conversely, it provides an integral interpretation of the aforementioned self-duality for $ tmf $ that is inherent in the geometry of the moduli stack of elliptic curves. The duality that naturally occurs from the geometric viewpoint is not quite that of Brown and Comenetz, but an integral version thereof, called Anderson duality and denoted $ \IZ $ \cite{A1,\HopkinsSinger}. After localization with respect to Morava $ K $-theory $ K(n) $ for $ n>0 $, however, Anderson duality and Brown-Comenetz duality only differ by a shift. Elliptic curves have come into homotopy theory because they give rise to interesting one-parameter formal groups of heights one or two. 
The homotopical version of these is the notion of an elliptic spectrum: an even periodic spectrum $ E $, together with an elliptic curve $ C $ over $ \pi_0 E $, and an isomorphism between the formal group of $ E $ and the completion of $ C $ at the identity section. \'Etale maps $ \Spec \pi_0 E\to \M^0 $ give rise to such spectra; more strongly, as a consequence of the Goerss-Hopkins-Miller theorem, the assignment of an elliptic spectrum to an \'etale map to $ \M^0 $ gives an \'etale sheaf of $ E_\infty $-ring spectra on the moduli stack of elliptic curves. Better still, the compactification of $ \M^0 $, which we will henceforth denote by $ \M $, admits such a sheaf, denoted $ \Sh{O}^{top} $, whose underlying ordinary stack is the usual stack of generalized elliptic curves \cite{\ConstructTMF}. The derived global sections of $ \Sh{O}^{top} $ are called $ Tmf $, the spectrum of topological modular forms. This is the non-connective, non-periodic version of topological modular forms.\footnote{We chose this notation to distinguish this version of topological modular forms from the connective $ tmf $ and the periodic $ TMF $.} The main result proved in this paper is the following theorem: \begin{thmM} The Anderson dual of $ Tmf[1/2] $ is $ \Sigma^{21}Tmf[1/2] $. \end{thmM} The proof is geometric in the sense that it uses Serre duality on a cover of $ \M $ as well as descent, manifested in the vanishing of a certain Tate spectrum. \subsection*{Acknowledgments} This paper is part of my doctoral dissertation at Northwestern University, supervised by Paul Goerss. His constant support and guidance throughout my doctoral studies and beyond have been immensely helpful at every stage of this work. I am indebted to Mark Behrens, Mark Mahowald, and Charles Rezk for very helpful conversations, suggestions, and inspiration.
I am also grateful to the referee who has made extraordinarily thorough suggestions to improve an earlier version of this paper, and to Mark Behrens and Paul Goerss again for their patience and help with the material in Section \ref{sec:dermoduli}. Many thanks to Tilman Bauer for creating the sseq latex package, which I used to draw the spectral sequences in Figures \ref{Fig:tmf2} through \ref{fig:htpyorbit}. \section{Dualities}\label{sec:dualities} We begin by recalling the definitions and properties of Brown-Comenetz and Anderson duality. In \cite{\BrownComenetz}, Brown and Comenetz studied the functor on spectra \[ X\mapsto I_{\Q/\Z}^*(X)=\Hom_{\Z}(\pi_{-*}X,\Q/\Z). \] Because $ \Q/\Z $ is an injective $ \Z $-module, this defines a cohomology theory, and is therefore represented by some spectrum; denote it $ \IQZ $. We shall abuse notation, and write $ \IQZ $ also for the functor \[ X\mapsto F(X,\IQZ), \] which we think of as ``dualizing" with respect to $ \IQZ $. And indeed, if $ X $ is a spectrum whose homotopy groups are finite, then the natural ``double-duality" map $ X\to \IQZ \IQZ X $ is an equivalence. In a similar fashion, one can define $ \IQ $\footnote{In fact, $ \IQ $ is the Eilenberg-MacLane spectrum $ H\Q $.} to be the spectrum corepresenting the functor \[ X\mapsto \Hom_{\Z}(\pi_{-*}X,\Q). \] The quotient map $ \Q\to\Q/\Z $ gives rise to a natural map $ \IQ\to\IQZ $; in accordance with the algebraic situation, we denote by $ \IZ $ the fiber of the latter map. As I have learned from Mark Behrens, the functor corepresented by $ \IZ $ has been introduced by Anderson in \cite{A1}, and further employed by Hopkins and Singer in \cite{\HopkinsSinger}. For $ R $ any of $ \Z $, $ \Q $, or $ \Q/\Z $, denote also by $ I_R $ the functor on spectra $ X\mapsto F(X,I_R) $. When $ R $ is an injective $ \Z $-module, we have that $ \pi_*I_RX=\Hom_{\Z}(\pi_{-*}X,R) $. 
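For example, if the homotopy groups of $ X $ are finitely generated, say $ \pi_{-t}X\cong \Z^{r}\oplus T $ with $ T $ finite, then these formulas give
\[ \pi_t\IQ X\cong \Q^{r} \qquad \text{and} \qquad \pi_t\IQZ X\cong (\Q/\Z)^{r}\oplus \Hom_{\Z}(T,\Q/\Z), \]
and the short exact sequences below show that $ \pi_t\IZ X $ is abstractly isomorphic to $ \Z^{r}\oplus T' $, where $ T' $ denotes the torsion subgroup of $ \pi_{-t-1}X $.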
Now for any spectrum $ X $, we have a fiber sequence \begin{align}\label{IZresolution} \IZ X\to\IQ X\xrightarrow{\varphi} \IQZ X, \end{align} giving a long exact sequence of homotopy groups \[ \xymatrix@C=-5pt{ \cdots && \pi_{t+1}\IQZ X \ar[rr] \ar[rd] && \pi_t \IZ X \ar[rr]\ar[rd] && \pi_t\IQ X &&\cdots\\ &&& \Coker \pi_{t+1}\varphi \ar[ur] && \Ker \pi_t \varphi\ar[ur]\\ }\] But the kernel and cokernel of $ \pi_*\varphi $ compute the derived $ \Z $-duals of the homotopy groups of $ X $, so we obtain short exact sequences \[ 0\to \Ext^1_{\Z}(\pi_{-t-1}X,\Z)\to \pi_t\IZ X \to \Hom_{\Z}(\pi_{-t}X,\Z)\to 0.\] We can think of these as organized in a two-line spectral sequence: \begin{align}\label{ss:AndersonDual} \Ext^s_{\Z}(\pi_{t}X,\Z)\Rightarrow \pi_{-t-s}\IZ X. \end{align} \begin{rem}\label{rem:Anderson} In this project we will use Anderson duality in the localization of the category of spectra where we invert some integer $ n $ (we will mostly be interested in the case $ n=2 $). The correct way to proceed is to replace $ \Z $ by $ \Z[1/n] $ everywhere. In particular, the homotopy groups of the Anderson dual of $ X $ will be computed by $ \Ext^s_{\Z[1/n]}(\pi_{t}X,\Z[1/n]) $. \end{rem} \subsection{Relation to $ K(n) $-local and Mahowald-Rezk Duality} This section is a digression meant to suggest the relevance of Anderson duality by relating it to $ K(n) $-local Brown-Comenetz duality as well as Mahowald-Rezk duality. We recall the definition of the Brown-Comenetz duality functor in the $ K(n) $-local category from \cite[Ch. 10]{\HoveyStrickland}. Fix a prime $ p $, and for any $ n\geq 0$, let $ K(n) $ denote the Morava $ K $-theory spectrum, and let $ L_n $ denote the Bousfield localization functor with respect to the Johnson-Wilson spectrum $ E(n) $.
There is a natural transformation $ L_n\to L_{n-1} $, whose homotopy fiber is called the monochromatic layer functor $ M_n $, and a homotopy pull-back square \begin{align}\label{fracturesquare} \xymatrix{ L_n \ar[r]\ar[d] & L_{K(n)}\ar[d] \\ L_{n-1} \ar[r] & L_{n-1}L_{K(n)}, } \end{align} implying that $ M_n $ is also the fiber of $ L_{K(n)} \to L_{n-1}L_{K(n)}$. The $ K(n) $-local Brown-Comenetz dual of $ X $ is defined to be $ I_nX=F(M_nX,\IQZ) $, the Brown-Comenetz dual of the $ n $-th monochromatic layer of $ X $. By \eqref{fracturesquare}, $ I_nX $ only depends on the $ K(n) $-localization of $ X $ (since $ M_nX $ only depends on $ L_{K(n)}X $), and by the first part of the proof of Proposition \ref{prop:local} below, $ I_nX $ is $ K(n) $-local. Therefore we can view $ I_n $ as an endofunctor of the $ K(n) $-local category. Note that after localization with respect to the Morava $ K $-theories $ K(n) $ for $ n\geq 1 $, $ \IQ X $ becomes contractible since it is rational. The fiber sequence \eqref{IZresolution} then gives that $ L_{K(n)}\IQZ X= L_{K(n)}\Sigma\IZ X $. By Proposition \ref{prop:local} below, we have that $ I_nX $ is the $ K(n) $-localization of the Brown-Comenetz dual of $ L_nX $. In particular, if $ X $ is already $ E(n) $-local, then \begin{align}\label{AndersonGrossHopkins} I_nX= L_{K(n)}\Sigma\IZ X. \end{align} In order to define the Mahowald-Rezk duality functor \cite{\MahowaldRezk} we need some preliminary definitions. Let $ T_i $ be a sequence of finite spectra of type $ i $ and let $ \Tel(i) $ be the mapping telescope of some $ v_i $ self-map of $ T_i $. The finite localization $ L_n^f $ of $ X $ is the Bousfield localization with respect to the wedge $ \Tel(0)\vee\cdots \vee \Tel(n) $, and $ C_n^f $ is the fiber of the localization map $ X\to L_n^fX $. A spectrum $ X $ satisfies the $ E(n) $-telescope conjecture \cite[10.5]{\RavenelLocalization} if and only if the natural map $ L_n^fX\to L_n X $ is an equivalence (eg. 
\cite{\MahowaldRezk}). Let $ X $ be a spectrum whose $ \F_p $-cohomology is finitely presented over the Steenrod algebra; the Mahowald-Rezk dual $ W_n $ is defined to be the Brown-Comenetz dual $ \IQZ C_n^f $. Suppose now that $X$ is an $ E(n) $-local spectrum which satisfies the $ E(n) $ and $ E(n-1) $-telescope conjectures. This condition is satisfied by the spectra of topological modular forms \cite[2.4.7]{\Behrens} with which we are concerned in this work. Then the monochromatic layer $ M_nX $ is the fiber of the natural map $ L_n^fX\to L_{n-1}^fX $. Taking the Brown-Comenetz dual of the first column in the diagram of fiber sequences \[ \xymatrix{ \Sigma^{-1}M_nX \ar[r]\ar[d] &\star \ar[r]\ar[d] & M_n X\ar[d]\\ C_n^fX\ar[r]\ar[d] & X \ar[r]\ar@{=}[d] & L_n^f X\ar[d]\\ C_{n-1}^fX\ar[r] & X\ar[r] &L^f_{n-1}X } \] implies the fiber sequence \cite[2.4.4]{\Behrens} \begin{align*} W_{n-1}X\to W_n X\to L_{K(n)}\IZ X, \end{align*} relating Mahowald-Rezk duality to $ K(n) $-local Anderson duality. We have used the following result. \begin{proposition}\label{prop:local} For any $ X $ and $ Y $, the natural map $ F(L_nX,Y)\to F(M_nX,Y) $ is $ K(n) $-localization. \end{proposition} \begin{proof} First of all, we need to show that $ F(M_nX,Y) $ is $ K(n) $-local. This is equivalent to the condition that for any $ K(n) $-acyclic spectrum $ Z $, the function spectrum $ F(Z,F(M_nX,Y))=F(Z\wedge M_nX, Y) $ is contractible. But the functor $ M_n $ is smashing, and is also the fiber of $ L_{K(n)}\to L_{ n-1}L_{K(n)}$, by the homotopy pull-back square \eqref{fracturesquare}. Therefore, for $ K(n) $-acyclic $ Z $, $ M_nZ $ is contractible, and the claim follows. It remains to show that the fiber $ F(L_{n-1}X,Y) $ is $ K(n) $-acyclic. To do this, smash with a generalized Moore spectrum $ T_n $. This is a finite, Spanier-Whitehead self-dual (up to a suspension shift) spectrum of type $ n $, which is therefore $ E(n-1) $-acyclic. 
A construction of such a spectrum can be found in Section 4 of \cite{\HoveyStrickland}. We have (up to a suitable suspension) \begin{align*} F(L_{n-1}X,Y)\wedge T_n&=F(L_{n-1}X,Y)\wedge DT_n=F\bigl(T_n,F(L_{n-1}X,Y)\bigr)\\ &=F\bigl((L_{n-1}T_n)\wedge X,Y\bigr)=*, \end{align*} implying that $ F(L_{n-1}X,Y) $ is $ K(n) $-acyclic, which proves the proposition. \end{proof} \section{Derived Stacks} In this section we briefly recall the notion of derived stack which will be useful to us; a good general reference for the classical theory of stacks is \cite{\Champs}. Roughly speaking, stacks arise as solutions to moduli problems by allowing points to have nontrivial automorphisms. The classical viewpoint is that a stack is a category fibered in groupoids. One can also equivalently view a stack as a presheaf of groupoids \cite{\Hollander}. Deligne-Mumford stacks are particularly well-behaved, because they can be studied by maps from schemes in the following sense. If $ \Sh{X} $ is a Deligne-Mumford stack, then $ \Sh{X} $ has an \'etale cover by a scheme, and any map from a scheme $ S $ to $ \Sh{X} $ is representable. (A map of stacks $ f:\Sh{X}\to \Sh{Y} $ is said to be representable if for any scheme $ S $ over $ \Sh{Y} $, the ($2$-categorical) pullback $ \Sh{X}\cross{\Sh{Y}}S $ is again a scheme.) Derived stacks are obtained by allowing sheaves valued in a category which has homotopy theory, for example differential graded algebras, simplicial rings, or commutative ring spectra. To be able to make sense of the latter, one needs a nice model for the category of spectra and its smash product, with a good notion of commutative rings and modules over those. Much has been written recently about derived algebraic geometry: work of Lurie on the one hand \cite{\Algebra,\Topos,\DAG} and To\"en-Vezzosi on the other \cite{\HAGI,\HAGII}.
For this project we only consider sheaves of $ E_\infty $-rings on ordinary sites, and consequently we avoid the need to work with infinity categories. Derived Deligne-Mumford stacks will be defined as follows. \begin{definition} A \emph{derived Deligne-Mumford stack} is a pair $ (\Sh{X},\SSh{\Sh{X}}^{top}) $ consisting of a topological space (or a Grothendieck topos) $ \Sh{X} $ and a sheaf $ \SSh{\Sh{X}}^{top} $ of $ E_\infty $-ring spectra on its small \'etale site such that \begin{enumerate} \item[(a)] the pair $ (\Sh{X},\pi_0\SSh{\Sh{X}}^{top}) $ is an ordinary Deligne-Mumford stack, and \item[(b)] for every $ k $, $ \pi_k\SSh{\Sh{X}}^{top} $ is a quasi-coherent $ \pi_0\SSh{\Sh{X}}^{top} $-module. \end{enumerate} \end{definition} Here and elsewhere in this paper, by $ \pi_*\Sh{F} $ of a sheaf of $ E_\infty $-rings we will mean the sheafification of the presheaf $ U\mapsto \pi_*\Sh{F}(U) $. Next we discuss sheaves of $E_{\infty}$-ring spectra. Let $ \Cat{C} $ be a small Grothendieck site. By a \emph{presheaf} of $ E_\infty $-rings on $ \Cat{C} $ we shall mean simply a functor $ \Sh{F}:\Cat{C}^{op}\to \Rings $. The default example of a site $\Cat{C} $ will be the small \'etale site $ \Sh{X}_{\acute{e}t} $ of a stack $ \Sh{X} $ \cite[Ch. 12]{\Champs}. A presheaf $\Sh{F}$ of $ E_\infty $-rings on $\Cat{C}$ is said to satisfy {\em hyperdescent} or that it is a {\em sheaf} provided that the natural map $ \Sh{F}(X)\to\holim \Sh{F}(U_{\bullet}) $ is a weak equivalence for every hypercover $U_{\bullet}\to X$. Hyperdescent is closely related to fibrancy in Jardine's model category structure \cite{\JardineSimplicial,\JardineSpectra}. Specifically if $F\to QF$ is a fibrant replacement in Jardine's model structure, then \cite{\DHI} shows that $F$ satisfies hyperdescent if and only if $F(U)\to QF(U)$ is a weak equivalence for all $U$. 
When the site $\Cat{C}$ has enough points, one may use Godement resolutions in order to ``sheafify" a presheaf \cite[Section 3]{\JardineSimplicial}. In particular, since $ \Sh{X}_{\acute{e}t} $ has enough points we may form the Godement resolution $\Sh{F}\to G\Sh{F}$. The global sections of $ G\Sh{F} $ are called the derived global sections of $ \Sh{F} $, and Jardine's construction also gives a spectral sequence to compute the homotopy groups \begin{align}\label{ss:jardine} H^s(\Sh{X},\pi_t\Sh{F})\Rightarrow \pi_{t+s}R\Gamma\Sh{F}. \end{align} \section{Moduli of Elliptic Curves and Level Structures}\label{sec:moduli} In this section, we summarize the results of interest regarding the moduli stacks of elliptic curves and level structures. Standard references for the ordinary geometry are Deligne-Rapoport \cite{\DeligneRapoport}, Katz-Mazur \cite{\KatzMazur}, and Silverman \cite{\Silverman}. A \emph{curve} over a base scheme (or stack) $ S $ is a proper, flat morphism $ p: C\to S $, of finite presentation and relative dimension one. An \emph{elliptic curve} is a curve $ p:C\to S $ of genus one whose geometric fibers are connected, smooth, and proper, equipped with a section $ e:S\to C $ or equivalently, equipped with a commutative group structure \cite[2.1.1]{\KatzMazur}. These objects are classified by the moduli stack of elliptic curves $ \M^0 $. The $ j $-invariant of an elliptic curve gives a map $ j:\M^0 \to \A^1$ which identifies $\A^1 $ with the coarse moduli scheme \cite[8.2]{\KatzMazur}. Thus $ \M^0 $ is not proper, while properness is a property we need in order to have Grothendieck-Serre duality. Hence, we shall work with the compactification $ \M $ of $ \M^0 $, which has a modular description in terms of generalized elliptic curves: it classifies proper curves of genus one, whose geometric fibers are connected and allowed to have an isolated nodal singularity away from the point marked by the specified section $ e $ \cite[II.1.12]{\DeligneRapoport}.
If $ C $ is smooth, then the multiplication by $ n $ map $ [n]:C\to C $ is a finite flat map of degree $ n^2 $ \cite[II.1.18]{\DeligneRapoport}, whose kernel we denote by $ C[n] $. If $ n $ is invertible in the ground scheme $ S $, the kernel $ C[n] $ is finite \'etale over $ S $, and \'etale locally isomorphic to $ (\Z/n)^2 $. A level $ n $ structure is then a choice of an isomorphism $ \phi:(\Z/n)^2\to C[n] $ \cite[IV.2.3]{\DeligneRapoport}. We denote the moduli stack (implicitly understood as an object over $ \Spec\Z[1/n] $) which classifies smooth elliptic curves equipped with a level $ n $ structure by $ \Ml{n}^0 $. In order to give a modular description of the compactification $ \Ml{n} $ of $ \Ml{n}^0 $, we need to talk about so-called N\'eron polygons. We recall the definitions from \cite[II.1]{\DeligneRapoport}. A \emph{(N\'eron) $ n $-gon} is a curve over an algebraically closed field isomorphic to the standard $ n $-gon, which is the quotient of $ \P^1\times \Z/n$ obtained by identifying the infinity section of the $ i $-th copy of $ \P^1 $ with the zero section of the $ (i+1) $-st. A curve $ p:C \to S $ is a \emph{stable curve of genus one} if its geometric fibers are either smooth, proper, and connected, or N\'eron polygons. A \emph{generalized elliptic curve} is a stable curve of genus one, equipped with a morphism $ +: C^{sm}\cross{S} C\to C$ which \begin{enumerate} \item[(a)] restricts to the smooth locus $ C^{sm} $ of $ C $, making it into a commutative group scheme, and \item[(b)] gives an action of the group scheme $ C^{sm} $ on $ C $, which on the singular geometric fibers of $ C $ is given as a rotation of the irreducible components.
\end{enumerate} Given a generalized elliptic curve $p:C\to S $, there is a locally finite family of disjoint closed subschemes $ \left(S_n\right)_{n\geq 1} $ of $ S $, such that $ C $ restricted to $ S_n $ is isomorphic to the standard $ n $-gon, and the union of all $ S_n $'s is the image of the singular locus $ C^{sing} $ of $ C $ \cite[II.1.15]{\DeligneRapoport}. The morphism $ [n]:C^{sm}\to C^{sm} $ is again finite and flat, and if $ C $ is an $ m $-gon, the kernel $ C[n] $ is \'etale locally isomorphic to $ (\mu_n\times \Z/(n,m)) $ \cite[II.1.18]{\DeligneRapoport}. In particular, if $ C $ is a generalized elliptic curve whose geometric fibers are either smooth or $ n $-gons, then the scheme of $ n $-torsion points $ C[n] $ is \'etale locally isomorphic to $ (\Z/n)^2 $. These curves give a modular interpretation of the compactification $ \Ml{n} $ of $ \Ml{n}^0 $. The moduli stack $ \Ml{n} $ over $ \Z[1/n] $ classifies generalized elliptic curves with geometric fibers that are either smooth or $ n $-gons, equipped with a level $ n $ structure, i.e. an isomorphism $ \varphi:(\Z/n)^2\to C[n] $ \cite[IV.2.3]{\DeligneRapoport}. Note that $ \Gl $ acts on $ \Ml{n} $ on the right by pre-composing with the level structure, that is, $g: (C,\varphi) \mapsto (C,\varphi\circ g)$. If $ C $ is smooth, this action is free and transitive on the set of level $ n $ structures. If $ C $ is an $ n $-gon, then the stabilizer of a given level $n $ structure is a subgroup of $ \Gl $ conjugate to the upper triangular matrices $U=\left\{\begin{pmatrix} 1& *\\0 & 1 \end{pmatrix}\right\}$. The forgetful map $ \Ml{n}^0\to \M^0[1/n] $ extends to the compactifications, where it is given by forgetting the level structure and contracting the irreducible components that do not meet the identity section \cite[IV.1]{\DeligneRapoport}. The resulting map $ q:\Ml{n}\to \M[1/n] $ is a finite flat cover of $ \M[1/n] $ of degree $ |GL_2(\Z/n)| $.
Moreover, the restriction of $ q $ to the locus of smooth curves is an \'etale $ \Gl $-torsor, and over the locus of singular curves, $ q $ is ramified of degree $ n $ \cite[A1.5]{\Katz}. In fact, at the ``cusps" the map $ q $ is given as \[ q:\Ml{n}^{\infty}\cong\Hom_{U}(\Gl,\M^{\infty})\to \M^{\infty}.\] Note that a level $ 1 $ structure on a generalized elliptic curve $ C/S $ is nothing but the specified identity section $ e: S\to C $. Thus we can think of $ \M $ as $ \Ml{1} $. The objects $ \Ml{n} $, $ n\geq 1 $, come equipped with ($\Z$-)graded sheaves, the tensor powers $ \om{\Ml{n}}{*} $ of the sheaf of invariant differentials \cite[I.2]{\DeligneRapoport}. Given a generalized elliptic curve $ p:C\rightleftarrows S: e $, let $ \Sh{I} $ be the ideal sheaf of the closed embedding $ e $. The fact that $ C $ is nonsingular at $ e $ implies that the map $ p_*(\Sh{I}/\Sh{I}^2) \to p_*\omega_{C/S}$ is an isomorphism \cite[II.8.12]{\HartshorneAG}. Denote this sheaf on $ S $ by $ \omega_C $. It is locally free of rank one, because $ C $ is a curve over $ S $, with a potential singularity away from $ e $. The sheaf of invariant differentials on $ \Ml{n} $ is then defined by \[ \omega_{\Ml{n}}(S\xrightarrow{C}\Ml{n})=\omega_C. \] It is a quasi-coherent sheaf which is an invertible line bundle on $ \Ml{n} $. The ring of modular forms with level $ n $ structures is defined to be the graded ring of global sections \[ MF(n)_*=H^0(\Ml{n},\om{\Ml{n}}{*}), \] where, as usual, we denote $ MF(1)_* $ simply by $ MF_* $. \section{Topological Modular Forms and Level Structures}\label{sec:dermoduli} By the obstruction theory of Goerss-Hopkins-Miller, as well as work of Lurie, the moduli stack $\M$ has been upgraded to a derived Deligne-Mumford stack, in such a way that the underlying ordinary geometry is respected. Namely, a proof of the following theorem can be found in Mark Behrens' notes \cite{\ConstructTMF}.
\begin{thm}[Goerss-Hopkins-Miller, Lurie]\label{ConstructTMF} The moduli stack $ \M $ admits a sheaf of $ E_\infty $-rings $ \Sh{O}^{top} $ on its \'etale site which makes it into a derived Deligne-Mumford stack. For an \'etale map $ \Spec R\to \M $ classifying a generalized elliptic curve $ C/R $, the sections $ \Sh{O}^{top}(\Spec R) $ form an even weakly periodic ring spectrum $ E $ such that \begin{enumerate} \item[(a)] $ \pi_0E = R $, and \item[(b)] the formal group $ G_E $ associated to $ E $ is isomorphic to the completion $ \hat{C} $ at the identity section. \end{enumerate} Moreover, there are isomorphisms of quasi-coherent $\pi_0 \Sh{O}^{top} $-modules $ \pi_{2k}\Sh{O}^{top} \cong \om{\M}{k}$ and $ \pi_{2k+1}\Sh{O}^{top}\cong 0 $ for all integers $ k $. \end{thm} The spectrum of topological modular forms $ Tmf $ is defined to be the $ E_\infty $-ring spectrum of global sections $ R\Gamma(\Sh{O}^{top}) $. We remark that inverting $ 6 $ kills all torsion in the cohomology of $ \M $ as well as the homotopy of $ Tmf $. In that case, following the approach of this paper would fairly formally imply Anderson self-duality for $ Tmf[1/6] $ from the fact that $ \M $ has Grothendieck-Serre duality. To understand integral duality on the derived moduli stack of generalized elliptic curves, we would like to use the strategy of descent, dealing separately with the $ 2 $ and $ 3 $-torsion. The case when $ 2 $ is invertible captures the $ 3 $-torsion, is more tractable, and is thoroughly worked out in this paper by using the smallest good cover by level structures, $ \Ml{2} $. The $ 2 $-torsion phenomena involve computations that are more daunting and will be dealt with subsequently. However, the same methodology works to give the required self-duality result. To begin, we need to lift $ \Ml{2} $ and the covering map $ q:\Ml{2}\to\M $ to the setting of derived Deligne-Mumford stacks. 
We point out that this is not immediate from Theorem \ref{ConstructTMF} because the map $ q $ is not \'etale. However, we will explain how one can amend the construction to obtain a sheaf of $ E_\infty $-rings $ \Sh{O}(2)^{top} $ on $ \Ml{2} $. We will also incorporate the $ GL_2(\Z/2) $-action, which is crucial for our result. We will in fact sketch an argument based on Mark Behrens' \cite{\ConstructTMF} to construct sheaves $ \Sh{O}(n)^{top} $ on $ \Ml{n}[1/2n] $ for any $ n $, as the extra generality does not complicate the solution. As we remarked earlier, the restriction of $q $ to the smooth locus $\Ml{n}^{0}$ is an \'etale $\Gl$-torsor, hence we automatically obtain $\Sh{O}(n)|_{\Ml{n}^{0}}$ together with its $\Gl$-action. We will use the Tate curve and $ K(1) $-local obstruction theory to construct the $ E_\infty $-ring of sections of the putative $ \Sh{O}(n)^{top} $ in a neighborhood of the cusps, and sketch a proof of the following theorem. \begin{thm}[Goerss-Hopkins] The moduli stack $ \Ml{n} $ (as an object over $ \Z[1/2n] $) admits a sheaf of even weakly periodic $ E_\infty $-rings $ \Sh{O}(n)^{top} $ on its \'etale site which makes it into a derived Deligne-Mumford stack. There are isomorphisms of quasi-coherent $ \pi_0\Sh{O}(n)^{top} $-modules $ \pi_{2k}\Sh{O}(n)^{top}\cong \om{\Ml{n}}{k} $ and $ \pi_{2k-1}\Sh{O}(n)^{top}\cong 0 $ for all integers $ k $. Moreover, the covering map $ q:\Ml{n}\to \M[1/2n] $ is a map of derived Deligne-Mumford stacks. \end{thm} \subsection{Equivariant $ K(1) $-local Obstruction Theory} This is a combination of Goerss-Hopkins' \cite{\GoerssHopkins, \GH} and Cooke's \cite{\Cooke} obstruction theories, and is in fact contained, although not explicitly stated, in \cite{\GoerssHopkins,\GH}. Let $ G $ be a finite group; a $ G $-equivariant $ \theta $-algebra is an algebraic model for the $ p $-adic $ K $-theory of an $ E_\infty $-ring spectrum with an action of $ G $ by $ E_\infty $-ring maps.
Here, $ G $-equivariance means that the action of $ G $ commutes with the $ \theta $-algebra operations. As $ G $-objects are $ G $-diagrams, the obstruction theory framework of \cite{\GH} applies. Let $ H^s_{G-\theta}(A,B[t]) $ denote the $ s $-th derived functor of derivations from $ A $ into $ B[t] $\footnote{For a graded module $ B $, $ B[t] $ is the shifted graded module with $ B[t]_k=B_{t+k}$.} in the category of $ G $-equivariant $ \theta $-algebras. These are the $ G $-equivariant derivations from $ A $ into $ B[t] $, and there is a composite functor spectral sequence \begin{align}\label{GObstr} H^r(G, H^{s-r}_\theta(A,B[t]))\Rightarrow H^s_{G-\theta}(A,B[t]). \end{align} \begin{thm}[Goerss-Hopkins]\label{Obstruct} \begin{enumerate} \item[\textit{(a)}] Given a $ G-\theta $-algebra $ A $, the obstructions to existence of a $ K(1) $-local even-periodic $ E_\infty $-ring $ X $ with a $ G $-action by $ E_\infty $-ring maps, such that $ K_*X\cong A $ (as $ G-\theta $-algebras) lie in \[ H^s_{G-\theta}(A,A[-s+2]) \text{ for } s\geq 3.\] The obstructions to uniqueness lie in \[ H^s_{G-\theta}(A,A[-s+1]) \text{ for } s\geq 2. \] \item[\textit{(b)}] Given $ K(1) $-local $ E_\infty $-ring $ G $-spectra $ X$ and $ Y $ whose $ K $-theory is $ p $-complete, and an equivariant map of $ \theta $-algebras $ f_*:K_*X\to K_* Y $, the obstructions to lifting $ f_* $ to an equivariant map of $ E_\infty $-ring spectra lie in \[ H^s_{G-\theta}(K_*X,K_*Y[-s+1]), \text{ for } s\geq 2, \] while the obstructions to uniqueness of such a map lie in \[ H^s_{G-\theta}(K_*X,K_*Y[-s]), \text{ for } s\geq 1. \] \item[\textit{(c)}] Given such a map $ f:X\to Y $, there exists a spectral sequence \[ H^s_{G-\theta}(K_*X,K_*Y[t])\Rightarrow \pi_{-t-s}(E_\infty(X,Y)^G,f), \] computing the homotopy groups of the space of equivariant $ E_\infty $-ring maps, based at $ f $. \end{enumerate} \end{thm} \subsection{The Igusa Tower} Fix a prime $ p>2 $ which does not divide $ n $.
Let $ \M_p $ denote the $ p $-completion of $ \M $, and let $\Msing$ denote a formal neighborhood of the cusps of $\M_{p}$. We will use the same embellishments for the moduli with level structures. The idea is to use the above Goerss-Hopkins obstruction theory to construct the $ E_\infty $-ring of sections over $ \Ml{n}^\infty $ of the desired $ \Sh{O}(n)^{top} $, from the algebraic data provided by the Igusa tower, which will supply an equivariant $ \theta $-algebra. We will consider the moduli stack $ \Ml{n} $ as an object over $ \Z[1/n,\zeta] $, for $ \zeta $ a primitive $ n $-th root of unity. The structure map $ \Ml{n}\to \Spec \Z[1/n,\zeta] $ is given by the Weil pairing \cite[5.6]{\KatzMazur}. The structure of $ \M^\infty $ as well as $ \Ml{n}^\infty $ is best understood via the Tate curves. We already mentioned that the singular locus is given by N\'eron $ n $-gons; the $ n $-Tate curve is a generalized elliptic curve in a neighborhood of the singular locus. It is defined over the ring $ \Z[[q^{1/n}]] $, so that it is smooth when $ q $ is invertible and a N\'eron $ n $-gon when $ q $ is zero. For details of the construction, the reader is referred to \cite[VII]{\DeligneRapoport}. From \cite[VII]{\DeligneRapoport} and \cite[Ch 10]{\KatzMazur}, we learn that $ \Msing=\Spf \Z_p[[q]] $, and that $ \displaystyle{\Mlsing{n}=\coprod_{cusps}\Spf\Z_p[\zeta][[q^{1/n}]]} $. The $ \Gl $-action on $ \Mlsing{n} $ is understood by studying level structures on Tate curves, and is fully described in \cite[Theorem 10.9.1]{\KatzMazur}. The group $ U $ of upper-triangular matrices in $ \Gl $ acts on $B(n)= \Z_p[\zeta][[q^{1/n}]] $ by the roots of unity: $\left(\begin{array}{cc} 1 & a \\ 0 & 1 \end{array}\right)\in U$ sends $ q^{1/n} $ to $ \zeta^a q^{1/n} $. Note that then, the inclusion $ \Z_p[[q]]\to B(n) $ is $ U $-equivariant, and $ \Mlsing{n} $ is represented by the induced $ \Gl $-module $ A(n)=\Hom_U(\Gl,B(n)) $.
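As a quick sanity check on the $ U $-equivariance just asserted, one can verify directly that $ q $ itself is fixed by $ U $ (an elementary verification, with notation as above):

```latex
% For g = (1 a; 0 1) in U acting on B(n) = Z_p[zeta][[q^{1/n}]]:
\[
g\cdot q \;=\; \bigl(g\cdot q^{1/n}\bigr)^{n}
         \;=\; \bigl(\zeta^{a} q^{1/n}\bigr)^{n}
         \;=\; \zeta^{an} q \;=\; q,
\]
% since zeta^n = 1. Thus q is U-invariant, and the inclusion
% Z_p[[q]] -> B(n) is indeed a map of U-modules, with U acting
% trivially on the source.
```

In particular, an element of the induced module $ A(n)=\Hom_U(\Gl,B(n)) $ is a function $ f:\Gl\to B(n) $ satisfying $ f(ug)=u\cdot f(g) $ for all $ u\in U $.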
Denote by $ \Mlsing{p^k,n} $ a formal neighborhood of the cusps in the moduli stack of generalized elliptic curves equipped with level structure maps \begin{align*} \eta:\mu_{p^k}\xrightarrow{\sim} C[p^k]\\ \varphi:(\Z/n)^2\xrightarrow{\sim} C[n]. \end{align*} Note that as we are working in a formal neighborhood of the singular locus, the curves $ C $ classified by $ \Ml{p^k,n}^\infty $ have ordinary reduction modulo $ p $. As $ k $ runs through the positive integers, we obtain an inverse system called the Igusa tower, and we will write $ \Ml{p^\infty,n} $ for the inverse limit. It is the $ \Z_p^\times $-torsor over $ \Mlsing{n} $ given by the formal affine scheme $ \Spf\Hom(\Z_p^\times,A(n)) $ \cite[Theorem 12.7.1]{\KatzMazur}. The Tate curve comes equipped with a canonical invariant differential, which makes $ \om{\Mlsing{n}}{} $ isomorphic to the line bundle associated to the graded $ A(n) $-module $ A(n)[1] $. We define $ A(n)_* $ to be the evenly graded $ A(n) $-module, which in degree $ 2t $ is $ H^0(\Mlsing{n},\om{\Mlsing{n}}{t})\cong A(n)[t] $. Similarly we define $ A(p^\infty,n)_* $. The modules $A(p^\infty,n)_*= \Hom(\Z_p^\times,A(n)_*) $ have a natural $\Gl$-equivariant $\theta $-algebra structure, with operations coming from the following maps \begin{align*} (\psi^k)^*:\Ml{p^\infty, n}&\to \Ml{p^\infty,n}\\ (C,\eta,\varphi)&\mapsto (C,\eta\circ[k],\varphi),\\ (\psi^p)^*:\Ml{p^\infty, n}&\to \Ml{p^\infty,n}\\ (C,\eta,\varphi)&\mapsto (C^{(p)},\eta^{(p)},\varphi^{(p)}). \end{align*} Here, $ C^{(p)} $ is the elliptic curve obtained from $ C $ by pulling back along the absolute Frobenius map on the base scheme. The multiplication by $ p $ isogeny $ [p]:C\to C $ factors through $ C^{(p)} $ as the relative Frobenius $ F:C\to C^{(p)} $ followed by the Verschiebung map $ V:C^{(p)}\to C $ \cite[12.1]{\KatzMazur}. 
The level structure $ \varphi^{(p)}:(\Z/n)^2\to C^{(p)}$ is the unique map making the diagram \[ \xymatrix{ & (\Z/n)^2 \ar[d]^{\varphi}\ar@{-->}[ld]_{\varphi^{(p)}}\\ C^{(p)} \ar[r]^-V &C }\] commute. Likewise, $ \eta^{(p)} $ is the unique level structure making an analogous diagram commute. More details on $ \eta^{(p)} $ are in \cite[Section 5]{\ConstructTMF}. The operation $ \psi^k $ comes from the action of $ \Z_p^\times $ on the induced module $ A(p^\infty,n)_*= \Hom(\Z_p^\times,A(n)_*) $. We point out that from this description of the operations, it is clear that the $ \Gl $-action commutes with the operations $ \psi^k $ and $ \psi^p $, which in particular gives an isomorphism \begin{align}\label{EqInd} A(p^\infty, n)_* \cong \Hom_U\left(G,\Hom\left(\Z_p^\times,B(n)_*\right) \right). \end{align} Denote by $ B(p^\infty,n)_* $ the $ \theta $-algebra $ \Hom(\Z_p^\times, B(n)_* ) $. As a first step, we apply (a) of Theorem \ref{Obstruct} to construct even-periodic $ K(1) $-local $ \Gl $-equivariant $ E_\infty $-ring spectra $ Tmf(n)^\infty_p $ whose $ p $-adic $ K $-theory is given by $ A(p^\infty, n)_* $. The starting point is the input to the spectral sequence \eqref{GObstr}, the group cohomology \begin{align}\label{ss:GObstruct} H^r\left(\Gl, H^{s-r}_\theta\left(A(p^\infty,n)_*,A(p^\infty,n)_*\right)\right ). \end{align} \begin{rem}\label{rm:TateKthry} Lemma 7.5 of \cite{\ConstructTMF } implies that $ H^s_\theta(B(p^\infty,n)_*,B(p^\infty,n)_*)=0 $ for $ s>1 $, from which we deduce the existence of a unique $ K(1) $-local weakly even periodic $ E_\infty $-ring spectrum that we will denote by $ K[[q^{1/n}]] $, whose $ K $-theory is the $ \theta $-algebra $ B(p^\infty,n)_* $. This spectrum $ K[[q^{1/n}]] $ should be thought of as the sections of $ \Sh{O}(n)^{top} $ over a formal neighborhood of a single cusp of $ \Ml{n} $. 
\end{rem} By the same token, we also know that $ H^s_\theta(A(p^\infty,n)_*,A(p^\infty,n)_*)=0 $ for $ s>1 $, and \[ H^0_{\theta}\left(A(p^\infty,n)_*,A(p^\infty,n)_*\right)\cong\Hom_U\left(\Gl,H^0_\theta\left(A(p^\infty,n)_*,B(p^\infty,n)_* \right)\right). \] Thus the group cohomology \eqref{ss:GObstruct} is simply \[ H^r\left(U, H^0_\theta\left(A(p^\infty,n)_*,B(p^\infty,n)_*\right)\right) \] which is trivial for $ r>0 $ as the coefficients are $ p $-complete, and the group $ U $ has order $ n $, coprime to $ p $. Therefore, the spectral sequence \eqref{GObstr} collapses to give that \[ H^s_{\Gl-\theta}\left(A(p^\infty,n)_*,A(p^\infty,n)_* \right) = 0, \text{ for } s>0.\] Applying Theorem \ref{Obstruct} now gives our required $ \Gl $-spectra $ Tmf(n)^\infty_p $. A similar argument produces a $ \Gl $-equivariant $ E_\infty $-ring map \[ q^\infty: Tmf^\infty_p\to Tmf(n)^\infty_p,\]where $ Tmf^\infty $ is given the trivial $ \Gl $-action. \begin{proposition}\label{prop:htpyfixed} The map $ q^\infty: Tmf^\infty_p\to Tmf(n)^\infty_p $ is the inclusion of homotopy fixed points. \end{proposition} \begin{proof} Note that from our construction it follows that $ Tmf(n)^\infty_p $ is equivalent to $ \Hom_U(\Gl,K[[q^{1/n}]]) $, where $ K[[q^{1/n}]] $ has the $ U $-action lifting the one we described above on its $ \theta $-algebra $ B(n)_* $. (This action lifts by obstruction theory to the $ E_\infty $-level because the order of $ U $ is coprime to $ p $.) Since $ Tmf^\infty_p $ has trivial $ \Gl $-action, the map $ q^\infty $ factors through the homotopy fixed point spectrum \begin{align*} \left( Tmf(n)^\infty_p \right)^{h\Gl} \cong K[[q^{1/n}]]^{hU}. \end{align*} So we need that the map $ q': Tmf^\infty = K[[q]] \to K[[q^{1/n}]]^{hU}$ be an equivalence. The homotopy groups of $ K[[q^{1/n}]]^{hU} $ are simply the $ U $-invariant homotopy in $ K[[q^{1/n}]] $, because $ U $ has no higher cohomology with $ p $-adic coefficients. 
Thus $\pi_* q' $ is an isomorphism, and the result follows. \end{proof} \subsection{Gluing} We need to patch these results together to obtain the sheaf $ {\Sh{O}}(n)^{top} $ on the \'etale site of $ \Ml{n} $. To construct the presheaves $ \tilde{\Sh{O}}(n)^{top}_p $ on the site of affine schemes \'etale over $ \Ml{n} $, one follows the procedure of \cite[Step 2, Sections 7 and 8]{\ConstructTMF}. Thus for each prime $ p>2 $ which does not divide $ n $, we have $ \tilde{\Sh{O}}(n)^{top}_p $ on $ \Ml{n}_p $. As in \cite[Section 9]{\ConstructTMF}, rational homotopy theory produces a presheaf $ \tilde{\Sh{O}}(n)^{top}_{\Q} $. These glue together to give $ \tilde{\Sh{O}}(n)^{top} $ (note, $ 2n $ will be invertible in $ \tilde{\Sh{O}}(n)^{top} $). By construction, the homotopy group sheaves of $ \tilde{\Sh{O}}(n)^{top} $ are given by the tensor powers of the sheaf of invariant differentials $ \om{\Ml{n}}{} $, which is a quasi-coherent sheaf on $ \Ml{n} $. Section $2$ of \cite{\ConstructTMF} explains how this data gives rise to a sheaf $ \Sh{O}(n)^{top} $ on the \'etale site of $ \Ml{n} $, so that the $ E_\infty $-ring spectrum $ Tmf(n) $ of global sections of $ \Sh{O}(n)^{top} $ can also be described as follows. Denote by $ TMF(n) $ the $ E_\infty $-ring spectrum of sections $R\Gamma\left(\Sh{O}(n)^{top}|_{\Ml{n}^0}\right) $ over the locus of smooth curves. Let $ TMF(n)^\infty $ be the $ E_\infty $-ring of sections of $ \Sh{O}(n)^{top}|_{\Ml{n}^0} $ in a formal neighborhood of the cusps. Both have a $ \Gl $-action by construction. The equivariant obstruction theory of Theorem \ref{Obstruct} can be used again to construct a $ \Gl $-equivariant map $ TMF(n)_{K(1)} \to TMF(n)^\infty_p$ that refines the $ q $-expansion map; again, the key point is that $ K_* (TMF(n)^\infty_p )$ is a $ U $-induced $ \Gl $-module. Pre-composing with the $ K(1)$-localization map, we get a $ \Gl $-$ E_\infty $-ring map $ TMF(n)\to TMF(n)^\infty $.
We build $ Tmf(n)_p $ as a pullback \begin{align*} \xymatrix{ Tmf(n)_p \ar[r]\ar[d] & Tmf(n)^\infty_p\ar[d] \\ TMF(n)_p \ar[r] & TMF(n)^{\infty}_p} \end{align*} All maps involved are $ \Gl $-equivariant maps of $ E_\infty $-rings, so $ Tmf(n) $ constructed this way has a $\Gl $-action as well. \section{Descent and Homotopy Fixed Points} We have remarked several times that the map $ q^0:\Ml{n}^0\to \M^0 $ is a $ \Gl $-torsor, thus we have a particularly nice form of \'etale descent. On global sections, this statement translates to the equivalence \[ TMF[1/2n] \to TMF(n)^{h\Gl}.\] The remarkable fact is that this property goes through for the compactified version as well. \begin{thm}\label{thm:tmffixedpt} The map $ Tmf[1/2n]\to Tmf(n)^{h\Gl} $ is an equivalence. \end{thm} \begin{proof} This is true away from the cusps, but by Proposition \ref{prop:htpyfixed}, it is also true near the cusps. We constructed $ Tmf(n) $ from these two via pullback diagrams, and homotopy fixed points commute with pullbacks. \end{proof} \begin{rem} For the rest of this paper, we will investigate $ Tmf(2) $, the spectrum of topological modular forms with level $ 2 $ structure. Note that this spectrum differs from the more commonly encountered $ TMF_0(2) $, which is the receptacle for the Ochanine genus \cite{\Ochanine}, as well as the spectrum appearing in the resolution of the $ K(2) $-local sphere \cite{\GHMR,\Behrens}. The latter is obtained by considering isogenies of degree $ 2 $ on elliptic curves, so-called $ \Gamma_0(2) $ structures. \end{rem} \section{Level $ 2 $ Structures Made Explicit}\label{sec:leveltwo} In this section we find an explicit presentation of the moduli stack $ \Ml{2} $. Let $ E/S $ be a generalized elliptic curve over a scheme on which $ 2 $ is invertible, and whose geometric fibers are either smooth or have a nodal singularity (i.e. are N\'eron $ 1 $-gons). 
Then Zariski locally, $ E $ is isomorphic to a Weierstrass curve of a specific and particularly simple form. Explicitly, there is a cover $ U\to S $ and functions $ x,y $ on $ U $ such that the map $U\to\P^2_U $ given by $ [x,y,1] $ is an isomorphism between $ E_U=E\cross{S}U $ and a Weierstrass curve in $ \P^2_U $ given by the equation \begin{align}\label{Eb} E_b:\quad y^2=x^3+\frac{b_2}{4}x^2+\frac{b_4}{2}x+\frac{b_6}{4}=:f_b(x), \end{align} such that the identity for the group structure on $ E_U $ is mapped to the point at infinity $ [0,1,0] $ \cite[III.3]{\Silverman}, \cite{\KatzMazur}. Any two Weierstrass equations for $ E_U $ are related by a linear change of variables of the form \begin{equation}\label{transf} \begin{split} x&\mapsto u^{-2}x+r\\ y&\mapsto u^{-3}y. \end{split} \end{equation} The object which classifies locally Weierstrass curves of the form \eqref{Eb}, together with isomorphisms which are given as linear change of variables \eqref{transf}, is a stack $ \M_{weier}[1/2] $, and the above assignment $ E\mapsto E_b $ of a locally Weierstrass curve to an elliptic curve defines a map $ w: \M[1/2]\to\M_{weier}[1/2] $. The Weierstrass curve \eqref{Eb} associated to a generalized elliptic curve $ E $ over an algebraically closed field has the following properties: $ E $ is smooth if and only if the discriminant of $ f_b(x) $ is nonzero, i.e. $ f_b(x) $ has no repeated roots, and $ E $ has a nodal singularity if and only if $ f_b(x) $ has a repeated but not a triple root. Moreover, non-isomorphic elliptic curves cannot have isomorphic Weierstrass presentations. Thus the map $ w: \M[1/2]\to\M_{weier}[1/2] $ injects $ \M[1/2] $ into the open substack $ U(\Delta) $ of $ \M_{weier}[1/2] $ which is the locus where the discriminant of $ f_b $ has order of vanishing at most one.
Conversely, any Weierstrass curve of the form \eqref{Eb} has genus one, is smooth if and only if $ f_b(x) $ has no repeated roots, and has a nodal singularity whenever $ f_b(x) $ has a double (but not triple) root, so $ w:\M[1/2]\to U(\Delta) $ is also surjective, hence an isomorphism. Using this and the fact that points of order two on an elliptic curve are well understood, we will find a fairly simple presentation of $ \Ml{2} $. The moduli stack of locally Weierstrass curves is represented by the Hopf algebroid \[ (B=\Z[1/2][b_2,b_4,b_6],B[u^{\pm 1},r]).\] Explicitly, there is a presentation $ \Spec B\to \M_{weier}[1/2]$, such that \[ \Spec B \cross{\M_{weier}[1/2]}\Spec B=\Spec B[u^{\pm 1},r]. \] The projection maps to $ \Spec B $ are $ \Spec $ of the inclusion of $ B $ in $ B[u^{\pm 1},r] $ and $ \Spec $ of the map \begin{align*} b_2&\mapsto u^2(b_2+12r)\\ b_4&\mapsto u^4(b_4+rb_2+6r^2)\\ b_6&\mapsto u^6(b_6+2rb_4+r^2b_2+4r^3) \end{align*} which is obtained by plugging the transformation \eqref{transf} into \eqref{Eb}. In other words, $ \M_{weier}[1/2] $ is simply obtained from $ \Spec B $ by enforcing the isomorphisms that come from the change of variables \eqref{transf}. Suppose $ E/S $ is a smooth elliptic curve which is given locally as a Weierstrass curve \eqref{Eb}, and let $\phi:(\Z/2)^2\to E$ be a level $ 2 $ structure. For convenience in the notation, define $e_0=\binom{1}{1},e_1=\binom{1}{0},e_2=\binom{0}{1}\in(\Z/2)^2$. Then $\phi(e_i)$ are all points of exact order $2$ on $E$, thus have $ y $-coordinate equal to zero since $ [-1](x,y)=(x,-y) $ (\cite[III.2]{\Silverman}) and \eqref{Eb} becomes \begin{align}\label{Ex} y^2=(x-x_0)(x-x_1)(x-x_2), \end{align} where $x_i=x(\phi(e_i))$ are all different. If $ E $ is a generalized elliptic curve which is singular, i.e.
$ E $ is a N\'eron $ 2 $-gon, then a choice of level $ 2 $ structure makes $ E $ locally isomorphic to the blow-up of \eqref{Ex} at the singularity (seen as a point of $ \P^2 $), with $ x_i=x_j\neq x_k $, for $ \{i,j,k\}=\{0,1,2\} $. So let $A=\Z[1/2][x_0,x_1,x_2]$, let $ L $ be the line in $ \Spec A $ defined by the ideal $(x_0-x_1,x_1-x_2,x_2-x_0)$, and let $\Spec A-L$ be the open complement. The change of variables \eqref{transf} translates to a $ (\G_a\rtimes\G_m) $-action on $ \Spec A $ that preserves $ L $ and is given by: \begin{align*} x_i&\mapsto u^2(x_i-r). \end{align*} Consider the isomorphism $\psi:(\Spec A-L)\xrightarrow{\sim}(\A^2-\{0\})\times\A^1$: \[ (x_0,x_1,x_2)\mapsto\left( (x_1-x_0,x_2-x_0),x_0\right). \] We see that $\G_a$ acts trivially on the $(\A^2-\{0\})$-factor, and freely by translation on $\A^1$. Therefore the quotient $(\Spec A-L)//\G_a$ is \[\Mc2=\A^2-\{0\}=\Spec\Z[1/2][\lambda_1,\lambda_2]-\{0\},\] the quotient map being $\psi$ composed with the projection onto the first factor. This corresponds to choosing coordinates in which $E$ is of the form: \begin{equation}\label{Elambda} y^2=x(x-\lambda_1)(x-\lambda_2). \end{equation} The $ \G_m $-action is given by grading $ A $ as well as $ \Lambda=\Z[1/2][\lambda_1,\lambda_2] $ so that the degree of each $ x_i $ and $ \lambda_i $ is $ 2 $. It follows that $\Ml{2}=\Mc2//\G_m$ is the weighted projective line $\Proj\Lambda=(\Spec\Lambda-\{0\})//\G_m$. Note that we are taking the homotopy quotient, which makes a difference: $ -1 $ is a non-trivial automorphism on $ \Ml{2} $ of order $ 2 $. The sheaf $ \omega_{\Ml{2}} $ is an ample invertible line bundle on $ \Ml{2} $, locally generated by the invariant differential $\displaystyle{\eta_{E_\lambda}= \frac{dx}{2y} }$. From \eqref{transf} we see that the $ \G_m=\Spec\Z[u^{\pm 1}] $-action changes $ \eta_{E_\lambda}$ to $ u\eta_{E_\lambda} $.
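The last claim is a one-line computation, which we record as a quick sanity check (nothing beyond the change of variables \eqref{transf} is used):

```latex
% Under x -> u^{-2}x + r, y -> u^{-3}y, we have dx -> u^{-2}dx, hence
\[
\eta_{E_\lambda}=\frac{dx}{2y}
  \;\longmapsto\; \frac{u^{-2}\,dx}{2\,u^{-3}\,y}
  \;=\; u\,\frac{dx}{2y}
  \;=\; u\,\eta_{E_\lambda}.
\]
% So the invariant differential transforms with weight one under G_m,
% i.e. eta is a section of a line bundle of degree 1 on Proj(Lambda).
```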
Hence, $ \omega_{\Ml{2}} $ is the line bundle on $ \Ml{2}=\Proj\Lambda $ which corresponds to the shifted module $ \Lambda[1] $, standardly denoted by $ \Sh{O}(1) $. We summarize the result. \begin{proposition}\label{M2stack} The moduli stack of generalized elliptic curves with a choice of a level $ 2 $ structure $ \Ml{2}$ is isomorphic to $\Proj\Lambda=(\Spec\Lambda-\{0\})//\G_m $, via the map $ \Ml{2} \to\Proj\Lambda$ which classifies the sheaf of invariant differentials $ \omega_{\Ml{2}} $on $ \Ml{2} $. The universal curve over the locus of smooth curves $ \Ml{2}^0 =\Proj\Lambda-\{0,1,\infty\}$ is the curve of equation \eqref{Elambda}. The fibers at $ 0$, $1$, and $\infty $, are N\'eron $ 2 $-gons obtained by blowing up the singularity of the curve \eqref{Elambda}. \end{proposition} \begin{rem}\label{M2gr} As specifying a $ \G_m $-action is the same as specifying a grading, we can think of the ringed space $ (\Ml{2},\om{\Ml{2}}{*}) $ as the ringed space $ (\Mc2=\Spec\Lambda-\{0\},\SSh{\Mc2}) $ together with the induced grading. \end{rem} Next we proceed to understand the action of $ GL_2(\Z/2) $ on the global sections $ H^0(\Ml{2},\om{\Ml{2}}{*})=\Lambda $. By definition, the action comes from the natural action of $ GL_2(\Z/2) $ on $ (\Z/2)^2 $ and hence by pre-composition on the level structure maps $ \phi:(\Z/2)^2\to E[2] $. If we think of $ GL_2(\Z/2) $ as the symmetric group $ S_3 $, then this action permutes the non-zero elements $ \{e_0,e_1,e_2\} $ of $ (\Z/2)^2 $, which translates to the action on \[ H^0(\Spec A-L,\SSh{\Spec A-L})=\Z[x_0,x_1,x_2], \] given as $g\cdot x_i=x_{gi}$ where $g\in S_3=\Perm\{0,1,2\}$. The map on $H^0$ induced by the projection $(\Spec A-L)\to \Mc2$ is \begin{align*} \Z[\lambda_1,\lambda_2]&\to\Z[x_0,x_1,x_2]\\ \lambda_i&\mapsto x_i-x_0 \end{align*} Therefore, we obtain that $g\lambda_i$ is the inverse image of $x_{gi}-x_{g0}$. That is, $g\lambda_i=\lambda_{gi}-\lambda_{g0}$, where we implicitly understand that $\lambda_0=0$. 
We have proven the following lemma. \begin{lemma}\label{ActionLambda} Choose the generators of $S_3=\Perm\{0,1,2\}$, $\sigma=(012)$ and $\tau=(12)$. Then the $ S_3 $-action on $ \Lambda=H^0(\Ml{2},\om{\Ml{2}}{*}) $ is determined by \begin{equation*} \begin{aligned} \tau:\quad&\lambda_1\mapsto\lambda_2 &\sigma:\quad&\lambda_1\mapsto\lambda_2-\lambda_1\\ &\lambda_2\mapsto\lambda_1 & &\lambda_2\mapsto-\lambda_1. \end{aligned} \end{equation*} \end{lemma} This fully describes the global sections $H^0(\Ml{2},\om{\Ml{2}}{*})$ as an $S_3$-module. The action on $H^1(\Ml{2},\om{\Ml{2}}{*})$ is not as apparent and we deal with it using Serre duality (Theorem \ref{GSDuality}). \section{(Equivariant) Serre Duality for $ \Ml{2} $} We will proceed to prove Serre duality for $ \Ml{2} $ in an explicit manner that will be useful later, by following the standard computations for projective spaces, as in \cite{\HartshorneAG}. To emphasize the analogy with the corresponding statements about the usual projective line, we might write $ \Proj\Lambda $ and $ \Sh{O}(*) $ instead of $ \Ml{2} $ and $ \om{\Ml{2}}{*} $, in view of Remark \ref{M2gr}. Also remember that for brevity, we might omit writing $ 1/2 $. \begin{proposition}\label{M2cohomology} The cohomology of $ \Ml{2} $ with coefficients in the graded sheaf of invariant differentials $ \om{\Ml{2}}{*} $ is computed as \begin{align*} H^s(\Ml{2},\om{\Ml{2}}{*})=\begin{cases} \Lambda,&s=0\\ \Lambda/(\lambda_1^\infty,\lambda_2^\infty),&s=1\\ 0,&\text{else} \end{cases} \end{align*} where $ \Lambda/(\lambda_1^\infty,\lambda_2^\infty) $ is a torsion $ \Lambda $-module with a $ \Z[1/2] $-basis of monomials $ \frac{1}{\lambda_1^{i}\lambda_2^j} $ for $ i,j $ both positive.
\end{proposition} \begin{rem} The module $ \Lambda/(\lambda_1^\infty,\lambda_2^\infty) $ is inductively defined by the short exact sequences \begin{align*} 0\to & \Lambda \xrightarrow{\lambda_1} \Lambda\Bigl[\frac{1}{\lambda_1}\Bigr]\to \Lambda/(\lambda_1^\infty)\to 0\\ 0\to &\Lambda/(\lambda_1^\infty) \xrightarrow{\lambda_2} \Lambda\Bigl[\frac{1}{\lambda_1\lambda_2}\Bigr]\to \Lambda/(\lambda_1^\infty,\lambda_2^\infty)\to 0. \end{align*} \end{rem} \begin{rem} Note that according to Remark \ref{M2gr}, $ H^*(\Ml{2},\om{\Ml{2}}{*})$ is isomorphic to $ H^*(\Mc2,\SSh{\Mc2}) $ with the induced grading. It is these latter cohomology groups that we shall compute. \end{rem} \begin{proof} We proceed using the local cohomology long exact sequence \cite[Ch III, ex. 2.3]{\HartshorneAG} for \[ \Mc2\subset \Spec\Lambda \supset \{0\}.\] The local cohomology groups $ R^*\Gamma_{\{0\}}(\Spec\Lambda,\Sh{O})$ are computed via a Koszul complex as follows. The ideal of definition for the point $ \{0\} \in\Spec\Lambda$ is $ (\lambda_1,\lambda_2) $, and the generators $ \lambda_i $ form a regular sequence. Hence, $ R^*\Gamma_{\{0\}}(\Spec\Lambda,\Sh{O}) $ is the cohomology of the Koszul complex \begin{align*} \Lambda \to \Lambda\Bigl[\frac{1}{\lambda_1}\Bigr]\times \Lambda\Bigl[\frac{1}{\lambda_2}\Bigr]\to \Lambda\Bigl[\frac{1}{\lambda_1\lambda_2}\Bigr], \end{align*} which is $ \Lambda/(\lambda_1^\infty,\lambda_2^\infty) $, concentrated in (cohomological) degree two. We also know that $ H^*(\Spec\Lambda,\Sh{O})=\Lambda $ concentrated in degree zero, so that the local cohomology long exact sequence splits into \begin{align*} 0&\to\Lambda\to H^0(\Mc2,\Sh{O})\to 0\\ 0&\to H^1(\Mc2,\Sh{O})\to \Lambda/(\lambda_1^\infty,\lambda_2^\infty)\to 0, \end{align*} giving the result. \end{proof} \begin{lemma}\label{M2Omega} We have the following properties of the sheaf of differentials on $ \Ml{2} $. \begin{enumerate} \item[(a)] There is an isomorphism $ \Om{\Ml{2}}\cong\om{\Ml{2}}{-4} $.
\item[(b)] The cohomology group $ H^s(\Ml{2},\Om{\Ml{2}}) $ is zero unless $ s=1 $, and\\ $ H^1(\Ml{2},\Om{\Ml{2}})$ is the sign representation $ \Z_{\sgn}[1/2] $ of $ S_3 $. \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate} \item[(a)] The differential form $\eta= \lambda_1d\lambda_2-\lambda_2d\lambda_1 $ is a nowhere vanishing differential form of degree four, thus a trivializing global section of the sheaf $\Sh{O}(4)\otimes\Om{\Proj\Lambda} $. Hence there is an isomorphism $ \Om{\Proj\Lambda} \cong\Sh{O}(-4)$. \item[(b)] From Proposition \ref{M2cohomology}, $ H^*(\Proj\Lambda,\Omega)$ is $\Z[1/2] $ concentrated in cohomological degree one, and generated by \[ \frac{\eta}{\lambda_1\lambda_2}=\frac{\lambda_1}{\lambda_2}d\Bigl(\frac{\lambda_2}{\lambda_1}\Bigr). \] Any projective transformation $ \varphi $ of $ \Proj\Lambda $ acts on $ H^1(\Proj\Lambda,\Omega) $ by the determinant $ \det\varphi $. By our previous computations, as summarized in Lemma \ref{ActionLambda}, the transpositions of $ S_3 $ act with determinant $ -1 $, and the elements of order $ 3 $ of $ S_3 $ with determinant $ 1 $. Hence the claim. \end{enumerate} \end{proof} We are now ready to state and prove the following result. \begin{thm}[Serre Duality]\label{GSDuality} The sheaf of differentials $ \Om{\Ml{2}} $ is a dualizing sheaf on $ \Ml{2} $, i.e. the natural cup product map \[ H^0(\Ml{2},\om{\Ml{2}}{t})\otimes H^1(\Ml{2},\om{\Ml{2}}{-t}\otimes\Om{\Ml{2}})\to H^1(\Ml{2},\Om{\Ml{2}}), \] is a perfect pairing which is compatible with the $ S_3 $-action. \end{thm} \begin{rem} Compatibility with the $ S_3 $-action simply means that for every $ g\in S_3 $, the following diagram commutes {\small \begin{align*} \xymatrix@C=0.4pc{ H^0(\Ml{2},g^*\om{\Ml{2}}{t})\otimes H^1(\Ml{2},g^*\om{\Ml{2}}{-t}\otimes g^*\Om{\Ml{2}}) \ar[r]\ar[d]_g & H^1(\Ml{2},g^*\Om{\Ml{2}})\ar[d]^g\\ H^0(\Ml{2},\om{\Ml{2}}{t})\otimes H^1(\Ml{2},\om{\Ml{2}}{-t}\otimes\Om{\Ml{2}}) \ar[r]& H^1(\Ml{2},\Om{\Ml{2}})}. 
\end{align*} } But we have made a choice of generators for $\Lambda= H^0(\Ml{2},\om{\Ml{2}}{*})\cong H^0(\Ml{2},g^*\om{\Ml{2}}{*})$, and we have described the $ S_3 $-action on those generators in Lemma \ref{ActionLambda}. If we think of the induced maps $ g:H^*(\Ml{2},g^*\om{\Ml{2}}{*})\to H^*(\Ml{2},\om{\Ml{2}}{*}) $ as a change of basis action of $ S_3 $, Theorem \ref{GSDuality} states that we have a perfect pairing of $ S_3 $-modules. As a consequence, there is an $ S_3 $-module isomorphism \[ H^1(\Ml{2},\om{\Ml{2}}{*-4})\cong\Hom(\Lambda,\Z_{\sgn}[1/2])=\Lambda^\vee_{\sgn}. \] (The subscript $ \sgn $ will always denote twisting by the sign representation of $ S_3 $.) \end{rem} \begin{proof} Proposition \ref{M2cohomology} and Lemma \ref{M2Omega} give us explicitly all of the modules involved. Namely, $ H^0(\Proj\Lambda,\Sh{O}(*)) $ is free on the monomials $ \lambda_1^i\lambda_2^j $, for $ i,j\geq 0 $, and $ H^1(\Proj\Lambda,\Sh{O}(*)) $ is free on the monomials $ \frac{1}{\lambda_1^i\lambda_2^j }= \frac{1}{\lambda_1^{i-1}\lambda_2^{j-1} }\frac{1}{\lambda_1\lambda_2 }$, for $ i,j>0 $. Lemma \ref{M2Omega} gives us in addition that $ H^1(\Proj\Lambda,\Sh{O}(*)\otimes\Om{\Ml{2}}) $ is free on $\frac{1}{\lambda_1^{i-1}\lambda_2^{j-1} }\frac{\eta}{\lambda_1\lambda_2 }$. We conclude that \[ (\lambda_1^i\lambda_2^j, \frac{\eta}{\lambda_1^{i+1}\lambda_2^{j+1}})\mapsto \frac{\eta}{\lambda_1\lambda_2} \] is indeed a perfect pairing. Moreover, this pairing is compatible with any projective transformation $ \varphi $ of $ \Proj\Lambda $, which includes the $ S_3 $-action as well as change of basis. Any such $ \varphi $ acts on $ H^*(\Proj\Lambda,\Sh{O}(*)) $ by a linear change of variables, and changes $ \eta $ by the determinant $ \det\varphi $.
Thus the diagram {\small \begin{align*} \xymatrix@C=0.4pc{ H^0(\Ml{2},\varphi^*\om{\Ml{2}}{t}) \otimes H^1(\Ml{2},\varphi^*\om{\Ml{2}}{-t} \otimes \varphi^*\Om{\Ml{2}}) \ar[r]\ar@<-13ex>[d]_{\varphi}\ar@<6ex>[d]^{\varphi\otimes\det\varphi} & H^1(\Ml{2},\varphi^*\Om{\Ml{2}})\ar[d]^{\det\varphi}\\ H^0(\Ml{2},\om{\Ml{2}}{t}) \otimes H^1(\Ml{2},\om{\Ml{2}}{-t}\otimes\Om{\Ml{2}}) \ar[r]& H^1(\Ml{2},\Om{\Ml{2}})} \end{align*} } commutes. \end{proof} We explicitly described the induced action on the global sections $ H^0(\Proj\Lambda,\Sh{O}(*)) = \Lambda $ in Lemma \ref{ActionLambda}, and in \eqref{M2Omega} we have identified $ H^1(\Proj\Lambda,\Om{\Proj\Lambda}) $ with the sign representation $ \Z_{\sgn} $ of $ S_3 $. Therefore, the perfect pairing is the natural map \[\Lambda\otimes\Lambda_{\sgn}^\vee\to\Z_{\sgn}[1/2].\] \section{Anderson Duality for $ Tmf(2) $}\label{sec:andersontmf2} The above Serre duality pairing for $ \Ml{2} $ enables us to compute the homotopy groups of $ Tmf(2) $ as a module over $ S_3 $. We obtain that the $ E_2 $ term of the spectral sequence \eqref{ss:jardine} for $ Tmf(2) $ looks as follows: \begin{figure}[h] \centering \includegraphics[width=\textwidth]{figandersontmf2} \caption{Jardine spectral sequence \eqref{ss:jardine} for $\pi_* Tmf(2) $ }\label{Fig:tmf2} \end{figure} As there is no space for differentials or extensions, we conclude that \[ \pi_*Tmf(2)=\Lambda\oplus\Sigma^{-9}\Lambda^\vee_{\sgn}. \] Even better, we are now able to prove self-duality for $ Tmf(2) $. Since we are working with $ 2 $ inverted everywhere, the Anderson dual of $ Tmf(2) $ is defined by dualizing the homotopy groups as $ \Z[1/2] $-modules, as noted in Remark \ref{rem:Anderson}. Although this duality may deserve the notation $ I_{\Z[1/2]} $, we forbear in the interest of compactness of the notation. Recall that the Anderson dual of $ Tmf(2) $ is the function spectrum $ F(Tmf(2),\IZ) $, so it inherits an action by $ S_3 $ from the one on $ Tmf(2) $. 
\begin{thm}\label{thm:andersontmf2} The Anderson dual of $ Tmf(2) $ is $ \Sigma^9Tmf(2) $. The inherited $ S_3 $-action on $ \pi_*\IZ Tmf(2) $ corresponds to the action on $ \pi_*\Sigma^9Tmf(2) $ up to a twist by sign. \end{thm} \begin{proof} On the level of homotopy groups, we have the spectral sequence \eqref{ss:AndersonDual} which collapses because each $ \pi_iTmf(2) $ is a free (and dualizable) $ \Z $-module. Thus \[ \pi_*\IZ Tmf(2)=(\Lambda\oplus\Sigma^{-9}\Lambda^\vee_{\sgn})^\vee,\] which is isomorphic to $ \Sigma^9(\pi_*Tmf(2))_{\sgn}=\Lambda^\vee\oplus\Sigma^9\Lambda_{\sgn} $, as $ \pi_*Tmf(2) $-modules via a double duality map. Now, $ \IZ Tmf(2) $, being defined as the function spectrum $ F(Tmf(2),\IZ) $, is naturally a $ Tmf(2) $-module, thus a dualizing class $f: S^9\to \IZ Tmf(2) $ extends to an equivalence $\tilde{f}: \Sigma^9Tmf(2)\to\IZ Tmf(2) $. Specifically, let $ f:S^9\to\IZ Tmf(2) $ represent a generator of $ \pi_9\IZ Tmf(2)=\Z[1/2] $, which is also a generator of $ \pi_*\IZ Tmf(2) $ as a $ \pi_*Tmf(2) $-module. Then the composition \[ \tilde{f}:S^9\wedge Tmf(2) \xrightarrow{f\wedge \Id_{Tmf(2)}} \IZ Tmf(2)\wedge Tmf(2)\xrightarrow{\psi} \IZ Tmf(2), \] where $ \psi $ is the $ Tmf(2) $-action map, gives the required equivalence. Namely, let $ a $ be an element of $ \pi_*Tmf(2) $; then $ \tilde{f}_*(\Sigma^9 a) =[f]a$, but $ f $ was chosen so that its homotopy class generates $ \pi_*\IZ Tmf(2) $ as a $ \pi_*Tmf(2) $-module. \end{proof} \section{Group Cohomology Computations}\label{sec:grpcohomology} This section is purely technical; for further use, we compute the $S_3$ homology and cohomology of the module $H^*(\Ml{2},\om{\Ml{2}}{*})=\Lambda\oplus\Sigma^{-9}\Lambda^\vee_{\sgn}$, where the action is described in Lemma \ref{ActionLambda}. First we deal with Tate cohomology, and then we proceed to compute the invariants and coinvariants. 
\subsection{Tate Cohomology}\label{sec:tatecohomology} The symmetric group on three letters fits in a short exact sequence \[ 1\to C_3\to S_3\to C_2\to 1, \] producing a Lyndon-Hochschild-Serre spectral sequence for the cohomology of $ S_3 $. If $ 2 $ is invertible in the $ S_3 $-module $ M $, the spectral sequence collapses to give that the $ S_3 $ cohomology, as well as the $ S_3 $-Tate cohomology, is computed as the fixed points of the $ C_3 $-analogue \begin{align*} H^*(C_3,M)^{C_2}\cong H^*(S_3,M)\\ \tH^*(C_3,M)^{C_2}\cong \tH^*(S_3,M). \end{align*} Therefore, it suffices to compute the respective $C_3$-cohomology groups as $C_2$-modules. To do this, we proceed as in \cite{\GHMR}. Give $A=\Z[1/2][x_0,x_1,x_2]$ the left $S_3$-action as follows: $g\in S_3$ maps $x_i$ to $(-1)^{\sgn g}x_{gi}$. We have a surjection of $S_3$-modules $A\to \Lambda$ given by \begin{align*} x_0&\mapsto\lambda_1\\ \sigma(x_0)=x_1&\mapsto\lambda_2-\lambda_1=\sigma(\lambda_1)\\ \sigma^2(x_0)=x_2&\mapsto-\lambda_2=\sigma^2(\lambda_1). \end{align*} The kernel of this map is the ideal generated by $\sigma_1=x_0+x_1+x_2$. Therefore, we have a short exact sequence \begin{equation}\label{AtoLambda} 0\to A\sigma_1\to A\to \Lambda\to 0. \end{equation} The orbit under $\sigma$ of each monomial of $A$ has $3$ elements, unless that monomial is a power of $\sigma_3=x_0x_1x_2$. Therefore, $A$ splits as a sum of an $S_3$-module $ F $ with free $C_3$-action and $\Z[\sigma_3]$ which has trivial $C_3$-action, i.e. \begin{equation}\label{Adecomposition} A=F\oplus\Z[\sigma_3]. \end{equation} Let $ N:A\to H^0(C_3,A) $ be the additive norm map, and let $d$ denote the cohomology class in bidegree $ (0,6) $ represented by $\sigma_3$. Then we have an exact sequence \[ A\xrightarrow{N} H^*(C_3,A)\to \Z/3[b,d]\to 0, \] where $ b $ is a cohomology class of bidegree $ (2,0) $. The Tate cohomology of $ A $ is then \[ \tH^*(C_3,A)\cong H^*(C_3,A)[b^{-1}] \stackrel{\sim}{\rightarrow}\Z/3[b^{\pm 1},d].
\] The quotient $C_2$-action is given by $\tau(b)=-b$ and $\tau(d)=-d$. Similarly, noting that the degree of $\sigma_1$ is $2$, and $\tau(\sigma_1)=-\sigma_1$, we obtain that the $ C_3 $-cohomology of the module $ A\sigma_1 $ is the same as that of $ A $, with the internal grading shifted by $ 2 $, and the quotient $ C_2 $-action twisted by sign. In other words, \[ \tH^*(C_3,A\sigma_1)\cong \Sigma^2\left( (\Z_{\sgn}/3)[\tilde b^{\pm 1},\tilde d]\right). \] where again $\tilde b$ and $\tilde d$ have bidegrees $(2,0)$ and $(0,6)$ respectively, and the quotient $C_2$-action is described by \[ \tau:\quad\tilde b^i\tilde d^j\mapsto (-1)^{i+j+1}\tilde b^i\tilde d^j. \] Note that $\tH^*(C_3,A)$ and $\tH^*(C_3,A\sigma_1)$ are concentrated in even cohomological degrees. Therefore, the long exact sequence in cohomology induced by \eqref{AtoLambda} breaks up into the exact sequences \begin{align} 0\to \tH^{2k-1}(C_3,\Lambda)\to \tH^{2k}(C_3,A\sigma_1)\to \tH^{2k}(C_3,A)\to \tH^{2k}(C_3,\Lambda)\to 0. \end{align} The middle map in this exact sequence is zero, because it is induced by multiplication by $ \sigma_1 $, which is in the image of the additive norm on $A$. It follows that \[ \tH^*(C_3,\Lambda)\cong\Z/3[a,b^{\pm 1},d]/(a^2), \] where $ a $ is the element in bidegree $ (1,2) $ which maps to $ \tilde{b}\in \tH^2(C_3,A\sigma_1) $. The quotient action by $C_2$ is described as \begin{align*} \tau:\quad &a\mapsto a\\ &b\mapsto -b\\ &d\mapsto -d. \end{align*} Now it only remains to take fixed points to compute the Tate cohomology of $ \Lambda $ and $ \Lambda_{\sgn} $. \begin{proposition}\label{prop:TateCohomology} Denote by $ R $ the graded ring $ \Z/3[a,b^{\pm 2},d^2]/(a^2) $. Then Tate cohomology of the $ S_3 $-modules $ \Lambda $ and $ \Lambda_{\sgn} $ is \begin{align*} \tH^*(S_3,\Lambda)&=R\oplus Rbd,\\ \tH^*(S_3,\Lambda_{\sgn})&=Rb\oplus Rd. 
\end{align*} \end{proposition} \begin{rem} The classes $a$ and $bd$ will represent the elements of $ \pi_*Tmf $ commonly known as $\alpha$ and $\beta$, respectively, at least up to a unit. \end{rem} \subsection{Invariants}\label{sec:invariants} We now proceed to compute the invariants $ H^0(S_3,H^*(\Ml{2}, \om{ \Ml{2}} {*} ) )$. The result is summarized in the next proposition. \begin{proposition}\label{prop:invariants} The invariants of $ \Lambda $ under the $ S_3 $-action are isomorphic to the ring of modular forms $ MF_*[1/2] $, i.e. \[ \Lambda^{S_3}= \Z[1/2][c_4,c_6,\Delta]/(1728\Delta-c_4^3+c_6^2). \] The twisted invariants module $ \Lambda_{\sgn}^{S_3} $ is a free $ \Lambda^{S_3} $-module on a generator $ d $ of degree $ 6 $. \end{proposition} \begin{proof} Let $\varepsilon\in A$ denote the alternating polynomial $(x_0-x_1)(x_0-x_2)(x_1-x_2)$. Then $\varepsilon^2$ is symmetric, so it must be a polynomial $ g(\sigma_1,\sigma_2,\sigma_3) $ in the elementary symmetric polynomials. Indeed, $ g $ is the discriminant of the polynomial \[(x-x_0)(x-x_1)(x-x_2)=x^3-\sigma_1x^2+\sigma_2x-\sigma_3.\quad \eqref{Ex}\] The $ C_3 $-invariants in $ A $ are generated by the elementary symmetric polynomials together with $ \varepsilon $ \[ A^{C_3}=\Z[\sigma_1,\sigma_2,\sigma_3,\varepsilon]/(\varepsilon^2-g). \] The quotient action by $ C_2 $ fixes $\sigma_2$ and $\varepsilon$, and changes the sign of $\sigma_1$ and $\sigma_3$. Since $ C_3 $ fixes $ \sigma_1 $, the invariants in the ideal in $ A $ generated by $ \sigma_1 $ are the ideal generated by $ \sigma_1 $ in the invariants $ A^{C_3} $. As $ H^1(C_3,A\sigma_1) =0$, the long exact sequence in cohomology gives a short exact sequence of invariants \[ 0\to \sigma_1 A^{C_3}\to A^{C_3}\to\Lambda^{C_3}\to 0.
\] Denoting by $ p $ the quotient map $ A^{C_3}\to\Lambda^{C_3} $, we now have \[ \Lambda^{C_3}\cong A^{C_3}/(\sigma_1)=\Z[p(\sigma_2),p(\sigma_3),p(\varepsilon)]/p(\varepsilon^2+27\sigma_3^2+4\sigma_2^3) \] where $\tau$ fixes $p(\sigma_2)$ and $p(\varepsilon)$ and changes the sign of $p(\sigma_3)$. It is consistent with the above computations of Tate cohomology to denote $ \sigma_3 $ and $ p(\sigma_3) $ by $ d $. The invariant quantities are well-known; they are the modular forms of $E_\lambda$ of \eqref{Elambda}, the universal elliptic curve over $\Ml{2}$: \begin{align}\label{eq:lambdamodular} \begin{split} p(\sigma_2)=&-(\lambda_1^2+\lambda_2^2-\lambda_1\lambda_2)=-\frac{1}{16}c_4\\ p(\varepsilon)=&-(\lambda_1+\lambda_2)(2\lambda_1^2+2\lambda_2^2-5\lambda_1\lambda_2)=\frac{1}{32}c_6\\ p(\sigma_3^2)=&d^2=\lambda_1^2\lambda_2^2(\lambda_2-\lambda_1)^2=\frac{1}{16}\Delta. \end{split} \end{align} Hence, $ d $ is a square root of the discriminant $ \Delta $, and since $ 2 $ is invertible, we get that the invariants \begin{align}\label{eq:invariants} \begin{split} \Lambda^{S_3}&=\Z[1/2][p(\sigma_2),p(\sigma_3^2),p(\varepsilon)]/p(\varepsilon^2+27\sigma_3^2+4\sigma_2^3)\\ &=\Z[1/2][c_4,c_6,\Delta]/(1728\Delta-c_4^3+c_6^2)=MF_* \end{split} \end{align} are the ring of modular forms, as expected. Moreover, there is a splitting $ \Lambda^{C_3}\cong \Lambda^{S_3}\oplus d\Lambda^{S_3}$, giving that \begin{align}\label{inv:splitting} \Lambda_{\sgn}^{S_3}=d\Lambda^{S_3}. \end{align} \end{proof} \subsection{Coinvariants and Dual Invariants}\label{sec:coinvariants} To be able to use Theorem \ref{GSDuality} to compute homotopy groups, we also need to know the $ S_3 $-cohomology of the signed dual of $ \Lambda $. For this, we can use the composite functor spectral sequence for the functors $ \Hom_{\Z}(-,\Z) $ and $ \Z\tensor{\Z S_3}(-) $.
Since $ \Lambda $ is free over $ \Z $, we get that \[ \Hom_{\Z}(\Z\tensor{\Z S_3}\Lambda_{\sgn},\Z)\cong \Hom_{\Z S_3}(\Z,\Lambda^\vee_{\sgn}),\] and a spectral sequence \begin{align}\label{ss:dualcohomology} \Ext^p_{\Z}(H_q(S_3,\Lambda_{\sgn}),\Z)\Rightarrow H^{p+q}(S_3,\Lambda_{\sgn}^\vee). \end{align} The input for this spectral sequence is computed in the following lemma. \begin{lemma}\label{lemma:coinvariants} The coinvariants of $ \Lambda $ and $ \Lambda_{\sgn} $ under the $ S_3 $-action are \begin{align*} H_0(S_3,\Lambda)=(3,c_4,c_6)\oplus ab^{-1}d \Z/3[\Delta]\\ H_0(S_3,\Lambda_{\sgn})=d(3,c_4,c_6)\oplus ab^{-1} \Z/3[\Delta], \end{align*} where $ (3,c_4,c_6) $ is the ideal of the ring $ \Lambda^{S_3}=MF_* $ of modular forms generated by $ 3,c_4 $ and $ c_6 $, and $ d(3,c_4,c_6) $ is the corresponding submodule of the free $ \Lambda^{S_3}$-module generated by $ d $. \end{lemma} \begin{proof} We use the exact sequence \begin{align}\label{seq:normM} 0\to \tH^{-1}(S_3,M)\to H_0(S_3,M)\xrightarrow{N} H^0(S_3,M)\to \tH^0(S_3,M)\to 0. \end{align} For $ M=\Lambda $, this is \[ 0\to ab^{-1}d\Z/3[\Delta] \to H_0(S_3,\Lambda )\xrightarrow{N} \Z[c_4,c_6,\Delta]/(\sim) \xrightarrow{\pi} \Z/3[\Delta]\to 0, \] where the rightmost map $ \pi $ sends $ c_4 $ and $ c_6 $ to zero, and $ \Delta $ to $ \Delta $. Hence its kernel is the ideal $ (3,c_4,c_6) $, which is a free $ \Z $-module, so we have a splitting as claimed. Similarly, for $ M=\Lambda_{\sgn} $, the exact sequence \eqref{seq:normM} becomes \[ 0\to ab^{-1}\Z/3[\Delta] \to H_0(S_3,\Lambda_{\sgn})\xrightarrow{N} d\Z[c_4,c_6,\Delta]/(\sim) \xrightarrow{d\pi} d\Z/3[\Delta]\to 0. \] The kernel of $ d\pi $ is the ideal $ d(3,c_4,c_6) $, and the result follows.
\end{proof} \begin{cor}\label{prop:dualinvariants} The $ S_3 $-invariants of the dual module $ \Lambda^\vee $ are the module dual to the ideal $ (3,c_4, c_6) $, and the $ S_3 $-invariants of the dual module $ \Lambda_{\sgn}^\vee $ are the module dual to the ideal $ d(3,c_4,c_6) $. \end{cor} \begin{proof} In view of the above spectral sequence \eqref{ss:dualcohomology}, to compute the invariants it suffices to compute the coinvariants, which we just did in Lemma \ref{lemma:coinvariants}, and dualize. \end{proof} We need one more computational result crucial in the proof of the main Theorem \ref{MainThm}. \begin{proposition}\label{prop:e2shift} There is an isomorphism of modules over the cohomology ring $H^{*}(S_{3}, \pi_{*}Tmf(2))$ \[ H^*(S_3,\pi_*\IZ Tmf(2))\cong H^*(S_3,\pi_*\Sigma^{21}Tmf(2)). \] \end{proposition} \begin{proof} We need to show that $ H^*(S_3,\Sigma^9\Lambda_{\sgn}\oplus\Lambda^\vee )$ is a shift by $ 12 $ of $ H^*(S_3,\Sigma^9\Lambda\oplus\Lambda^\vee_{\sgn}) $. First of all, we look at the non-torsion elements. Putting together the results from equation \eqref{inv:splitting} and Corollary \ref{prop:dualinvariants} yields \[ H^0(S_3,\Sigma^9\Lambda_{\sgn}\oplus\Lambda^\vee)= d H^0(S_3,\Sigma^9\Lambda\oplus \Lambda_{\sgn}^\vee), \] and indeed we shall find that the shift in higher cohomology also comes from multiplication by the element $ d $ (of topological grading $ 12 $). Now we look at the higher cohomology groups, computed in Proposition \ref{prop:TateCohomology}. 
Identifying $ (b^{-1})^\vee $ with $ b $, we obtain, in positive cohomological grading \begin{align*} H^*(S_3,\pi_*\IZ Tmf(2))=\Sigma^9 H^*(S_3,\Lambda_{\sgn})\oplus H^*(S_3,\Lambda^\vee)\\ = \Z/3[b^2,\Delta]\langle \Sigma^9b,\Sigma^9ab,\Sigma^9b^2d,\Sigma^9ad \rangle\\ \oplus \Z/3[b^2,\Delta]/(\Delta^\infty)\langle b^2\Delta, a^\vee b^2\Delta, bd, a^\vee bd \rangle, \end{align*} which we are comparing to \begin{align*} \Sigma^{21} H^*(S_3,\pi_* Tmf(2))= \Sigma^9d H^*(S_3,\Lambda)\oplus d H^*(S_3,\Lambda^\vee_{\sgn})\\ = \Z/3[b^2,\Delta]\langle \Sigma^9b^2d,\Sigma^9ad,\Sigma^9b\Delta,\Sigma^9 ab\Delta \rangle\\ \oplus \Z/3[b^2,\Delta]/(\Delta^\infty)\langle bd\Delta,a^\vee bd\Delta, b^2\Delta, a^\vee b^2\Delta \rangle. \end{align*} Everything matches up directly, except for the generators $ \Sigma^9b,\Sigma^9ab \in \Sigma^9 H^*(S_3,\Lambda_{\sgn}) $ which have cohomological gradings $ 2 $ and $ 3 $, and topological gradings $ 7 $ and $ 10 $ respectively. On the other side of the equation we have generators $ a^\vee bd, bd \in H^*(S_3,\Lambda^\vee)=\Sigma^9 H^*(S_3,H^1(\Ml{2},\om{}{*})) $, whose cohomological gradings are $ 2 $ and $ 3 $, and topological gradings $ 7 $ and $ 10 $ respectively. Identifying these elements gives an isomorphism which is compatible with multiplication by $ a,b,d $. \end{proof} \subsection{Localization} We record the behavior of our group cohomology rings when we invert a modular form; in Section \ref{sec:htpyfixedpoints} we will be inverting $ c_4 $ and $ \Delta $. \begin{proposition} Let $ M $ be one of the modules $ \Lambda $, $ \Lambda_{\sgn} $; the ring of modular forms $ MF_*=\Lambda^{S_3} $ acts on $ M $. Let $ m\in MF_* $, and let $ M[m^{-1}] $ be the module obtained from $ M $ by inverting the action of $ m $. Then \[ H^*(S_3,M[m^{-1}])\cong H^*(S_3,M)[m^{-1}].
\] \end{proposition} \begin{proof} Since the ring of modular forms is $ S_3 $-invariant, the group $ S_3 $ acts $ MF_* $-linearly on $ M $; in fact, $ S_3 $ acts $ \Z[m] $-linearly on $ M $, where $ m $ is our chosen modular form. By Exercise 6.1.2 and Proposition 3.3.10 in \cite{\Weibel}, it follows that \begin{align*} H^*(S_3,M[m^{-1}])&=\Ext^*_{\Z[m^{\pm 1}][S_3]}(\Z[m^{\pm 1}], M[m^{-1}])\\ &=\Ext^*_{\Z[m][S_3]}(\Z[m],M)[m^{-1}]=H^*(S_3,M)[m^{-1}]. \end{align*} \end{proof} Note that if $ M $ is one of the dual modules $ \Lambda^\vee $ or $ \Lambda_{\sgn}^\vee $, the elements of positive degree (i.e. non-scalar elements) in the ring of modular forms $ MF_* $ act on $ M $ nilpotently. Therefore $ M[m^{-1}]=0 $ for such an $ m $. Moreover, for degree reasons, $c_4 a =0=c_4 b$, and we obtain the following result. \begin{proposition}\label{prop:c4action} The higher group cohomology of $ S_3 $ with coefficients in $ \pi_*Tmf(2)[c_4^{-1}] $ vanishes, and \begin{align*} &H^*(S_3,\pi_*Tmf(2)[c_4^{-1}]) = H^0(S_3,\Lambda)[c_4^{-1}]= MF_*[c_4^{-1}]. \end{align*} Inverting $ \Delta $ has the effect of annihilating the cohomology that comes from the negative homotopy groups of $ Tmf(2) $; in other words, \begin{align*} &H^*(S_3,\pi_*TMF(2))=H^*(S_3,\pi_*Tmf(2)[\Delta^{-1}])= H^*(S_3,\Lambda)[\Delta^{-1}]. \end{align*} \end{proposition} \section{Homotopy Fixed Points} In this section we will use the map $ q:\Ml{2}\to\M[1/2] $ and our knowledge about $ Tmf(2) $ from the previous sections to compute the homotopy groups of $ Tmf $, in a way that displays the self-duality we are looking for. Economizing the notation, we will write $ \M $ to mean $ \M[1/2] $ throughout. \subsection{Homotopy Fixed Point Spectral Sequence}\label{sec:htpyfixedpoints} We will use Theorem \ref{thm:tmffixedpt} to compute the homotopy groups of $ Tmf $ via the homotopy fixed point spectral sequence \begin{align}\label{ss:htpyfixed} H^*(S_3,\pi_*Tmf(2))\Rightarrow \pi_*Tmf.
\end{align} We will employ two methods of calculating the $ E_2 $-term of this spectral sequence: the first one is more conducive to computing the differentials, and the second is more conducive to understanding the duality pairing. \subsubsection*{Method One} The moduli stack $ \M $ has an open cover by the substacks $ \M^0=\M[\Delta^{-1}] $ and $ \M[c_4^{-1}] $, giving the cube of pullbacks \[ \xymatrix@C=3pt@R=6pt{ & \M^0[c_4^{-1}] \ar[rr]\ar[dd]& &\M^0\ar[dd]\\ \Ml{2}^0[c_4^{-1}] \ar[ru]\ar[rr]\ar[dd] && \Ml{2}^0\ar[ru]\ar[dd]\\ &\M[c_4^{-1}]\ar[rr] &&\M.\\ \Ml{2}[c_4^{-1}]\ar[ru]\ar[rr] && \Ml{2}\ar[ru] } \] Since $ \Delta,c_4 $ are $ S_3 $-invariant elements of $ H^*(\Ml{2},\omega^*) $, the maps in this diagram are compatible with the $ S_3 $-action. Taking global sections of the front square, we obtain a cofiber sequence \[ Tmf(2) \to TMF(2)\vee Tmf(2)[c_4^{-1}] \to TMF(2)[c_4^{-1}],\] compatible with the $ S_3 $-action. Consequently, there is a cofiber sequence of the associated homotopy fixed point spectral sequences, converging to the cofiber sequence from the rear pullback square of the above diagram \begin{align}\label{cof:tmfc4del} Tmf \to TMF\vee Tmf[c_4^{-1}] \to TMF[c_4^{-1}]. \end{align} We would like to deduce information about the differentials of the spectral sequence for $ Tmf $ from the others. According to Proposition \ref{prop:c4action}, we know that the spectral sequences for $ Tmf[c_4^{-1}] $ and $ TMF[c_4^{-1}] $ collapse at their $ E_2 $ pages and that all torsion elements come from $ TMF $. From Proposition \ref{prop:TateCohomology}, we know what they are. 
As the differentials in the spectral sequence \eqref{ss:htpyfixed} involve the torsion elements in the higher cohomology groups $ H^*(S_3,\pi_*Tmf(2)) $, they have to come from the spectral sequence for $ TMF $\footnote{Explicitly, the cofiber sequence \eqref{cof:tmfc4del} gives a commutative square of spectral sequences just as in the proof of Theorem \ref{MainThm} below, which allows for comparing differentials.}, where they are (by now) classical. They are determined by the following lemma, which we get from \cite{\Bauer} or \cite{\Rezk512}. \begin{lemma} The elements $ \alpha $ and $ \beta $ in $ \pi_*S_{(3)} $ are mapped to $ a $ and $ bd $ respectively under the Hurewicz map $ \pi_*S_{(3)}\to\pi_*TMF $. \end{lemma} Hence, $ d_5(\Delta)=\alpha\beta^2 $, $ d_9(\alpha\Delta^2) =\beta^5$, and the rest of the pattern follows by multiplicativity. The aggregate result is depicted in the chart of Figure \ref{fig:HtpyFixed}. \begin{figure}[p] \centering \includegraphics[angle=90, height=.9\textheight]{FigHtpyFixed} \caption{Homotopy fixed point spectral sequence \eqref{ss:htpyfixed} for $ \pi_* Tmf $}\label{fig:HtpyFixed} \end{figure} \subsubsection*{Method Two} On the other hand, we could proceed using our Serre duality for $ \Ml{2} $ and the spectral sequence \eqref{ss:dualcohomology}. The purpose is to describe the elements below the line $ t=0 $ as elements of $ H^*(S_3,\Lambda_{\sgn}^\vee) $. Indeed, according to the Serre duality pairing from Theorem \ref{GSDuality}, we have an isomorphism \[ H^s(S_3,H^1(\Ml{2},\om{}{t-4}))\cong H^s(S_3,\Lambda_{\sgn}^\vee), \] and the latter is computed in Corollary \ref{prop:dualinvariants} via the collapsing spectral sequence \eqref{ss:dualcohomology} and Lemma \ref{lemma:coinvariants} \[ \Ext^p_{\Z}(H_q(S_3,\Lambda_{\sgn}),\Z)\Rightarrow H^{p+q}(S_3,\Lambda_{\sgn}^\vee). \] In Section \ref{sec:grpcohomology} we computed the input for this spectral sequence.
The coinvariants are given as \[ H_0(S_3,\Lambda_{\sgn})=\Z/3[\Delta]ab^{-1}\oplus d(3,c_4,c_6) \] by Lemma \ref{lemma:coinvariants}, and the remaining homology groups are computed via the Tate cohomology groups. Namely, for $ q\geq 1 $, \[ H_q(S_3,\Lambda_{\sgn})\cong \tH^{-q-1}(S_3,\Lambda_{\sgn}) \] which, according to Proposition \ref{prop:TateCohomology}, equals the part of cohomological degree $ (-q-1) $ in the Tate cohomology of $ \Lambda_{\sgn} $, which is $ Rb\oplus Rd $, where $ R=\Z/3 [a, b^{\pm 2}, \Delta]/ (a^2) $. Recall that the cohomological grading of $ a $ is one, that of $ b $ is two, and $ \Delta $ has cohomological grading zero. In particular, we find that the invariants $ H^0(S_3,\Lambda_{\sgn}^\vee) $ are the module dual to the ideal $ d(3,c_4,c_6) $. This describes the negatively graded non-torsion elements. For example, the element dual to $ 3d $ is in bidegree $ (t,s)=(-10,1) $. We can similarly describe the torsion elements as duals. If $ X $ is a torsion abelian group, let $ X^\vee $ denote $ \Ext^1_{\Z}(X,\Z) $, and for $ x\in X $, let $ x^\vee $ denote the element in $ X^\vee $ corresponding to $ x $ under an isomorphism $ X\cong X^\vee $. For example, $ (ab^{-1})^\vee $ is in bidegree $ (-6,2) $, and the element corresponding to $ b^{-2}d $ lies in bidegree $ (-10,5) $. \subsection{Duality Pairing} Consider the non-torsion part of the spectral sequence \eqref{ss:htpyfixed} for $\pi_*Tmf $. According to Lemma \ref{lemma:coinvariants}, on the $ E_2 $ page, it is $ MF_*\oplus \Sigma^{-9}d^\vee(3,c_4,c_6)^\vee $. Applying the differentials only changes the coefficients of various powers of $ \Delta $. Namely, the only differentials supported on the zero-line are $ d_5(\Delta^{m}) $ for non-negative integers $ m $ not divisible by $ 3 $, which hit a corresponding class of order $ 3 $. Therefore, $ 3^\epsilon\Delta^{m} $ is a permanent cycle, where $ \epsilon $ is zero if $ m $ is divisible by $ 3 $ and one otherwise.
In the negatively graded part, only $ (3\Delta^{3k}d)^\vee $, for non-negative $ k $, support a differential $ d_9 $, hitting a class of order 3; thus $ (3^\epsilon\Delta^m d)^\vee $ are permanent cycles, for $ \epsilon $ as above. The pairing at $ E_\infty $ is thus obvious: $3^\epsilon\Delta^{m}c_4^ic_6^j$ and $ (3^\epsilon\Delta^m dc_4^ic_6^j)^\vee $ pair to the generator of $ \Z\cong\pi_{-21}Tmf $. The pairing on torsion depends even more on the homotopy theory, and interestingly not only on the differentials, but also on the exotic multiplications by $ \alpha $. The non-negatively graded part is $ \Z[\Delta^3] $ tensored with the pattern in Figure \ref{fig:TorPos}, whereas the negatively graded part is $ (\Z[\Delta^3]/\Delta^\infty )\frac{d^\vee}{\lambda_1\lambda_2}$ tensored with the elements depicted in Figure \ref{fig:TorNeg}. \begin{figure}[h] \centering \includegraphics[width=\textwidth]{figtorsionplus} \caption{Torsion in positive degrees}\label{fig:TorPos} \end{figure} Everything pairs to $ \alpha^\vee\beta^5(\Delta^2)^\vee $, which is $ d_9 (\frac{(3d)^\vee}{\lambda_1\lambda_2}) $, i.e. the image under $ d_9 $ of $ 1/3 $ of the dualizing class. Even though $ \alpha^\vee\beta^5(\Delta^2)^\vee $ is zero in the homotopy groups of $ Tmf $, the corresponding element in the homotopy groups of the $ K(2) $-local sphere is nontrivial \cite{\HKM}, thus it makes sense to talk about the pairing. \begin{figure}[h] \centering \includegraphics[width=\textwidth]{figtorsionminus} \caption{Torsion in negative degrees}\label{fig:TorNeg} \end{figure} \section{The Tate Spectrum} In this section we will relate the duality apparent in the homotopy groups of $ Tmf\simeq Tmf(2)^{hS_3}$ to the vanishing of the associated Tate spectrum. The objective is to establish the following: \begin{thm}\label{thm:norm} The norm map $ Tmf(2)_{hS_3}\to Tmf(2)^{hS_3} $ is an equivalence. \end{thm} A key role is played by the fact that $ S_3 $ has periodic cohomology.
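Concretely, since $ 2 $ is invertible, the Tate cohomology of $ S_3 $ is periodic of period $ 4 $: in the notation of Proposition \ref{prop:TateCohomology}, cup product with the invertible class $ b^2 $ of cohomological degree $ 4 $ induces isomorphisms
\[ \tH^n(S_3,M)\xrightarrow{\ \cdot b^2\ }\tH^{n+4}(S_3,M) \]
for each of the modules $ M $ considered in Section \ref{sec:grpcohomology}.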
Generalized Tate cohomology was first introduced by Adem-Cohen-Dwyer in \cite{MR1022669}. However, it was Greenlees and May who generalized and improved the theory, and, more importantly, developed excellent computational tools in \cite{\GreenleesMay}. In this section, we shall summarize the relevant results from \cite{\GreenleesMay} and apply them to the problem at hand. Suppose $ k $ is a spectrum with an action by a finite group $ G $; in the terminology of equivariant homotopy theory, this is known as a \emph{naive} $ G $-spectrum. There is a norm map \cite[5.3]{\GreenleesMay}, \cite[II.7.1]{\LewisMaySteinberger} from the homotopy orbit spectrum $ k_{hG} $ to the homotopy fixed point spectrum $ k^{hG} $ whose cofiber we shall call the \emph{Tate spectrum} associated to the $ G $-spectrum $ k $, and for simplicity denote it by $ k^{tG} $ \begin{align}\label{seq:normfixed} k_{hG}\to k^{hG}\to k^{tG}. \end{align} According to \cite[Proposition 3.5]{\GreenleesMay}, if $ k $ is a ring spectrum, then so are the associated homotopy fixed point and Tate spectra, and the map between them is a ring map. We can compute the homotopy groups of each of the three spectra in \eqref{seq:normfixed} using the Atiyah-Hirzebruch-type spectral sequences $ \check{E_*}, E_*,\hat{E}_* $ \cite[Theorems 10.3, 10.5, 10.6]{\GreenleesMay} \begin{align*} \check{E}_2^{p,q}=H_{-p}(G,\pi_q k) &\Rightarrow \pi_{q-p} k_{hG}\\ E_2^{p,q}=H^p(G, \pi_q k) &\Rightarrow \pi_{q-p} k^{hG}\\ \hat{E}_2^{p,q}=\tH^p(G, \pi_q k) &\Rightarrow \pi_{q-p} k^{tG}, \end{align*} which are conditionally convergent. As these spectral sequences can be constructed by filtering $ EG $, the first two are in fact the homotopy fixed point and homotopy orbit spectral sequences. In the case when $ k=Tmf(2) $, the first two are half-plane spectral sequences, whereas the third one is in fact a full plane spectral sequence. Moreover, the last two are spectral sequences of differential algebras. 
The norm cofibration sequence \eqref{seq:normfixed} relates these three spectral sequences by giving rise to maps between them, which on the $ E_2 $-terms are precisely the standard long exact sequence in group cohomology: \begin{align}\label{les:tate} \cdots \to H_{-p}(G,M)\to H^p(G,M)\to \tH^p(G,M)\to H_{-p-1}(G,M)\to\cdots \end{align} The map of spectral sequences $ E_*\to \hat{E}_* $ is compatible with the differential algebra structure, which will be important in our calculations as it will allow us to determine the differentials in the Tate spectral sequence (and then further in the homotopy orbit spectral sequence). \begin{figure}[p] \centering \includegraphics[height=.9\textheight]{figtate.eps} \caption{Tate spectral sequence \eqref{ss:tate} for $ \pi_* Tmf(2)^{tS_3} $}\label{fig:tate} \end{figure} \begin{proof}[Proof of Theorem \ref{thm:norm} ] We prove that the norm is an equivalence by showing that the associated Tate spectrum is contractible, using the above Tate spectral sequence for the case of $ k=Tmf(2) $ and $ G=S_3 $ \begin{align}\label{ss:tate} \tH^p (S_3,\pi_{2t-q} Tmf(2)) = \tH^p(S_3,H^q(\Ml{2},\om{}{t}))\Rightarrow \pi_{2t-p-q} Tmf(2)^{tS_3}. \end{align} The $ E_2 $-page is the Tate cohomology \[ \tH^*(S_3,\Lambda\oplus \Sigma^{-9}\Lambda_{\sgn}^\vee), \] which we computed in Proposition \ref{prop:TateCohomology} to be \[\displaystyle{ R\oplus Rbd\oplus \frac{\eta d^\vee}{\lambda_1\lambda_2} R^\vee \oplus \frac{\eta b^\vee}{\lambda_1\lambda_2} R^\vee}, \] where $ R=\Z/3[a,b^{\pm 2},\Delta]/(a^2) $, and $ R^\vee=\Ext^1_{\Z}(R,\Z) $. Comparing the two methods for computing the $ E_2 $-page of the homotopy fixed point spectral sequence \eqref{ss:htpyfixed} identifies $ \frac{\eta d^\vee}{\lambda_1\lambda_2} $ with $ \frac{\alpha}{\Delta} $ and $ \frac{\eta b^\vee}{\lambda_1\lambda_2} $ with $ \frac{\alpha}{\beta} $. 
Further, we can identify $ b^\vee $ with $ b^{-1} $ and similarly for $ \Delta $, as this does not change the ring structure and does not cause ambiguity. We obtain \begin{align*} &\frac{\eta d^\vee}{\lambda_1\lambda_2} R^\vee =\frac{\alpha}{\Delta} R^\vee =\frac{1}{\Delta}\Z/3[a,b^{\pm 2},\Delta^{-1}]/(a^2)\\ &\frac{\eta b^\vee}{\lambda_1\lambda_2}R^\vee = \frac{\alpha}{\beta}R^\vee = \frac{b}{d}\Z/3[a,b^{\pm 2},\Delta^{-1}]/(a^2)=\frac{\beta}{\Delta}\Z/3[a,b^{\pm 2},\Delta^{-1}]/(a^2). \end{align*} Summing up, we get the $ E_2 $-page of the Tate spectral sequence \begin{align}\label{E2tate} \tH^*(S_3,\pi_*Tmf(2)) = \Z/3[\alpha,\beta^{\pm 1},\Delta^{\pm 1}]/(\alpha^2), \end{align} depicted in Figure \ref{fig:tate}. The compatibility with the homotopy fixed point spectral sequence implies that $ d_5(\Delta)=\alpha\beta^2 $ and $ d_9(\alpha\Delta^2)=\beta^5 $; by multiplicativity, we obtain the differential pattern shown in Figure \ref{fig:tate}. From the chart we see that the tenth page of the spectral sequence is zero, and, as this was a conditionally convergent spectral sequence, it follows that it is strongly convergent, thus the Tate spectrum $ Tmf(2)^{tS_3} $ is contractible. \end{proof} \subsection{Homotopy Orbits} \begin{figure}[p] \centering \includegraphics[angle=90,height=.9\textheight]{FigHtpyOrbit} \caption{Homotopy orbit spectral sequence \eqref{ss:htpyorbit} for $ \pi_* Tmf $}\label{fig:htpyorbit} \end{figure} As a corollary of the vanishing of the Tate spectrum $ Tmf(2)^{tS_3} $, we fully describe the homotopy orbit spectral sequence \begin{align}\label{ss:htpyorbit} H_{s}(S_3,\pi_t Tmf(2))\Rightarrow \pi_{t+s}Tmf(2)_{hS_3}=\pi_{t+s}Tmf. \end{align} From \eqref{E2tate}, we obtain the higher homology groups as well as the differentials.
If $ S $ denotes the ring $\Z/3[\beta^{-1},\Delta^{\pm 1}] $, then \[ \bigoplus_{s>0}H_s(S_3,\pi_t Tmf(2))= \Sigma^{-1}( \beta^{-1} S\oplus \alpha\beta^{-2} S).\] (The suspension shift is a consequence of the fact that the isomorphism comes from the coboundary map $ \tH^*\to H_{-*-1} $.) The coinvariants are computed in Lemma \ref{lemma:coinvariants}. The spectral sequence is illustrated in Figure \ref{fig:htpyorbit}, with the topological grading on the horizontal axis and, for consistency, the cohomological grading on the vertical axis. \section{Duality for $ Tmf $} In this section we finally combine the above results to arrive at self-duality for $ Tmf $. The major ingredient in the proof is Theorem \ref{thm:norm}, which gives an isomorphism between the values of a right adjoint (homotopy fixed points) and a left adjoint (homotopy orbits). This situation often leads to a Grothendieck-Serre-type duality, which in reality is a statement that a functor (derived global sections) which naturally has a left adjoint (pullback) also has a right adjoint \cite{\FauskHuMay}. Consider the following chain of equivalences involving the Anderson dual of $ Tmf $ \begin{align*} \IZ Tmf &= F(Tmf,\IZ) \leftarrow F(Tmf(2)^{hS_3},\IZ) \to F(Tmf(2)_{hS_3},\IZ) \\ &\simeq F(Tmf(2),\IZ)^{hS_3}\simeq(\Sigma^9Tmf(2)_{\sgn})^{hS_3}, \end{align*} which gives a homotopy fixed point spectral sequence converging to the homotopy groups of $ \IZ Tmf $. From our calculations in Section \ref{sec:grpcohomology}, made precise in Proposition \ref{prop:e2shift}, the $ E_2 $-term of this spectral sequence is isomorphic to the $ E_2 $-term for the homotopy fixed point spectral sequence for $ Tmf(2)^{hS_3} $, shifted by $ 21 $ to the right. A shift of $ 9 $ comes from the suspension, and an additional shift of $ 12 $ comes from the twist by sign (which is realized by multiplication by the element $ d $ whose topological degree is $ 12 $).
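In terms of $ E_2 $-pages, Proposition \ref{prop:e2shift} thus reads
\[ H^s(S_3,\pi_t\IZ Tmf(2))\cong H^s(S_3,\pi_{t-21}Tmf(2)), \]
with the total shift $ 21=9+12 $ accounting for the suspension and the sign twist.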
It is now plausible that $ \IZ Tmf $ might be equivalent to $ \Sigma^{21}Tmf $; it only remains to verify that the differential pattern is as desired. To do this, we use methods similar to the comparison of spectral sequences in \cite{MR604321} and, in the algebraic setting, in \cite{\Deligne}: a commutative square of spectral sequences, some of which collapse, allowing for the tracking of differentials. \begin{thm}\label{MainThm} The Anderson dual of $ Tmf[1/2] $ is $ \Sigma^{21} Tmf[1/2]$. \end{thm} \begin{rem} Here again Anderson duals are taken in the category of spectra with $ 2 $ inverted, and $ 2 $ will implicitly be inverted everywhere in order for the presentation to be more compact. In particular, $ \Z $ will denote $ \Z[1/2] $, and $ \Q/\Z $ will denote $ \Q/\Z[1/2] $. \end{rem} \begin{proof} For brevity, let us introduce the following notation: for $ R $ any of $ \Z,\Q,\Q/\Z $, we let $ A^\bullet_{R} $ be the cosimplicial spectrum $ F(ES_{3+}\smsh{S_3} Tmf(2),\IR) $. In particular we have that \[ A^h_R=F\bigl( (S_3)_+^{\wedge (h+1)}\smsh{S_3}Tmf(2),\IR\bigr). \] Then the totalization $ \Tot A^\bullet_{\Z}\simeq \IZ Tmf $ is equivalent to the fiber of the natural map $ \Tot A^\bullet_{\Q} \to \Tot A^\bullet_{\Q/\Z} $. In other words, we are looking at the diagram \[\xymatrix{ A^0_{\Q}\ar@2[r]\ar[d] & A^1_{\Q}\ar@3[r]\ar[d] &A^2_{\Q}\ar[d] \cdots\\ A^0_{\Q/\Z}\ar@2[r] & A^1_{\Q/\Z}\ar@3[r] &A^2_{\Q/\Z}\cdots\\ }\] and the fact that totalization commutes with taking fibers gives us two ways to compute the homotopy groups of $ \IZ Tmf $. Taking the fibers first gives rise to the homotopy fixed point spectral sequence whose differentials we are to determine: each vertical diagram gives rise to an Anderson duality spectral sequence \eqref{ss:AndersonDual}, which collapses at $ E_2 $, as the homotopy groups of each $ \displaystyle{ \bigl((S_3)_+^{\wedge (h+1)}\smsh{S_3}Tmf(2)\bigr) }$ are free over $ \Z $.
On the other hand, assembling the horizontal direction first gives a map of the $ \Q $ and $ \Q/\Z $-duals of the homotopy fixed point spectral sequence for the $ S_3$-action on $ Tmf(2) $; this is because $ \Q $ and $ \Q/\Z $ are injective $ \Z $-modules, thus dualizing is an exact functor. Let $ R^\bullet $ denote the standard injective resolution of $ \Z $, namely $ R^0=\Q $ and $ R^1=\Q/\Z $ related by the obvious quotient map. Then, schematically, we have a diagram of $ E_1 $-pages \[\xymatrix{ \Hom_{\Z}\big(\Z[S_3]^{\otimes(h+1)}\tensor{\Z[S_3]}\pi_t Tmf(2), R^v\big) \ar@2[d]_-*+[o][F-]{A} \ar@2[r]^-*+[o][F-]{B} &\Hom_{\Z}\big(\pi_{t+h}Tmf(2)_{hS_3},R^v\big)\ar@2[d]^-*+[o][F-]{D}\\ \Hom_{\Z}\big(\Z[S_3]^{\otimes(h+1)}\tensor{\Z[S_3]}\pi_t Tmf(2), \Z\big)\ar@2[r]^-*+[o][F-]{C} & \pi_{-t-h-v} \IZ Tmf(2)_{hS_3}. }\] The spectral sequence $ A $ collapses at $ E_2 $, and $ C $ is the homotopy fixed point spectral sequence that we are interested in: its $ E_2 $ page is the $ S_3 $-cohomology of the $ \Z $-duals of $ \pi_* Tmf(2) $, which are the homotopy groups of the Anderson dual of $ Tmf(2) $ \begin{align*} &H^* \Hom_{\Z}\big(\Z[S_3]^{\otimes(h+1)}\tensor{\Z[S_3]}\pi_t Tmf(2), \Z\big)\\ &\cong H^* \Hom_{\Z[S_3]}\big(\Z[S_3]^{\otimes(h+1)},\Hom_{\Z}(\pi_t Tmf(2),\Z)\big)\\ &\cong H^h(S_3,\pi_{-t}\IZ Tmf(2)). \end{align*} Indeed, the $ E_2 $-pages assemble in the following diagram \[\xymatrix{ \Ext^v_{\Z}\big(H_{h}(S_3,\pi_t Tmf(2)), \Z\big) \ar@2[d]_-*+[o][F-]{A} \ar@2[r]^-*+[o][F-]{B} &\Ext^v_{\Z}\big(\pi_{t+h}Tmf(2)_{hS_3},\Z\big)\ar@2[d]^-*+[o][F-]{D}\\ H^{h+v}(S_3,\pi_{-t}\IZ Tmf(2)) \ar@2[r]^-*+[o][F-]{C} & \pi_{-t-h-v} \IZ Tmf(2)_{hS_3}. }\] The spectral sequence $ A $ is the dual module group cohomology spectral sequence \eqref{ss:dualcohomology}, and it collapses, whereas $ D $ is the Anderson duality spectral sequence \eqref{ss:AndersonDual} which likewise collapses at $ E_2 $. 
The spectral sequence $ B $ is dual to the homotopy orbit spectral sequence $ \check E_* $ \eqref{ss:htpyorbit}, which we have completely described in Figure \ref{fig:htpyorbit}. Now \cite[Proposition 1.3.2]{\Deligne} tells us that the differentials in $ B $ are compatible with the filtration giving $ C $ if and only if $ A $ collapses, which holds in our case. Consequently, $ B $ and $ C $ are isomorphic. In conclusion, we read off the differentials from the homotopy orbit spectral sequence (Figure \ref{fig:htpyorbit}). The only non-zero differentials are $ d_5 $ and $ d_9 $. For example, the generators in degrees $ (6,3) $ and $ (2,7) $ support a $ d_5 $, as the corresponding elements in \eqref{ss:htpyorbit} are hit by a differential $ d_5 $. Similarly, the generator in $ (1,72) $ supports a $ d_9 $, as it corresponds to an element hit by a $ d_9 $. There is no room for any other differentials, and the chart is isomorphic to a shift by $ 21 $ of the one in Figure \ref{fig:HtpyFixed}. By now we have an abstract isomorphism of the homotopy groups of $ \IZ Tmf $ and $ \Sigma^{21}Tmf $, as $ \pi_*Tmf $-modules. As in Theorem \ref{thm:andersontmf2}, we build a map realizing this isomorphism by specifying the dualizing class and then extending using the $ Tmf $-module structure on $ \IZ Tmf $. \end{proof} As a corollary, we recover \cite[Proposition 2.4.1]{\Behrens}. \begin{cor} At odd primes, the Gross-Hopkins dual of $ L_{K(2)}Tmf $ is $ \Sigma^{22}L_{K(2)}Tmf $. \end{cor} \begin{proof} The spectrum $ Tmf $ is $ E(2) $-local, hence we can compute the Gross-Hopkins dual $ I_2Tmf $ as $ \Sigma L_{K(2)}\IZ Tmf $ by \eqref{AndersonGrossHopkins}. \end{proof} \bibliographystyle{amsalpha} \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2]{ \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2}
Krispy Kreme Offers St. Patrick’s Day Doughnuts To go along with the new Chocolate Chip Cookie Dough Doughnut and new Java Chip Doughnut, Krispy Kreme has introduced a few other seasonal offerings for a limited-time promotion. St. Patrick’s Day is Tuesday, March 17th and March Madness is right around the corner, so Krispy Kreme is celebrating with the following doughnuts: - Shamrock Doughnut – A new hand-decorated shamrock-shaped doughnut, dipped in green icing with a hand-decorated white icing border. - Chocolate Iced St. Patrick’s Day Sprinkles – Our traditional chocolate iced ring topped with our NEW St. Patrick’s Day sprinkles. - Basketball Doughnut – A hand-decorated basketball-shaped doughnut (shell) filled with Kreme Filling.
Editor for this issue: Takako Matsui <tako linguistlist.org> Title: Qur'anic Stylistics Subtitle: A Linguistic Analysis Series Title: Languages of the World 32 Publication Year: 2003 Publisher: Lincom GmbH Author: Hussein Abdul-Raof Paperback: ISBN: 3895868175, Pages: 50, Price: EUR 26. Contents: Linguistic and Stylistic Expressions Introduction Chapter One Linguistic and Textual Features of Qur'anic Discourse 1.1 Introduction 1.2 Qur'anic Linguistic Features 1.3 Qur' Lingfield(s): Language Description Subject Language(s): Quranic Arabic (Language Code: ARV) Written In: English (Language Code: ENG)
Some of the previous lessons of this course have mentioned "feeling the presence" of a person who is not physically with me. In other words, I have learned in some cases when a person is not around to feel the person's spirit. What happens if the person is physically with me and I feel the person's spirit anyway? We do this all the time with heart and soul opening partners, don't we? The energies of a heart and soul opening partner feel so familiar to us and so comfortable to us that without thinking we automatically begin feeling the heart and soul opening partner's spirit. With others whose energies are not so automatically pleasant to us, we can still with some slight additional focus feel the other's spirit. For example, let's suppose a man is slightly angry with me. My ego's normal reaction would be to close myself down in order to not feel the energy of his anger. But by closing myself down, I end up not feeling the positive aspects of his spirit either. What if I consciously said to my mind, "Instead of closing down, I'm going to feel his spirit totally," and I began to focus on feeling his spirit? Since I experience whatever I set my mind to experience, I would soon begin to feel his spirit, anger and all, wouldn't I? But the key word here is "all". His anger is very, very small compared to the fullness of his spirit, isn't it? When I feel all of the man's spirit, what happens to his anger? You guessed it. His anger begins to dissipate. The much larger part of his spirit, the part that dissipates his anger energy, is now being brought into his feeling awareness by virtue of my feeling it. In fact, this is technically known as a "miracle." He was angry. Now he is unable to hold onto his anger. Other examples? Maybe my ego would tell me about someone, "She is totally full of fear. Don't go near her." Yet by feeling her spirit in its fullness, her fear begins to turn into trust and confidence. Perhaps my ego would have me shun feeling a man's pain. 
But when I allow myself to feel his spirit his pain is soon replaced by peace and serenity. Although my ego might cringe at the idea of feeling a distraught man's death wish, by feeling the fullness of his true spirit his death wish becomes a wish for life. When a wish for death becomes a desire for life ... that's a miracle. So this week I make myself a miracle worker in yet another respect. I train myself to feel the spirit of other people in all kinds of situations. As I awaken each day this week, I thank my lucky stars that I am a miracle worker and I repeat a few times aloud a statement of this week's lesson: "All during the day today I practice feeling the spirit of others in all situations." Each hour of the day I take a 5 minute break to review my progress. During the hour just ended, did I take the time to feel the spiritual presence of those around me? In cases where I felt the presence of others, what were the results? Did people seem closer and more connected to me? Did I see any miracles occur, any changes of attitude or circumstances? During these 5 minute review periods I take a moment to re-dedicate myself to my practice for the coming hour. I might say in my mind a few times a statement like: "During this coming hour I REALLY feel the spirit of others in all cases." At the end of the day I prepare myself for sleep by taking 20 full minutes for meditation. During this meditation I don't attempt to feel anyone in particular, but allow myself to feel the spirit of the whole world. If I'm going to be a miracle worker, I might as well bring healing to the whole world, right? Before falling asleep I feel grateful towards myself. If I'm healing others with my meditations and my miracles, I am healing myself also ... and such a healing is worthy of gratitude. I might even say to myself a few times before falling to sleep: "If I feel the spirit of others TOTALLY ... I'm in bliss!" Also available free of charge online: Course in Political Miracles
\begin{document} \title{Generalised convexity with respect to families of affine maps} \author{Zakhar Kabluchko} \address{Institute of Mathematical Stochastics, Department of Mathematics and Computer Science, University of M\"{u}nster, Orl\'{e}ans-Ring 10, D-48149 M\"{u}nster, Germany} \email{zakhar.kabluchko@uni-muenster.de} \author{Alexander Marynych} \address{Faculty of Computer Science and Cybernetics, Taras Shevchenko National University of Kyiv, Kyiv, Ukraine} \email{marynych@unicyb.kiev.ua} \author{Ilya Molchanov} \address{Institute of Mathematical Statistics and Actuarial Science, University of Bern, Alpeneggstr. 22, 3012 Bern, Switzerland} \email{ilya.molchanov@stat.unibe.ch} \date{\today} \begin{abstract} The standard closed convex hull of a set is defined as the intersection of all images, under the action of a group of rigid motions, of a half-space containing the given set. In this paper we propose a generalisation of this classical notion, which we call a $(K,\HH)$-hull and which is obtained from the above construction by replacing the half-space with some other convex closed subset $K$ of the Euclidean space, and the group of rigid motions with a subset $\HH$ of the group of invertible affine transformations. The main focus is on the analysis of $(K,\HH)$-convex hulls of random samples from $K$. \end{abstract} \keywords{Convex body, generalised convexity, generalised curvature measure, matrix Lie group, Poisson hyperplane tessellation, zero cell} \subjclass[2010]{Primary: 60D05; secondary: 52A01, 52A22} \maketitle \section{Introduction} \label{sec:introduction} Let $\HH$ be a nonempty subset of the product $\R^d\times \gl$, where $\gl$ is the group of all invertible linear transformations of $\R^d$.
We regard elements of $\HH$ as invertible affine transformations of $\R^d$ by identifying $(x,g)\in\HH$ with a mapping \begin{displaymath} \R^d \ni y\mapsto g(y+x)\in\R^d, \end{displaymath} which first translates the argument $y$ by the vector $x$ and then applies the linear transformation $g\in\GG$ to the translated vector. Fix a convex closed set $K$ in $\R^d$ which is distinct from the whole space. For a given set $A\subseteq \R^d$ consider the set $$ \conv_{K,\HH}(A):=\bigcap_{(x,g)\in\HH:\;A\subseteq g(K+x)} g(K+x), $$ where $g(B):=\{gz:z\in B\}$ and $B+x:=\{z+x:z\in B\}$, for $g\in\GG$, $x\in\R^d$, and $B\subseteq\R^d$. If there is no $(x,g)\in\HH$ such that $g(K+x)$ contains $A$, the intersection on the right-hand side is taken over the empty family and we stipulate that $\conv_{K,\HH}(A)=\R^d$. The set $\conv_{K,\HH}(A)$ is by definition the intersection of all images of $K$ under affine transformations from $\HH$, which contain $A$. We call the set $\conv_{K,\HH}(A)$, which is easily seen to be closed and convex, the $(K,\HH)$-hull of $A$; the set $A$ is said to be $(K,\HH)$-convex if it coincides with its $(K,\HH)$-hull. This definition of the convex hull fits the abstract convexity concept described in \cite{MR3767739}. In what follows we assume that $\HH$ contains the pair $(0,\II)$, where $\II$ is the unit matrix. If $A\subset K$, then \begin{equation} \label{eq:conv_KH_in_K} \conv_{K,\HH}(A)\subset K. \end{equation} It is obvious that a larger family $\HH$ results in a smaller $(K,\HH)$-hull, that is, if $\HH\subset \HH_1$, then $$ \conv_{K,\HH_1}(A)\subseteq \conv_{K,\HH}(A),\quad A\subset \R^d. $$ In particular, if $\HH=\{0\}\times \{\II\}$ and $A\subset K$, then $$ \conv_{K,\{0\}\times \{\II\}}(A)=K, $$ so equality in \eqref{eq:conv_KH_in_K} is possible. 
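To make the definition concrete, the following elementary example (ours, not taken from the cited references) works out the simplest nontrivial case by hand.

```latex
% Illustrative example: the hull of a two-point set with respect to
% translates of a disc. Take d=2, let K=B_r be the closed disc of
% radius r centred at the origin, and let \HH=\R^2\times\{\II\}
% (translations only). For A=\{p,q\} the definition gives
\[
\conv_{B_r,\R^2\times\{\II\}}(\{p,q\})
   =\bigcap_{x\in\R^2:\;p,q\in B_r+x}(B_r+x).
\]
% If \|p-q\|\le 2r, this is the lens-shaped region bounded by the two
% circular arcs of radius r through p and q, which strictly contains
% the segment \conv(\{p,q\}) whenever p\ne q. If \|p-q\|>2r, no
% translate of B_r contains both points, and the hull is all of \R^2
% by the convention adopted above.
```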
Since the $(K,\HH)$-hull is always a convex closed set which contains $A$, $$ \conv(A)\subseteq \conv_{K,\HH}(A),\quad A\subset \R^d, $$ where $\conv$ denotes the operation of taking the conventional closed convex hull. If $K$ is a fixed closed half-space and $\HH=\R^d\times \so$, where $\so$ is the special orthogonal group, then $\conv_{K,\HH}(A)=\conv(A)$, since every closed half-space can be obtained as an image of a fixed closed half-space under a rigid motion. The equality here can also be achieved for an arbitrary closed convex set $K$ and bounded $A\subset \R^d$ upon letting $\HH=\R^d\times \gl$, see Proposition~\ref{prop:largest_group} below. A nontrivial example, in which $\conv_{K,\HH}(A)$ differs both from $\conv(A)$ and from $K$, is as follows. If $K$ is a closed ball of a fixed radius and $\HH=\R^d\times\{\II\}$, then $\conv_{K,\HH}(A)$ is known in the literature as the ball hull of $A$, and, more generally, if $K$ is an arbitrary convex body (that is, a convex compact set with nonempty interior) and $\HH=\R^d\times \{\II\}$, then $\conv_{K,\HH}(A)$ is called the $K$-hull of $A$, see \cite{jah:mar:ric17} and \cite{MarMol:2021}. It is also clear from the definition that further nontrivial examples could be obtained from this case by enlarging the family of linear transformations involved in $\HH$. In the above examples the set $\HH$ takes the form $\TT\times\GG$ for some $\TT\subset \R^d$ and $\GG\subset \gl$. We implicitly assume this whenever we specify $\TT$ and $\GG$. Furthermore, many interesting examples arise if $\TT$ is a linear subspace of $\R^d$ and $\GG$ is a subgroup of $\gl$. The paper is organised as follows. In Section~\ref{sec:KH_hulls} we analyse some basic properties of $(K,\HH)$-hulls and show how various known hulls in {\it convex geometry} can be obtained as particular cases of our construction. However, the main focus of our paper is on the probabilistic aspects of $(K,\HH)$-hulls.
As for many other models of {\it stochastic geometry}, we shall study $(K,\HH)$-hulls of random samples from $K$ as the size of the sample tends to infinity. In Section~\ref{sec:random} we introduce a random closed set which can be thought of as a variant of the Minkowski difference between the set $K$ and the $(K,\HH)$-hull of a random sample from $K$. The limit theorems for this object are formulated and proved in Section \ref{sec:poiss-point-proc}. Various properties of the limiting random closed set are studied in Section~\ref{sec:zero-cells-matrix}. A number of examples for various choices of $K$ and $\HH$ are presented in Section~\ref{sec:examples}. Some technical results related mostly to the convergence in distribution of random closed sets are collected in the Appendix. \section{\texorpdfstring{$(K,\HH)$}{(K,H)}-hulls of subsets of \texorpdfstring{$\R^d$}{Rd}}\label{sec:KH_hulls} We first show that the $(K,\HH)$-hull is an idempotent operation. \begin{proposition} If $A\subset K$, then \begin{displaymath} \conv_{K,\HH}\big(\conv_{K,\HH}(A)\big)=\conv_{K,\HH}(A). \end{displaymath} \end{proposition} \begin{proof} We only need to show that the left-hand side is a subset of the right-hand one. Assume that $z$ belongs to the left-hand side, and $z\notin g(K+x)$ for at least one $(x,g)\in\HH$ such that $A\subset g(K+x)$. The latter implies $\conv_{K,\HH}(A)\subset g(K+x)$, so that $\conv_{K,\HH}(\conv_{K,\HH}(A))$ is also a subset of $g(K+x)$, which is a contradiction. \end{proof} It is easy to see that $\conv_{K,\HH}(K)=K$ and $\conv_{K,\HH}(A)$ is equal to the intersection of all $(K,\HH)$-convex sets containing $A$. For each $A\subset \R^d$, denote \begin{displaymath} K\ominus_{K,\HH} A:=\big\{(x,g)\in\HH: A\subset g(K+x)\big\}. 
\end{displaymath} If $\HH=\R^d\times\{\II\}$, the set $\big\{x\in\R^d: (-x,\II)\in (K\ominus_{K,\HH} A)\big\}$ is the usual Minkowski difference \begin{displaymath} K\ominus A:=\{x\in\R^d: A+x\subseteq K\} \end{displaymath} of $K$ and $A$, see p.~146 in~\cite{schn2}. By the definition of $(K,\HH)$-hulls, \begin{displaymath} \conv_{K,\HH}(A)=\bigcap_{(x,g)\in K\ominus_{K,\HH} A} g(K+x), \end{displaymath} and, therefore, $A$ is $(K,\HH)$-convex if and only if \begin{displaymath} A=\bigcap_{(x,g)\in K\ominus_{K,\HH} A} g(K+x). \end{displaymath} The following is a counterpart of Proposition 2.2 in \cite{MarMol:2021}. \begin{lemma}\label{lem:minkowski} For every $A\subset \R^d$, we have $$ K\ominus_{K,\HH} A=K\ominus_{K,\HH} \big(\conv_{K,\HH}(A)\big). $$ \end{lemma} \begin{proof} Since $A\subset \conv_{K,\HH}(A)$, the right-hand side is a subset of the left-hand one. Let $(x,g)\in K\ominus_{K,\HH} A$. Then $A\subseteq g(K+x)$, and, therefore, $\conv_{K,\HH}(A)\subseteq g(K+x)$. The latter means that $(x,g)\in K\ominus_{K,\HH} (\conv_{K,\HH}(A))$. \end{proof} Now we shall investigate how $\conv_{K,\HH}(A)$ looks for various choices of $K$ and $\HH$, in particular, how various known hulls (conventional, spherical, conical, etc.) can be derived as particular cases of our model. In order to proceed, we recall some basic notions of convex geometry. Let $K$ be a convex closed set and let \begin{displaymath} h(K,u):=\sup\big\{\langle x,u\rangle : x\in K\big\},\quad u\in\R^d \end{displaymath} denote the support function of $K$ in the direction $u$, where $\langle x,u\rangle$ is the scalar product. Put \begin{displaymath} \dom(K):=\big\{u\in\R^d : h(K,u)<\infty\big\} \end{displaymath} and note that $\dom(K)=\R^d$ for compact sets $K$. The cone $\dom(K)$ is sometimes called the barrier cone of $K$, see the end of Section~2 in \cite{Rockafellar:1972}.
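Two elementary computations (ours, easily verified from the definitions) may help to fix the notation for $h(K,\cdot)$ and $\dom(K)$.

```latex
% Support functions and barrier cones in two basic cases.
% For the unit ball,
\[
h(B_1,u)=\|u\|,\qquad \dom(B_1)=\R^d,
\]
% while for the closed half-space K=\{x:\langle x,e\rangle\le 0\}
% with a fixed unit vector e,
\[
h(K,u)=
\begin{cases}
0, & u=\lambda e \text{ for some } \lambda\ge 0,\\
\infty, & \text{otherwise},
\end{cases}
\qquad
\dom(K)=\{\lambda e:\lambda\ge 0\},
\]
% so the barrier cone of an unbounded convex set may be as small as
% a single ray.
```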
For $u\in \dom(K)$, let $H(K,u)$, $H^{-}(K,u)$ and $F(K,u)$ denote the support plane, supporting halfspace and support set of $K$ with outer normal vector $u\neq0$, respectively. Formally, \begin{displaymath} H(K,u):=\big\{x\in\R^d:\langle x,u\rangle = h(K,u)\big\},\quad H^{-}(K,u):=\big\{x\in\R^d:\langle x,u\rangle \leq h(K,u)\big\} \end{displaymath} and $F(K,u)=H(K,u)\cap K$. We shall also need notions of supporting and normal cones of $K$ at a point $v\in K$. The supporting cone at $v\in K$ is defined by \begin{displaymath} S(K,v):=\cl\Big(\bigcup_{\lambda>0}\lambda(K-v)\Big), \end{displaymath} where $\cl$ is the topological closure, see p.~81 in~\cite{schn2}. If $v\in F(K,u)$ for some $u\in \dom(K)$, then \begin{equation} \label{eq:cone_in_halfspace} S(K,v)+v\subseteq H^{-}(K,u). \end{equation} For $v$ which belong to the boundary $\partial K$ of $K$, the normal cone $N(K,v)$ to $K$ at $v$ is defined by $$ N(K,v):=\big\{u\in\R^d\setminus\{0\}:v\in H(K,u)\big\}\cup\{0\}. $$ \begin{proposition} \label{prop:cone} Let $K$ be a convex closed set, and let $\HH=\TT\times\GG$, where $\TT=\R^d$, and $\GG=\{\lambda\II:\lambda>0\}$ is the group of all scaling transformations. If $A\subset K$, then \begin{equation} \label{eq:7} \conv_{K,\HH}(A)=\bigcap_{x\in\R^d,v\in\partial K, A\subset S(K,v)+v+x} \big(S(K,v)+v+x\big), \end{equation} that is, $\conv_{K,\HH}(A)$ is the intersection of all translations of supporting cones to $K$ that contain $A$. \end{proposition} \begin{proof} If $A\subset \lambda K+x$, then $A\subset S(K,v)+v+x$ for any $v\in K$. Hence, we only need to show that the right-hand side of \eqref{eq:7} is contained in the left-hand one. Assume that $z\in S(K,v)+v+x$ for all $v\in\partial K$ and $x\in\R^d$ such that $A\subset S(K,v)+v+x$, but $z\notin \lambda_0 K+y_0$ for some $\lambda_0>0$ and $y_0\in\R^d$ with $A\subset \lambda_0 K+y_0$. 
By the separating hyperplane theorem, see Theorem~1.3.4 in \cite{schn2}, there exists a hyperplane $H_0\subset \R^d$ such that $\lambda_0 K+y_0\subset H^{-}_0$ and $z\in H_0^{+}$, where $H_0^{\pm}$ are the open half-spaces bounded by $H_0$. Let $u_0$ be the unit outer normal vector to $H_0^-$ and note that $u_0\in \dom(K)$. Choose an arbitrary $v_0$ from the support set $F(K,u_0)$. Since $\lambda_0 K=\lambda_0 (K-v_0)+\lambda_0 v_0\subset S(K,v_0)+\lambda_0 v_0$, we have $A\subset S(K,v_0)+\lambda_0 v_0+y_0$. However, \begin{align*} S(K,v_0)+\lambda_0 v_0+y_0 &=S(\lambda_0 K,\lambda_0 v_0)+\lambda_0 v_0+y_0\\ &\subseteq H^{-}(\lambda_0 K_0,u_0)+y_0 =H^{-}(\lambda_0 K_0+y_0,u_0)\subseteq H_0^{-}\cup H_0, \end{align*} where the penultimate inclusion follows from \eqref{eq:cone_in_halfspace}. Thus, $z\notin S(K,v_0)+v_0+x_0$ with $x_0=(\lambda_0-1)v_0+y_0$, and $S(K,v_0)+v_0+x_0$ contains $A$. The obtained contradiction completes the proof. \end{proof} \begin{corollary} \label{cor:smooth} If $K$ is a smooth convex body (meaning that the normal cone at each boundary point is one-dimensional), $\TT=\R^d$, and $\GG=\{\lambda\II:\lambda>0\}$ is the group of all scaling transformations, then $\conv_{K,\HH}(A)=\conv(A)$, for all $A\subset K$. \end{corollary} \begin{proof} Since $K$ is a smooth convex body, $\dom(K)=\R^d$ and its supporting cone at each boundary point is equal to the supporting half-space. The convex hull of $A$ is exactly the intersection of all such half-spaces. \end{proof} The next result formalises an intuitively obvious fact that the $(K,\HH)$-hull of a bounded set $A$ coincides with $\conv(A)$ for arbitrary $K$ provided $\HH$ is sufficiently rich, in particular, if $\HH=\R^d\times\gl$. 
\begin{proposition}\label{prop:largest_group} Let $K$ be a convex closed set, $\TT=\R^d$, and let $\GG$ be the group of all scaling and orthogonal transformations of $\R^d$, that is, $$ \GG=\big\{x\mapsto \lambda g(x) : \lambda>0,\; g\in\so\big\}, $$ where $\so$ is the special orthogonal group of $\R^d$. Then $\conv_{K,\HH}(A)=\conv(A)$ for all bounded $A\subset \R^d$. \end{proposition} \begin{proof} It is clear that $\conv(A)\subset \conv_{K,\HH}(A)$. In the following we prove the opposite inclusion. Since $A\subseteq g(K+x)$ if and only if $\conv(A)\subseteq g(K+x)$, we have $\conv_{K,\HH}(A) = \conv_{K,\HH}(\conv(A))$ and there is no loss of generality in assuming that $A$ is compact and convex. Further, by passing to subspaces, it is possible to assume that $K$ has a nonempty interior. Take a point $z\in \R^d\setminus A$. We need to show that there exists a pair $(x,g)\in \R^d\times \GG$ (depending on $z$) such that $z\notin g(K+x)$ and $A\subset g(K+x)$. By the separating hyperplane theorem, see Theorem~1.3.4 in \cite{schn2}, there exists a hyperplane $H\subset \R^d$ such that $A\subset H^{-}$ and $z\in H^{+}$. If $K$ is compact, Theorem~2.2.5 in~\cite{schn2} implies that the boundary of $K$ contains at least one point at which the supporting cone is a closed half-space. This also holds for a convex closed $K$, which is not necessarily bounded, by taking intersections of $K$ with a growing family of closed Euclidean balls. Let $v\in \partial K$ be such a point. After applying an appropriate translation $x_0$ and an orthogonal transformation $g_0\in\so$, we may assume that the supporting cone $S(g_0 (K+x_0),g_0 (v+x_0))$ is the closure of $H^{-}$. Thus, $$ A\subseteq \bigcup_{\lambda>0}\lambda \big(g_0 (K-v)\big)\quad \text{and}\quad z\notin \bigcup_{\lambda>0}\lambda \big(g_0 (K-v)\big). $$ It remains to show that there exists a $\lambda_0>0$ such that $A\subseteq \lambda_0 (g_0 (K-v))$.
Assume that for every $n\in\NN$ there exists an $a_{n}\in A$ such that $a_n\notin n(g_0 (K-v))$. Since $A$ is compact, there is a subsequence $(a_{n_j})$ converging to $a\in A\subset H^{-}$, as $j\to\infty$. Thus, there exists a $\lambda_0>0$ such that $a$ lies in the interior of $\lambda_0(g_0 (K-v))$. Hence, $a_{n_j}\in \lambda_0(g_0 (K-v))$ for all sufficiently large $j$, which is a contradiction. \end{proof} \begin{remark} For unbounded sets $A$ the claim of Proposition~\ref{prop:largest_group} is false in general. As an example, one can take $d=2$, $A$ is a closed half-space and $K$ is an acute closed wedge. Thus, $\conv_{K,\HH}(A)=\R^2$, whereas $\conv(A)=A$. \end{remark} \begin{proposition}\label{eq:ellipses} Let $\GG=\gl$ be the general linear group, $\TT=\{0\}$, and let $K=B_1$ be the unit ball in $\R^d$. Then, for arbitrary compact set $A\subset \R^d$, it holds $\conv_{K,\HH}(A) = \conv(A\cup \check{A})$, where $\check{A}=\{-z:z\in A\}$. \end{proposition} \begin{proof} The images of the unit ball under the elements of $\gl$ are all ellipsoids centered at $0$. Since each of these ellipsoids is origin symmetric and convex, it is clear that $\conv_{K,\HH}(A) \supseteq \conv(A\cup \check{A})$. Let us prove the converse inclusion. Since replacing $A$ by the convex compact set $\conv(A\cup \check{A})$ does not change its $(K,\HH)$-hull, it suffices to assume that $A$ is an origin symmetric convex compact set. Let us take some $z\notin A$. We need to construct an ellipsoid $F$ centered at the origin and such that $A\subset F$, whereas $z\notin F$. By the separating hyperplane theorem, see Theorem~1.3.4 in \cite{schn2}, there exists an affine hyperplane $H\subset \R^d$ such that $A\subset H^{-}$ and $z\in H^{+}$, where $H^{\pm}$ are open half-spaces bounded by $H$. Let $x=(x_1,\ldots, x_d)$ be the coordinate representation of a generic point in $\R^d$. 
After applying an orthogonal transformation, we may assume that the hyperplane $H$ is $\{x_1 = a\}$ for some $a>0$. Then, $A\subset \{|x_1|<a\}$, while $z\in \{x_1>a\}$. Hence, $A \subset \{x\in \R^d: |x_1|\leq a-\eps, x_2^2+\cdots + x_d^2 \leq R^2\}=:D$ for sufficiently small $\eps>0$ and sufficiently large $R>0$. Clearly, there is an ellipsoid $F$ centered at $0$, containing $D$ and contained in the strip $\{|x_1|<a\}$. By construction, we have $A\subset F$ and $z\notin F$, and the proof is complete. \end{proof} Our next example deals with conical hulls. \begin{proposition}\label{prop:conic_hull} Let $\TT=\{0\}$, $\GG=\so$ be the special orthogonal group, and let $K$ be the closed half-space in $\R^d$ such that $0\in\partial K$. If $A\subset K$, then $$ \conv_{K,\HH}(A)=\cl({\rm pos}(A)), $$ where $$ {\rm pos}(A):=\Big\{\sum_{i=1}^{m}\alpha_i x_i :\;\alpha_i\geq 0,\;x_i\in A,m=1,2,\ldots\Big\} $$ is the positive (or conical) hull of $A$. \end{proposition} \begin{proof} By definition, $\conv_{K,\HH}(A)$ is the intersection of all closed half-spaces which contain the origin on the boundary, because every such half-space is an image of $K$ under some orthogonal transformation. Since $\cl({\rm pos}(A))$ is the intersection of {\it all} convex closed cones which contain $A$, $\cl({\rm pos}(A))\subset \conv_{K,\HH}(A)$. On the other hand, every convex closed cone is the intersection of its supporting half-spaces, see Corollary~1.3.5 in \cite{schn2}. Since all these supporting half-spaces contain the origin on the boundary, $\cl({\rm pos}(A))$ is the intersection of {\it some} family of half-spaces containing the origin on the boundary, which means that $\conv_{K,\HH}(A)\subset \cl({\rm pos}(A))$. \end{proof} The next corollary establishes connections with a probabilistic model studied recently in \cite{kab:mar:tem:19}. We shall return to this model in Section~\ref{sec:examples}. 
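To illustrate Proposition~\ref{prop:conic_hull}, here is a minimal planar example (ours, checked directly from the definition of the positive hull).

```latex
% Positive hull of the two coordinate vectors in the plane.
% For A=\{(1,0),(0,1)\}\subset\R^2,
\[
{\rm pos}(A)=\big\{\alpha_1(1,0)+\alpha_2(0,1):\alpha_1,\alpha_2\ge 0\big\}
            =[0,\infty)^2,
\]
% the closed first quadrant. It is exactly the intersection of the two
% half-spaces \{x_1\ge 0\} and \{x_2\ge 0\}, each of which contains
% the origin on its boundary, in accordance with the proposition.
```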
Let \begin{equation} \label{eq:upper-ball} B_1^{+}:=\big\{(x_1,x_2,\ldots,x_d): x_1^2+\cdots+x_d^2\leq 1,x_1\geq 0\big\} \end{equation} be the unit upper half-ball in $\R^{d}$, and let \begin{displaymath} \mathbb{S}^{d-1}_{+} :=\big\{(x_1,x_2,\ldots,x_d):x_1^2+\cdots+x_d^2 = 1,x_1\geq 0\big\} \end{displaymath} be the unit upper $(d-1)$-dimensional half-sphere. Further, let $\pi:\R^d\setminus\{0\}\to \Sphere$ be the mapping $\pi(x)=x/\|x\|$. \begin{corollary} Let $K=B_1^{+}$, $\GG=\so$ and $\TT=\{0\}$. Then, for arbitrary $A\subset K$, it holds \begin{equation}\label{eq:cones1} \conv_{K,\HH}(A)=\cl({\rm pos}(A))\cap B_1. \end{equation} Furthermore, if $A\neq\{0\}$, then \begin{equation}\label{eq:cones2} \conv_{K,\HH}(A)\cap\Sphere=\cl\big({\rm pos}(A)\big)\cap \Sphere =\cl\big({\rm pos}(\pi(A\setminus\{0\}))\big)\cap \Sphere, \end{equation} which is the closed spherical hull of the set $\pi(A\setminus\{0\})\subset \Sphere$. \end{corollary} \begin{proof} Note that $g(B_1^{+})=g(H_0^{+})\cap B_1$, for every $g\in\so$, where $H_0^+$ is defined by \begin{equation} \label{eq:upper-half} H_0^+=\big\{(x_1,x_2,\ldots,x_d)\in\R^d : x_1\geq 0\big\}. \end{equation} Thus, \begin{align*} \conv_{K,\HH}(A)&=\bigcap_{g\in\so,A\subset g(B_1^{+})}g(B_1^{+})\\ &=\bigcap_{g\in\so,A\subset g(H_0^{+})} \left(g(H_0^{+})\cap B_1\right) =\bigg(\bigcap_{g\in\so,A\subset g(H_0^{+})}g(H_0^{+})\bigg)\cap B_1, \end{align*} where we have used that $A\subset B_1$. By Proposition~\ref{prop:conic_hull} the right-hand side is $\cl({\rm pos}(A))\cap B_1$. The first equation in \eqref{eq:cones2} is a direct consequence of \eqref{eq:cones1}, while the second one follows from ${\rm pos}(A)={\rm pos}(A\setminus\{0\})={\rm pos}(\pi(A\setminus\{0\}))$. 
\end{proof} \section{\texorpdfstring{$(K,\HH)$}{(K,H)}-hulls of random samples from \texorpdfstring{$K$}{K}}\label{sec:random} From now on we additionally assume that $K\in\sK^d_{(0)}$, that is, $K$ is a convex compact set in $\R^d$ which contains the origin in the interior. Fix a complete probability space $(\Omega,\salg,\P)$. For $n\in\NN$, let $\Xi_n:=\{\xi_1,\xi_2,\ldots,\xi_n\}$ be a sample of $n$ independent copies of a random variable $\xi$ uniformly distributed on $K$. Put \begin{displaymath} Q_n:=\conv_{K,\HH}(\Xi_n) \end{displaymath} and \begin{equation} \label{eq:4} \XX_{K,\HH}(\Xi_n):=\big\{(x,g)\in\HH:\Xi_n\subseteq g(K+x)\big\} =K\ominus_{K,\HH} \Xi_n=K\ominus_{K,\HH} Q_n, \end{equation} where the last equality follows from Lemma~\ref{lem:minkowski}. We start with a simple lemma which shows that, for every $n\in\NN$, $\XX_{K,\HH}(\Xi_n)$ is a random closed subset of $\HH$ equipped with the relative topology induced by $\RM$, see Appendix for the definition of a random closed set. Here and in what follows $\Matr$ denotes the set of $d\times d$ matrices with real entries. Note that $Q_n$ is a.s.~closed by definition as intersection of closed sets. \begin{lemma} \label{lemma:XXn} For all $n\in\NN$, $\XX_{K,\HH}(\Xi_n)$ is a random closed set in $\HH$. \end{lemma} \begin{proof} Let $\XX_\xi:=\{(x,g)\in\HH:\xi\in g(K+x)\}$. For each compact set $L\subset\HH$, we have \begin{equation}\label{eq:is_random_set} \big\{\omega\in\Omega:\;\XX_{\xi(\omega)}\cap L\neq\emptyset \big\} =\big\{\omega\in\Omega:\;\xi(\omega)\in LK \big\}, \end{equation} where $LK:=\{g(z+x):(x,g)\in L, z\in K\}$. Note that $LK$ is a compact set, hence it is Borel, and the event on the right-hand side of \eqref{eq:is_random_set} is measurable. Thus, in view of \eqref{eq:is_random_set}, $\XX_\xi$ is a random closed set in the sense of Definition~1.1.1 in \cite{mo1}. 
Hence, \begin{displaymath} \XX_{K,\HH}(\Xi_n)=\XX_{\xi_1}\cap\cdots\cap \XX_{\xi_n} \end{displaymath} is also a random closed set, being a finite intersection of random closed sets, see Theorem~1.3.25 on \cite{mo1}. \end{proof} We are interested in the asymptotic properties of $\XX_{K,\HH}(\Xi_n)$, as $n\to\infty$. Note that the sequence of sets $(Q_n)$ is increasing in $n$ and, for every $n\in\NN$, $P_n:=\conv(\Xi_n)\subseteq Q_n$. Since $P_n$ converges almost surely to $K$ in the Hausdorff metric, as $n\to\infty$, the sequence $(Q_n)$ also converges almost surely to $K$. Since the sequence of sets $(\XX_{K,\HH}(\Xi_n))$ is decreasing in $n$, \begin{equation} \label{eq:x_n_converges_to_k_minus_k} \XX_{K,\HH}(\Xi_n) \;\downarrow\; (K\ominus_{K,\HH} K)=\big\{(x,g)\in\HH: K\subseteq g(K+x) \big\} \quad \text{a.s. as}\quad n\to\infty. \end{equation} Since we assume $(0,\II)\in \HH$, the set $K\ominus_{K,\HH} K$ contains $(0,\II)$. However, the set $K\ominus_{K,\HH} K$ may contain other points, e.g., all $(0,g)\in\HH$ such that $K\subset gK$. It is natural to ask whether it is possible to renormalise, in an appropriate sense, the set $\XX_{K,\HH}(\Xi_n)$ such that it would converge to a random limit? Before giving a rigorous answer to this question we find it more instructive to explain our approach informally. While doing this, we shall also recollect necessary concepts, and introduce some further notation. First of all, note that $$ \XX_{K,\HH}(\Xi_n)=\XX_{K,\R^d\times \gl}(\Xi_n)\cap \HH\quad\text{and}\quad K\ominus_{K,\HH} K=(K\ominus_{K,\R^d\times \gl} K)\cap \HH. $$ Thus, we can first focus on the special case $\HH=\R^d\times \gl$ and then derive the corresponding result for arbitrary $\HH$ by taking intersections. Denote \begin{displaymath} \XX_n:=\XX_{K,\R^d\times \gl}(\Xi_n). \end{displaymath} In order to quantify the convergence in \eqref{eq:x_n_converges_to_k_minus_k} and derive a meaningful limit theorem for $\XX_n$, we shall pass to tangent spaces. 
The vector space $\Matr$ is a tangent space to the Lie group $\gl$ at $\II$ and is, actually, the Lie algebra of $\gl$. However, for our purposes the multiplicative structure of the Lie algebra is not needed and we use only its linear structure as of a vector space over $\R$. Let $$ \exp: \Matr\to \gl $$ be the standard matrix exponent, and let $\VV$ be a sufficiently small neighbourhood of $\II$ in $\gl$, where the exponent is bijective, see, for example, Theorem 2.8 in \cite{Hall:2003}. Finally, let $\log:\VV\to \Matr$ be its inverse and define mappings $\widetilde{\log}: \R^d \times \VV \to \R^d \times \log\VV$ and $\widetilde{\exp}: \R^d \times \log \VV \to \R^d \times \VV$ by $$ \widetilde{\log}(x,g)=(x,\log g), \quad \widetilde{\exp}(x,h)=(x,\exp h), \quad x\in\R^d,\; g\in\VV,\; h\in \log\VV. $$ Using the above notation, we can write \begin{multline*} \widetilde{\log}(\XX_n\cap (\R^d \times \VV)) =\big\{(x,C)\in \R^d\times \log \VV : \Xi_n\subseteq \exp(C)(K+x)\big\}\\ =\big\{(x,C)\in \RM: \Xi_n\subseteq \exp(C)(K+x)\big\}\cap (\R^d\times \log \VV) =\fX_n \cap (\R^d\times \log \VV), \end{multline*} where we set $$ \fX_n:=\big\{(x,C)\in\RM:\Xi_n\subseteq \exp(C)(K+x) \big\}. $$ In the definition of $\fX_n$ the space $\RM$ should be regarded as a tangent vector space at $(0,\II)$ to the Lie group of all invertible affine transformations of $\R^d$. Similarly to Lemma~\ref{lemma:XXn}, it is easy to see that $\fX_n$ is a random closed set in $\RM$. Note that $\fX_n$ may be unbounded (in the product of the standard norm on $\R^d$ and some matrix norm on $\Matr$) and, in general, is not convex. We shall prove below, see Theorem~\ref{thm:main1}, that the sequence $(n\fX_n)$ converges in distribution to a nondegenerate random set $\check{\fZ}_K=\{-z:z\in\fZ_K\}$ as random closed sets, see the Appendix for necessary formalities. We pass from the random set $\fZ_K$ defined at \eqref{thm:main1:claim} to its reflected variant to simplify later notation. 
Moreover, for arbitrary convex compact subset $\fK$ in $\RM$ which contains the origin, the sequence of random sets $(n\fX_n\cap \fK)$ converges in distribution to $\check{\fZ}_K\cap \fK$ on the space of compact subsets of $\RM$ endowed with the usual Hausdorff metric. Since $\VV$ contains the origin in its interior, there exists an $n_0\in\NN$ such that \begin{equation}\label{eq:compact_inside_neihgborhood} n(\R^d\times \log \VV)\supseteq \fK \quad\text{and}\quad \R^d\times \VV\supseteq \widetilde{\exp}(\fK/n),\quad n\geq n_0. \end{equation} Hence, $$ n\fX_n\cap \fK=n\big(\fX_n \cap (\R^d\times \log \VV)\big) \cap \fK,\quad n\geq n_0. $$ and, therefore, $n\,\widetilde{\log}(\XX_n\cap (\R^d \times \VV))\cap \fK$ converges in distribution to $\check{\fZ}_K\cap \fK$, as $n\to\infty$. In particular, the above arguments show that the limit does not depend on the choice of $\VV$. Let us now explain the case of arbitrary $\HH\subset \R^d\times\gl$ containing $(0,\II)$ and introduce assumptions that we shall impose on $\HH$. Assume that there exist the following objects: \begin{itemize} \item a neighbourhood $\mathfrak{U}\subset \R^d\times \log\VV$ of $(0,0)$ in $\RM$; \item a neighbourhood $\mathbb{U}\subset \R^d\times \VV$ of $(0,\II)$ in $\R^d\times \gl$; \item a convex closed cone $\fC_{\HH}$ in $\RM$ with the apex at $(0,0)$; \end{itemize} such that \begin{equation}\label{eq:H_condition} \HH\cap \mathbb{U}=\widetilde{\exp}(\fC_{\HH}\cap \mathfrak{U}). \end{equation} Informally speaking, condition \eqref{eq:H_condition} means that locally around $(0,\II)$ the set $\HH$ is an image of a convex cone in the tangent space $\RM$ under the extended exponential map $\widetilde{\exp}$. The most important particular cases arise when $\HH$ is the product of a linear space $\TT$ in $\R^d$ and a Lie subgroup $\GG$ of $\gl$. In this situation $\fC_\HH$ is a linear subspace of $\RM$ which is the direct sum of $\TT$ and the Lie algebra $\fG$ of $\GG$. 
Furthermore, $\fC_\HH$ is a tangent space to $\HH$ (regarded as a product of smooth manifolds) at $(0,\II)$. In a more general class of examples, we allow $\fC_\HH$ to be the direct sum of $\TT$ and an arbitrary linear subspace of $\Matr$, which is not necessarily a Lie algebra. In the latter case, the second component of $\HH$ is not a Lie subgroup of $\GG$. Furthermore, $\fC_{\HH}$ can be a proper cone, that is, not a linear subspace, so the second components of $\HH$ do not form a group as well. For example, assume that $d=2$ and $\HH=\{0\}\times \GG$, where \begin{displaymath} \GG=\left\{ \begin{pmatrix} \lambda_1 & 0\\ 0 & \lambda_2 \end{pmatrix},\lambda_1,\lambda_2\in (0,1]\right\}. \end{displaymath} Then \eqref{eq:H_condition} holds for appropriate $\mathbb{U}$ and $\mathfrak{U}$ upon choosing \begin{displaymath} \fC_{\HH}=\{0\}\times \left\{ \begin{pmatrix} \mu_1 & 0\\ 0 & \mu_2 \end{pmatrix},\mu_1,\mu_2\leq 0\right\}. \end{displaymath} This example is important for the analysis of $(K,\HH)$-hulls because we naturally want to exclude transformations that enlarge $K$. Examples of a different kind, where $\HH$ is not a Lie subgroup, arise by taking $\fC_{\HH}$ to be an arbitrary linear subspace of $\RM$ which is not a Lie subalgebra. For an arbitrary convex compact subset $\fK$ of $\RM$ which contains the origin, the set $\fK\cap\fC_{\HH}$ is also convex compact and contains the origin. Furthermore, there exists an $n_0\in\NN$ such that $$ \fK\cap \fC_{\HH} \subset n(\fC_{\HH}\cap \mathfrak{U}) =n\,\widetilde{\log}(\HH\cap \mathbb{U}) \subset n\,\widetilde{\log}(\HH\cap (\R^d\times \VV)),\quad n\geq n_0. 
$$ Since $\fK\cap \fC_{\HH}$ is a convex compact set which contains the origin, \begin{align*} &\hspace{-1cm}n\,\widetilde{\log}\left(\XX_{K,\HH}(\Xi_n) \cap (\R^d\times \VV)\right)\cap(\fK\cap \fC_{\HH})\\ &=n\,\widetilde{\log}\left(\XX_n \cap (\HH \cap (\R^d\times \VV))\right)\cap (\fK\cap \fC_{\HH} )\\ &=n\fX_n \cap n\widetilde{\log}(\HH \cap (\R^d\times \VV)) \cap (\fK\cap \fC_{\HH} )=n\fX_n \cap (\fK\cap \fC_{\HH}) \end{align*} converges to $\check{\fZ}_K\cap \fK\cap \fC_{\HH}$, as $n\to\infty$. The limit here is also independent of $\VV$. Let us make a final remark in this informal discussion by connecting the convergence of the sequence $(n\,\widetilde{\log}\left(\XX_n\cap (\R^d\times \VV)\right))$ and relation \eqref{eq:x_n_converges_to_k_minus_k}. The above argument demonstrates that $\check{\fZ}_K$ necessarily contains a nonrandom set \begin{align} \label{eq:rec_cone_of_fZ} R_K:&=\liminf_{n\to\infty} \left(n\widetilde{\log} ((K\ominus_{K,\R^d\times\gl}K)\cap (\R^d\times \VV))\right)\notag \\ &=\bigcup_{k\geq 1}\bigcap_{n\geq k}\bigcap_{y\in K} \left\{(x,C)\in \RM: y\in \exp(C/n)(K+x/n)\right\}, \end{align} which is unbounded. As we shall show, the set $R_{K}$ is, indeed, contained in the recession cone of $\check{\fZ}_K$ which we identify in Proposition~\ref{prop:rec_cone} below. \section{Limit theorems for \texorpdfstring{$\XX_{K,\HH}(\Xi_n)$}{X{K,H}(Xi n)}} \label{sec:poiss-point-proc} Recall that $N(K,x)$ denotes the normal cone to $K$ at $x\in\partial K$, where $K\in\sK_{(0)}^d$. By $\Nor(K)$ we denote the normal bundle, that is, a subset of $\partial K\times\Sphere$, which is the family of $(x,N(K,x)\cap \Sphere)$ for $x\in\partial K$. It is known, see p.~84 in \cite{schn2}, that $K$ has the unique outer unit normal $u_K(x)$ at $x\in\partial K$ for almost all points $x$ with respect to the $(d-1)$-dimensional Hausdorff measure $\sH^{d-1}$. Denote the set of such points by $\Sigma_1(K)$, so $\Sigma_1(K):=\{x\in\partial K: {\rm dim}\;N(K,x)=1\}$. 
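Two standard illustrations (ours, for orientation): for the unit ball $B_1$ every boundary point $x$ has the unique outer normal $u_{B_1}(x)=x$, while for the square the four vertices have two-dimensional normal cones, so that
\begin{displaymath}
\Nor(B_1)=\big\{(u,u):u\in\Sphere\big\},\qquad
\Sigma_1\big([-1,1]^2\big)=\partial [-1,1]^2\setminus\big\{(\pm1,\pm1)\big\}.
\end{displaymath}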
Let $\Theta_{d-1}(K,\cdot)$ be the generalised curvature measure of $K$, see Section~4.2 in~\cite{schn2}. The following formula, which is a consequence of Theorem 3.2 in \cite{Hug:1998}, can serve as a definition and is very convenient for practical purposes. If $W$ is a Borel subset of $\R^{d}\times\Sphere$, then $$ \Theta_{d-1}\big(K,(\partial K\times \Sphere )\cap W\big) =\int_{\Sigma_1(K)} \one_{\{(x,u_K(x))\in W \}} {\rm d}\sH^{d-1}(x). $$ In particular, this formula implies that the support of $\Theta_{d-1}(K,\cdot)$ is a subset of $\Nor(K)$ and its total mass is equal to the surface area of $K$. Let $\sP_K:=\sum_{i\geq 1}\delta_{(t_i,\eta_i,u_i)}$ be the Poisson process on $(0,\,\infty)\times \Nor(K)$ with intensity measure $\mu$ being the product of Lebesgue measure on $(0,\infty)$ normalised by $V_d(K)^{-1}$ and the measure $\Theta_{d-1}(K,\cdot)$. If $K$ is strictly convex, $\sP_K$ can be equivalently defined as a Poisson process $\{(t_i,F(K,u_i),u_i), i\geq1\}$, where $\{(t_i,u_i),i\geq1\}$ is the Poisson process on $(0,\infty)\times\Sphere$ with intensity being the product of the Lebesgue measure on the half-line normalised by $V_d(K)^{-1}$ and the surface area measure $S_{d-1}(K,\cdot):=\Theta_{d-1}(K,\R^d\times\cdot)$ of $K$. The notion of convergence of random closed sets in distribution with respect to the Fell topology is recalled in the Appendix. For $A\in\RM$, denote by $$ \check{A}:=\big\{(-x,-C):(x,C)\in A\big\} $$ the reflection of $A$ with respect to the origin in $\RM$. \begin{theorem}\label{thm:main1} Assume that $K\in\sK^d_{(0)}$, and let $\fF$ be a convex closed set in $\RM$ which contains the origin. 
The sequence of random closed sets $((n\fX_n)\cap\fF)_{n\in\NN}$ converges in distribution in the space of closed subsets of $\RM$ endowed with the Fell topology to a random convex closed set $\check{\fZ}_K\cap\fF$, where \begin{equation} \label{thm:main1:claim} \fZ_{K}:=\bigcap_{(t,\eta,u)\in\sP_K} \Big\{(x,C)\in \RM:\langle C \eta+x,u\rangle \leq t\Big\}. \end{equation} \end{theorem} \begin{remark} While $\fX_n$ is not convex in general, the set $\check{\fZ}_K$ from \eqref{thm:main1:claim} is almost surely convex as an intersection of convex sets. \end{remark} Letting $\fF=\RM$ in Theorem~\ref{thm:main1} shows that $n\fX_n$ converges in distribution to $\check{\fZ}_K$. If $\fF=\fK$ is a convex compact set which contains the origin in $\RM$, the theorem covers the setting of Section~\ref{sec:random}. Taking into account the discussion there, we obtain the following. \begin{corollary} Assume that $K\in\sK^d_{(0)}$, and let $\HH$ be a subset of $\R^d\times \gl$ which satisfies \eqref{eq:H_condition}. Then, the sequence of random closed sets \begin{equation} \label{eq:xnH} n\,\widetilde{\log}\left(\XX_{K,\HH}(\Xi_n)\cap (\R^d\times \VV)\right)\cap \fC_{\HH},\quad n\in\NN, \end{equation} converges in distribution in the space of closed subsets of $\RM$ endowed with the Fell topology to a random convex closed set $\check{\fZ}_{K}\cap \fC_{\HH}$. \end{corollary} The subsequent proof of Theorem~\ref{thm:main1} heavily relies on a series of auxiliary results on the properties of the Fell topology and convergence of random closed sets, which are collected in the Appendix. We encourage the readers to acquaint themselves with the Appendix before proceeding further. We start the proof of Theorem~\ref{thm:main1} with an auxiliary result which is an extension of Theorem~5.6 in \cite{MarMol:2021}. Let $\sK^d_0$ be the space of convex compact sets in $\R^d$ containing the origin and endowed with the Hausdorff metric. 
Let $L^{o}$ denote the polar set of a convex closed set $L$, that is, \begin{equation}\label{eq:polar} L^o:=\{x\in\R^d: h(L,x)\leq 1\}. \end{equation} In what follows we shall frequently use the relation \begin{equation}\label{eq:half-space_polar} [0,t^{-1}u]^o=H^{-}_{u}(t),\quad t>0,\quad u\in\Sphere, \end{equation} where $$ H^{-}_{u}(t):=\{x\in\R^d:\langle x,u\rangle\leq t\},\quad t\in\R,\quad u\in\Sphere. $$ From Theorem~5.6 in \cite{MarMol:2021} we know that \begin{displaymath} \sum_{k=1}^{n}\delta_{n^{-1}(K-\xi_k)^{o}}\;\dodn\; \sum_{(t,\eta,u)\in\sP_K}\delta_{[0,\,t^{-1}u]} \quad \text{as}\quad n\to\infty, \end{displaymath} where the convergence is understood as the convergence in distribution on the space of point measures on\footnote{It would be more precise to write $\sK^d_0\setminus\{\{0\}\}$ instead of $\sK^d_0\setminus\{0\}$, but we prefer the latter notation for the sake of notational simplicity.} $\sK^d_0\setminus\{0\}$ endowed with the vague topology. The limiting point process consists of random segments $[0,\,x]$ with $x=t^{-1}u$ derived from the first and third coordinates of $\sP_K$. Regarding $\xi_k$ as a mark of $n^{-1}(K-\xi_k)^{o}$ for $k=1,\ldots,n$, we have the following convergence of marked point processes, which strengthens the above-mentioned result from \cite{MarMol:2021}. \begin{lemma}\label{lem:marked_point_processes} Assume that $K\in\sK_{(0)}^d$. Then \begin{equation} \label{eq:17} \sum_{k=1}^{n}\delta_{(n^{-1}(K-\xi_k)^{o},\,\xi_k)} \;\dodn\; \sum_{(t,\eta,u)\in \sP_K}\delta_{([0,t^{-1}u],\eta)} \quad \text{as}\quad n\to\infty, \end{equation} where the convergence is understood as the convergence in distribution on the space of point measures on $(\sK^d_0\setminus\{0\})\times \R^d$ endowed with the vague topology. \end{lemma} \begin{proof} Let $p(\partial K,\cdot)$ be the metric projection onto $\partial K$, that is, $p(\partial K,x)$ is the set of points of $\partial K$ closest to $x$.
We start by noting that, for the limiting Poisson process, the following equality holds for all $L\in\sK^d_0\setminus\{0\}$ and every Borel set $R\subseteq\R^d$: \begin{align*} \P\big\{[0,t^{-1}u]\subset L \;\text{or}\; & \eta\notin p(\partial K,R)\text{ for all } (t,\eta,u)\in\sP_K\big\}\\ &=\exp\big(-\mu(\{(t,\eta,u): [0,t^{-1}u]\not\subset L, \eta\in p(\partial K,R)\})\big)\\ &=\exp\big(-\mu(\{(t,\eta,u): L^{o}\not\subset H^{-}_{u}(t), \eta\in p(\partial K,R)\})\big)\\ &=\exp\big(-\mu(\{(t,\eta,u): h(L^o,u)>t, \eta\in p(\partial K,R)\})\big)\\ &=\exp\Big(-\frac{1}{V_d(K)} \int_{\Nor(K)} \one_{\{x\in p(\partial K,R)\}}h(L^o,u) \Theta_{d-1}(K,{\rm d}x\times {\rm d}u)\Big). \end{align*} According to Proposition~\ref{prop:marked-sets} in the Appendix, see, in particular, Eq.~\eqref{eq:10v}, we need to show that, for every $L\in\sK^d_0\setminus\{0\}$ and every Borel $R\subseteq\R^d$, as $n\to\infty$, \begin{equation}\label{eq:kid_rataj1} n\P\{n^{-1}(K-\xi)^{o}\not\subseteq L,\xi\in R\} \;\longrightarrow\; \frac{1}{V_d(K)}\int_{\Nor(K)}\one_{\{x\in p(\partial K,R)\}} h(L^{o},u)\Theta_{d-1}(K,{\rm d}x\times{\rm d}u). \end{equation} Note that \begin{align*} \P\{n^{-1}(K-\xi)^{o} &\not\subseteq L,\xi\in R\} =\P\{n^{-1}L^{o}\not\subseteq (K-\xi),\xi\in R\}\\ &=\P\{\xi\not\in K\ominus n^{-1}L^{o},\xi\in R\} =\frac{V_d(R\cap (K\setminus(K\ominus n^{-1}L^{o})))}{V_d(K)}. \end{align*} Applying Theorem~1 in \cite{kid:rat06} with $C=p(\partial K,R)$, $A=K$, $P=B=W=\{0\}$, $Q=-(L^{o})$ and $\eps=n^{-1}$, we obtain \eqref{eq:kid_rataj1}. The proof is complete. \end{proof} Applying the continuous mapping theorem to the convergence~\eqref{eq:17} and using Lemma~\ref{lemma:polar-map}(ii), we obtain the convergence of marked point processes \begin{equation} \label{eq:marked_point_processes_conv2} \sum_{k=1}^{n}\delta_{(n(K-\xi_k),\,\xi_k)}\; \dodn\;\sum_{(t,\eta,u)\in\sP_K}\delta_{(H^{-}_{u}(t),\,\eta)} \quad \text{as}\quad n\to\infty.
\end{equation} \begin{proof}[Proof of Theorem \ref{thm:main1}] According to Lemma~\ref{lemma:restriction} in the Appendix it suffices to show that $(n\fX_n)\cap\fF\cap\fLoc$ converges to $\check{\fZ}_K\cap \fF\cap \fLoc$ for an arbitrary compact convex subset $\fLoc$ of $\RM$ which contains the origin in its interior, and then pass to the limit $\fLoc\uparrow (\RM)$. We have \begin{align*} (n\fX_n)\cap\fLoc &=\Big\{(x,C)\in\fLoc:\Xi_n\subseteq \exp(C/n)(K+x/n) \Big\}\\ &=\bigcap_{k=1}^{n}\Big\{(x,C)\in\fLoc:\xi_k\in \exp(C/n)(K+x/n) \Big\}\\ &=\bigcap_{k=1}^{n}\Big\{(x,C)\in\fLoc:\exp(-C/n) \xi_k\in K+x/n \Big\}\\ &=\bigcap_{k=1}^{n}\Big\{(x,C)\in\fLoc:\big(n(\exp(-C/n)- \II)\big)\xi_k-x \in n(K-\xi_k)\Big\}. \end{align*} Let \begin{displaymath} a_m:=\sup_{n\geq m}\sup_{(x,C)\in\fLoc,y\in K} \big\|n\big(\exp(-C/n)- \II\big)y+Cy \big\|,\quad m\in\NN. \end{displaymath} Note that $a_m\to0$, as $m\to\infty$, because $n\big(\exp(-C/n)-\II\big)\to -C$ locally uniformly in $C$, as $n\to\infty$, the set $\fLoc$ is compact in $\RM$, and $K$ is compact in $\R^d$. Let $B_{a_m}$ be the closed ball of radius $a_m$ in $\R^d$ centred at the origin. For each $m\in\NN$ and $n\geq m$, we have \begin{equation} \label{eq:5} \fY_{m,n}^-\subset \big((n\fX_n)\cap\fLoc\big) \subset\fY_{m,n}^+, \end{equation} where \begin{displaymath} \fY_{m,n}^+:=\bigcap_{k=1}^{n}\big\{(x,C)\in\fLoc: -C\xi_k-x \in n(K-\xi_k)+B_{a_m}\big\} \end{displaymath} and \begin{equation} \label{eq:minus} \fY_{m,n}^-:=\bigcap_{k=1}^{n}\big\{(x,C)\in\fLoc: -C\xi_k-x +B_{a_m} \subset n(K-\xi_k)\big\}. \end{equation} The advantage of the lower and upper bounds in \eqref{eq:5} is the convexity of $\fY_{m,n}^{\pm}$, which makes their analysis simpler. We aim to apply Lemma~\ref{lemma:approx} from the Appendix with $Y_{n,m}^{\pm}=\fY_{m,n}^{\pm}\cap \fF$ and $X_n=(n\fX_n)\cap \fLoc\cap \fF$. We begin with $\fY_{m,n}^{+}$, the simpler of the two. Let $\fL$ be a compact subset of $\fLoc$.
Denote \begin{align*} M_m^+(\fL)&:=\big\{(L,y)\in \sK_0^d\times\R^d: -C y-x \in L^o+B_{a_m}\;\text{for all}\; (x,C)\in\fL\big\},\\ M_m^-(\fL)&:=\big\{(L,y)\in \sK_0^d\times\R^d: -C y-x + B_{a_m}\subset L^o \;\text{for all}\; (x,C)\in\fL\big\},\\ M(\fL)&:=\big\{(L,y)\in \sK_0^d\times\R^d: -C y-x \in L^o \;\text{for all}\; (x,C)\in\fL\big\}. \end{align*} Then \begin{displaymath} \Prob{\fL\subset \fY_{m,n}^{\pm}} =\Prob{(n^{-1}(K-\xi_i)^o,\xi_i)\in M_m^{\pm}(\fL),i=1,\dots,n}. \end{displaymath} By Lemma~\ref{lem:marked_point_processes}, the point process $\{(n^{-1}(K-\xi_i)^o,\xi_i),i=1,\dots,n\}$ converges in distribution to the Poisson process $\big\{([0,t^{-1}u],\eta):(t,\eta,u)\in\sP_K\big\}$. The sets $M(\fL)$, $M_m^{\pm}(\fL)$ are continuity sets for the distribution of the limiting Poisson process. Indeed, for each $(t,\eta,u)\in(0,\infty)\times\Nor(K)$, \begin{align*} &\Big\{([0,t^{-1}u],\eta)\in \partial M_m^+(\fL)\Big\}\\ &=\Big\{-C \eta-x\in H^{-}_{u}(t+a_m)\; \text{for all}\; (x,C)\in\fL \Big\} \setminus \Big\{-C \eta-x\in \Int H^{-}_{u}(t+a_m)\;\text{for all}\; (x,C)\in\fL \Big\}\\ &=\Big\{\langle -C\eta-x,u\rangle \leq t+a_m \;\text{for all}\; (x,C)\in\fL \Big\} \setminus \Big\{\langle -C\eta-x,u\rangle< t+a_m \;\text{for all}\; (x,C)\in\fL \Big\}\\ &=\Big\{\langle -C\eta-x,u\rangle \leq t+a_m \;\text{for all}\; (x,C)\in\fL\; \text{and}\;\langle -C\eta-x,u\rangle = t+a_m \;\text{for some}\; (x,C)\in\fL \Big\}, \end{align*} where $\Int$ denotes the topological interior. Since the probability of the latter event for some $(t,\eta,u)\in\sP_K$ vanishes, it follows that $M_m^{+}(\fL)$ is a continuity set for the Poisson point process $\{([0,t^{-1}u],\eta):(t,\eta,u)\in\sP_K\}$. Letting $a_m=0$, we obtain that $M(\fL)$ is also a continuity set. The argument for $M_m^-(\fL)$ is similar by replacing $a_m$ with $(-a_m)$. 
Thus, for all $m\in\NN$, \begin{displaymath} \Prob{\fL\subset \fY_{m,n}^+} \to \Prob{\big\{([0,t^{-1}u],\eta):(t,\eta,u)\in\sP_K\big\}\subset M_m^+(\fL)} =\Prob{\fL\subset \fY_m^{+}}\quad \text{as }\quad n\to\infty, \end{displaymath} where \begin{displaymath} \fY_m^+:=\bigcap_{(t,\eta,u)\in\sP_K} \Big\{(x,C)\in\fLoc:-C\eta-x \in H^-_u(t+a_m) \Big\}. \end{displaymath} The random closed sets $\fY_{m,n}^+$ and $\fY_m^+$ are convex and almost surely contain a neighbourhood of the origin in $\RM$, hence, are regular closed, see the Appendix for the definition. By Theorem~\ref{thm:inclusion} applied to the space $\RM$, the random convex set $\fY_{m,n}^+$ converges in distribution to $\fY_m^+$. Since $\fY_{m,n}^+\dodn \fY_m^+$, as $n\to\infty$, and the involved sets almost surely contain the origin in their interiors, Corollary~\ref{cor:intersection} yields that $(\fY_{m,n}^+\cap\fF)\dodn (\fY_m^+\cap\fF)$, as $n\to\infty$, for each convex closed set $\fF$ which contains the origin in $\RM$ (not necessarily as an interior point). Thus, we have checked part (i) of Lemma~\ref{lemma:approx}. We proceed with checking part (ii) of Lemma~\ref{lemma:approx} with $Y_m^{-}=\fY_m^-$, where \begin{displaymath} \fY_m^-:=\bigcap_{(t,\eta,u)\in\sP_K} \Big\{(x,C)\in\fLoc:-C\eta-x \in H^-_u(t-a_m) \Big\}. \end{displaymath} Note that the random sets $\fY_{m,n}^-$ and $\fY_m^-$ may be empty and otherwise not necessarily contain the origin. We need to check~\eqref{eq:11}, which in our case reads as follows \begin{equation}\label{eq:main_theorem_proof1} \Prob{\fY_{m,n}^-\cap\fF\cap\fL\neq\emptyset,0\in \fY_{m,n}^-} \to \Prob{\fY_{m}^-\cap\fF\cap\fL\neq\emptyset,0\in \fY_{m}^-} \quad\text{as}\quad n\to\infty, \end{equation} for all compact sets $\fL$ which are continuity sets of $\fY_{m}^-\cap\fF$. We shall prove~\eqref{eq:main_theorem_proof1} for all compact sets $\fL$ in $\RM$. 
To this end, we shall employ Lemma~\ref{lemma:intersection} and divide the derivation of~\eqref{eq:main_theorem_proof1} into several steps, each devoted to checking one condition of Lemma~\ref{lemma:intersection}. \noindent {\sc Step 1.} Let us check that, for sufficiently large $n\in\NN$, \begin{displaymath} \Prob{(0,0)\in \fY_{m,n}^-}=\Prob{(0,0)\in \Int\fY_{m,n}^-}>0,\quad m\in\NN. \end{displaymath} Since the interior of a finite intersection is the intersection of the interiors, and using independence, it suffices to check this for each of the sets which appear in the intersection in~\eqref{eq:minus}. If $(0,0)$ belongs to $Y_k:=\big\{(x,C)\in\fLoc: -C\xi_k-x +B_{a_m} \subset n(K-\xi_k)\big\}$, then $B_{a_m} \subset n(K-\xi_k)$. Since $\xi_k$ is uniform on $K$, we have \begin{displaymath} \Prob{B_{a_m} \subset n(K-\xi_k)}=\Prob{B_{a_m} \subset n\Int(K-\xi_k)} \end{displaymath} for all $n$. If $B_{a_m} \subset n\Int(K-\xi_k)$, then $-C\xi_k-x +B_{a_m} \subset n(K-\xi_k)$ for all $x$ and $C$ from a sufficiently small neighbourhood of the origin in $\RM$. Furthermore, \begin{displaymath} \Prob{(0,0)\in \fY_{m,n}^-}=\Prob{B_{a_m} \subset n(K-\xi_k),k=1,\dots,n} =\Prob{B_{a_m/n}\subset K\ominus\Xi_n}>0 \end{displaymath} for all sufficiently large $n$. \noindent {\sc Step 2.} Let us check that, for each $m\in\NN$, \begin{displaymath} \Prob{(0,0)\in \fY_{m}^-}=\Prob{(0,0)\in \Int\fY_{m}^-}>0. \end{displaymath} The equality above follows from the observation that the origin lies on the boundary of $\big\{(x,C)\in\fLoc:-C\eta-x \in H^-_u(t-a_m) \big\}$ only if $t=a_m$ for some $(t,\eta,u)\in\sP_K$, which happens with probability zero. Furthermore, $(0,0)\in \fY_{m}^-$ if $t\geq a_m$ for all $(t,\eta,u)\in\sP_K$, which has positive probability.
{\sc Step 3.} By an argument similar to the one used for $\fY_{m,n}^+$, for every compact subset $\fL$ of $\fLoc$ and $m\in\NN$, it holds that $$ \Prob{\fL\subset \fY_{m,n}^-} \to \Prob{\big\{([0,t^{-1}u],\eta):(t,\eta,u)\in\sP_K\big\} \subset M_m^-(\fL)}=\Prob{\fL\subset \fY_m^{-}} \quad \text{as }\quad n\to\infty. $$ Summarising, we have checked all conditions of Lemma~\ref{lemma:intersection}. This finishes the proof of~\eqref{eq:main_theorem_proof1} and shows that all the conditions of part (ii) of Lemma~\ref{lemma:approx} hold. It remains to note that \begin{equation*} (\fY_m^+\cap\fF)\downarrow (\check{\fZ}_K\cap\fLoc\cap\fF), \quad(\fY_{m}^-\cap\fF)\uparrow(\check{\fZ}_K\cap\fLoc\cap\fF) \quad \text{a.s. as}\quad m\to\infty, \end{equation*} in the Fell topology, and that $$ \lim_{m\to\infty}\Prob{0\in \fY_{m}^-}=1. $$ Thus, by Lemma~\ref{lemma:approx}, $(n\fX_n)\cap\fLoc\cap\fF$ converges in distribution to $\check{\fZ}_K\cap\fLoc\cap\fF$, as $n\to\infty$. By Lemma~\ref{lemma:restriction}, $(n\fX_n)\cap\fF$ converges in distribution to $\check{\fZ}_K\cap\fF$. \end{proof} \section{Properties of the set \texorpdfstring{$\fZ_K$}{ZK}} \label{sec:zero-cells-matrix} \subsection{Boundedness and the recession cone} The random set $\fZ_{K}$ is a subset of the product space $\RM$. The latter space can be turned into a real Euclidean vector space with the inner product given by \begin{displaymath} \langle (x,C_1), (y,C_2) \rangle_1 :=\langle x,y\rangle +\Tr (C_1C_2^{\top}), \quad x,y\in\R^d,\quad C_1,C_2\in\Matr, \end{displaymath} where $\Tr$ denotes the trace of a square matrix and $A^{\top}$ is the transpose of $A\in\Matr$.
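The pairing of the matrix components in this inner product reduces to an elementary trace identity; as a sketch, under the convention (assumed here) that the tensor product $\eta\otimes u$ has entries $(\eta\otimes u)_{ij}=u_i\eta_j$,
\begin{displaymath}
\big\langle (x,C),(u,\eta\otimes u)\big\rangle_1
=\langle x,u\rangle+\Tr\big(C(\eta\otimes u)^{\top}\big)
=\langle x,u\rangle+\sum_{i,j}C_{ij}\eta_j u_i
=\langle C\eta+x,u\rangle
\end{displaymath}
for all $x,u,\eta\in\R^d$ and $C\in\Matr$.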
In terms of this inner product the set $\fZ_K$ can be written as \begin{equation}\label{eq:zero_cell_general} \fZ_{K}=\bigcap_{(t,\eta,u)\in\sP_K}\Big\{(x,C)\in\RM :\langle (x,C),(u,\eta\otimes u)\rangle_1\leq t\Big\} =\bigcap_{(t,\eta,u)\in\sP_K}H^{-}_{(u,\eta\otimes u)}(t), \end{equation} where $H^{-}_{(u,\eta\otimes u)}(t)$ is a closed half-space of $\RM$ containing the origin, and $\eta\otimes u$ is the tensor product of $\eta$ and $u$. The boundaries of $H^{-}_{(u,\eta\otimes u)}(t)$, $(t,\eta,u)\in\sP_K$, constitute a Poisson hyperplane process in $\RM$; the induced subdivision of $\RM$ is called a Poisson hyperplane tessellation. The random set obtained as the intersection of the half-spaces $H^{-}_{(u,\eta\otimes u)}(t)$, $(t,\eta,u)\in\sP_K$, is called the zero cell, see Section~10.3 in~\cite{sch:weil08}. The intensity measure of this tessellation is the measure on the affine Grassmannian obtained as the product of the Lebesgue measure on $\R_+$ (normalised by $V_d(K)$) and the measure $\nu_K$ obtained as the push-forward of the generalised curvature measure $\Theta_{d-1}(K,\cdot)$ under the map $\Nor(K)\ni (x,u)\mapsto (u,x\otimes u)\in \RM$. If, for example, $K=B_1$ is the unit ball, then $\nu_K$ is the push-forward of the $(d-1)$-dimensional Hausdorff measure on the unit sphere $\Sphere$ under the map $u\mapsto (u,u\otimes u)$. For a strictly convex and smooth body $K$, the positive cone generated by $\{x\otimes u: (x,u)\in \Nor(K)\}$ is called the normal bundle cone of $K$, see \cite{gruber2014normal}. Note that the set $\fZ_K$ is almost surely convex, closed and unbounded. Thus, it is natural to consider the recession cone of $\fZ_K$, which is formally defined as $$ {\rm rec}(\fZ_K):=\big\{(x,C)\in \RM: (x,C)+\fZ_K\subseteq \fZ_K\big\}. $$ For instance, $\fZ_K$ always contains $(0,-\lambda\II)$ for all $\lambda\geq 0$, and the recession cone contains the ray $\{(0,-\lambda\II):\lambda\geq0\}$.
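For instance, for $\lambda\geq0$ and any $(t,\eta,u)\in\sP_K$ one has
\begin{displaymath}
\big\langle (-\lambda\II)\eta+0,\,u\big\rangle=-\lambda\langle \eta,u\rangle=-\lambda\, h(K,u)\leq 0\leq t,
\end{displaymath}
because $\langle\eta,u\rangle=h(K,u)>0$ for $(\eta,u)\in\Nor(K)$ when $K$ contains the origin in its interior; this is a direct verification (ours) that $(0,-\lambda\II)\in\fZ_K$ and, since the constraints are linear, that $(0,-\lambda\II)\in{\rm rec}(\fZ_K)$ for all $\lambda\geq0$.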
\begin{proposition}\label{prop:rec_cone} The set $R_K$ defined at \eqref{eq:rec_cone_of_fZ} is contained in the following set \begin{equation}\label{eq:rec_cone_of_fZ_alt} T_K:=\bigcap_{y\in K}\left\{(x,C)\in \RM: -Cy-x\in S(K,y)\right\}, \end{equation} which is a convex closed cone in $\RM$. Furthermore, with probability one, \begin{equation}\label{eq:rec_cone_claim_1} \check{T}_K\subset {\rm rec}(\fZ_K). \end{equation} Moreover, if $K$ is smooth, then $\check{T}_K={\rm rec}(\fZ_K)$, and, with $\HH$ satisfying \eqref{eq:H_condition}, \begin{equation}\label{eq:rec_cone_claim_2} {\rm rec}(\fZ_K\cap\fC_{\HH})=\check{T}_K\cap\fC_{\HH}. \end{equation} In particular, the limit $\check{\fZ}_K\cap\fC_{\HH}$ of $(n\,\widetilde{\log}\left(\XX_{K,\HH}(\Xi_n)\cap (\R^d\times \VV)\right))\cap\fC_{\HH}$ is a random compact set with probability one if and only if $\check{T}_K\cap\fC_{\HH}=\{(0,0)\}$. \end{proposition} \begin{proof} It is clear that $R_K\subseteq \bigcap_{y\in K} R_{K,y}$, where \begin{displaymath} R_{K,y}:=\bigcup_{k\geq 1}\bigcap_{n\geq k} \left\{(x,C)\in \RM: y\in \exp(C/n)(K+x/n)\right\}. \end{displaymath} A pair $(x,C)\in\RM$ lies in $R_{K,y}$ if and only if there exists a $k\in\NN$ such that $\exp(-C/n)y-x/n\in K$ for all $n\geq k$ or, equivalently, \begin{displaymath} n(\exp(-C/n)-\II)y-x\in n(K-y)\quad\text{for all}\quad n\geq k. \end{displaymath} Letting $n\to\infty$ and using that $\limsup_{n\to\infty}n(K-y)=S(K,y)$, we obtain $-Cy-x\in S(K,y)$. Thus, \begin{displaymath} R_{K,y}\subset \left\{(x,C)\in \RM: -Cy-x\in S(K,y)\right\}=:T_{K,y}, \end{displaymath} so that $R_K\subseteq T_K$. Since $T_{K,y}$ is a convex closed cone for all $y\in K$, the set $T_K$ is a convex closed cone as well. In order to check \eqref{eq:rec_cone_claim_1}, note that $(N(K,y))^{o}=S(K,y)$. Hence, $-Cy-x\in S(K,y)$ if and only if $\langle Cy+x,u\rangle\geq 0$ for all $u\in N(K,y)$.
Therefore, \begin{align*} T_K&=\bigcap_{y\in K}\bigcap_{u\in N(K,y)} \left\{(x,C)\in \RM: \langle Cy+x,u\rangle\geq 0\right\}\\ &=\bigcap_{(y,u)\in \Nor(K)}\left\{(x,C)\in \RM: \langle Cy+x,u\rangle\geq 0\right\}, \end{align*} where we have used that $N(K,y)=\{0\}$ if $y\in \Int K$. It follows from well-known results on recession cones, see p.~62 in \cite{Rockafellar:1972}, that \begin{align*} {\rm rec}(\fZ_{K}) &=\bigcap_{(t,\eta,u)\in\sP_K}\Big\{(x,C)\in\RM: \langle (x,C),(u,\eta\otimes u)\rangle_1\leq 0\Big\}\\ &=\bigcap_{(t,\eta,u)\in\sP_K}\Big\{(x,C)\in\RM: \langle C\eta+x,u\rangle\leq 0\Big\}. \end{align*} This immediately yields that $\check{T}_K\subseteq {\rm rec}(\fZ_{K})$. To see the converse inclusion for smooth $K$, note that the set \begin{displaymath} \{(\eta,u)\in \Nor(K):(t,\eta,u)\in \sP_K\text{ for some }t>0\} \end{displaymath} is a.s.~dense in $\Nor(K)=\{(x,u_K(x)):x\in\partial K\}$, where $u_K(x)$ is the unique unit outer normal to $K$ at $x$, see Lemma~4.2.2 and Theorem~4.5.1 in \cite{schn2}. Thus, with probability one, for every $(x,C)\in {\rm rec}(\fZ_{K})$ and $(y,u)\in \Nor(K)$ there exists a sequence $(\eta_n,u_n)$ such that $(\eta_n,u_n) \to (y,u)$, as $n\to\infty$, and $\langle C\eta_n+x,u_n\rangle\leq 0$. Hence, $\langle Cy+x,u\rangle\leq 0$ and $(x,C)\in \check{T}_K$. Finally, relation \eqref{eq:rec_cone_claim_2} follows from Corollary~8.3.3 in \cite{Rockafellar:1972}, since \begin{displaymath} {\rm rec}(\fZ_K\cap \fC_{\HH}) ={\rm rec}(\fZ_K)\cap {\rm rec}(\fC_{\HH}) ={\rm rec}(\fZ_K)\cap \fC_{\HH}. \qedhere \end{displaymath} \end{proof} Further information on the properties of $\fZ_K$ is encoded in its polar set, which takes the following rather simple form: \begin{displaymath} \fZ_K^o=\conv \Big(\bigcup_{(t,\eta,u)\in\sP_K} [0,t^{-1}(u,(\eta\otimes u))]\Big), \end{displaymath} as easily follows from \eqref{eq:half-space_polar}. Since $\fZ_K$ a.s.~contains the origin in its interior, $\fZ_K^o$ is a.s.~compact.
Note that $\fZ_K^o$ is a subset of the Cartesian product of $\R^d$ and Gruber's normal bundle cone, see \cite{gruber2014normal}. The projection of $\fZ_K^o$ on the first factor $\R^d$ is a random polytope with probability one, which was recently studied in \cite{MarMol:2021}, see Section~5.1 therein. \subsection{Affine transformations of \texorpdfstring{$K$}{K}} Let us now derive various properties of $\fZ_K$ with respect to transformations of $K$. First of all, it is easy to see that $\fZ_{rK}$ coincides in distribution with $r^{-1}\fZ_K$, for every fixed $r>0$. Let $A\in \ortho$ be a fixed orthogonal matrix. Note that the point process $\sP_{AK}$ has the same distribution as the image of $\sP_K$ under the map $(t,\eta,u)\mapsto(t,A\eta,Au)$. Then, with $\od$ denoting equality of distributions, \begin{align*} \fZ_{AK}&\od \bigcap_{(t,\eta,u)\in\sP_K} \Big\{(x,C)\in\RM:\langle C A\eta+x,Au\rangle \leq t\Big\}\\ &=\bigcap_{(t,\eta,u)\in\sP_K} \Big\{(x,C)\in\RM:\langle A^\top C A\eta+ A^\top x,u\rangle \leq t\Big\}\\ &=\bigcap_{(t,\eta,u)\in\sP_K} \Big\{(Ay,ABA^{\top})\in\RM:\langle B\eta+ y,u\rangle \leq t\Big\}, \end{align*} so that $\fZ_{AK}$ has the same distribution as $\fZ_K$ transformed using the map $\mathcal{O}_A:(x,C)\to (Ax,ACA^{\top})$, which is orthogonal with respect to the inner product $\langle\cdot,\cdot\rangle_1$ in $\RM$. In particular, if $K$ is invariant under $A$, then the distribution of $\fZ_K$ is invariant under $\mathcal{O}_A$. Most importantly, if $K$ is a ball, then the distribution of $\fZ_K$ is invariant under $\mathcal{O}_A$ for any $A\in\ortho$. 
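For completeness, the orthogonality of $\mathcal{O}_A$ with respect to $\langle\cdot,\cdot\rangle_1$ can be verified directly: for $A\in\ortho$,
\begin{displaymath}
\big\langle \mathcal{O}_A(x,C_1),\mathcal{O}_A(y,C_2)\big\rangle_1
=\langle Ax,Ay\rangle+\Tr\big(AC_1A^{\top}AC_2^{\top}A^{\top}\big)
=\langle x,y\rangle+\Tr\big(C_1C_2^{\top}\big)
=\big\langle (x,C_1),(y,C_2)\big\rangle_1,
\end{displaymath}
using $A^{\top}A=\II$ and the cyclic invariance of the trace.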
If $K$ is translated by $v\in\R^d$, then \begin{align*} \fZ_{K+v}&\od \bigcap_{(t,\eta,u)\in\sP_K} \Big\{(x,C)\in\RM:\langle C\eta,u\rangle +\langle Cv,u\rangle+\langle x,u\rangle\leq t\Big\}\\ &=\bigcap_{(t,\eta,u)\in\sP_K} \Big\{(x-Cv,C)\in\RM:\langle C\eta,u\rangle +\langle x,u\rangle\leq t\Big\}, \end{align*} meaning that $\fZ_{K+v}$ is the image of $\fZ_K$ under the linear operator $(x,C)\mapsto (x-Cv,C)$. \section{Examples}\label{sec:examples} Throughout this section $A=\Xi_n$ is a sample from the uniform distribution on $K$. If $\GG$ consists of the unit matrix and $\TT=\R^d$, then Theorem~\ref{thm:main1} turns into Theorem~5.1 of \cite{MarMol:2021}. Another object which has been recently treated in the literature is given in Example~\ref{ex:cones} below. Consider further examples involving nontrivial matrix groups. \begin{example}[General linear group] Let $\GG$ be the general linear group, so that $\fG$ is the family $\Matr$. If $\TT=\R^d$, Proposition~\ref{prop:largest_group} shows that $Q_n=\conv(\Xi_n)$ for every choice of $K\in \sK^d_{(0)}$. Assume that $\TT=\{0\}$ and let $K$ be the unit ball $B_1$. Then $Q_n$ is strictly larger than $\conv(\Xi_n)$ with probability $1$. Indeed, it is clear that $Q_n\supseteq \conv(\Xi_n)$, and the inclusion is strict because the set $Q_n$ is symmetric with respect to the origin, while the set $\conv(\Xi_n)$ is almost surely not. Choose $\fC_{\HH}=\{0\}\times\Matr$. Then \begin{displaymath} \fZ_{B_1}\cap (\{0\}\times\Matr) =\{0\}\times \bigcap_{(t,u)\in\sP} \big\{C\in\Matr : \langle Cu,u\rangle\leq t\big\}, \end{displaymath} where $\sP$ is the Poisson process on $\R_+\times\Sphere$ with intensity being the product of the Lebesgue measure multiplied by $d$ and the uniform probability measure on $\Sphere$. The factor $d$ results from taking the ratio of the surface area of the unit sphere and the volume of the unit ball. 
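The factor $d$ can be made explicit (with $\kappa_d$ denoting the volume of $B_1$; notation ours):
\begin{displaymath}
\frac{\Theta_{d-1}\big(B_1,\R^d\times\Sphere\big)}{V_d(B_1)}
=\frac{\sH^{d-1}(\Sphere)}{\kappa_d}
=\frac{d\kappa_d}{\kappa_d}=d,
\end{displaymath}
since the surface area of the unit sphere is $d$ times the volume of the unit ball.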
By Proposition \ref{prop:rec_cone}, since $S(B_1,y)=H^{-}_y(0)=\{x\in\R^d: \langle x,y\rangle\leq 0\}$ for $y\in\partial B_1$, \begin{displaymath} {\rm rec}(\fZ_{B_1}\cap (\{0\}\times\Matr)) =\{0\}\times \bigcap_{y\in B_1}\big\{C\in\Matr : \langle Cy,y\rangle \leq 0 \big\}. \end{displaymath} Thus, \begin{displaymath} {\rm rec}\big (\check{\fZ}_{B_1}\cap (\{0\}\times\Matr) \big) =\{0\}\times \{C\in\Matr : C\text{ is positive semi-definite}\}, \end{displaymath} where positive semi-definiteness of a (not necessarily symmetric) matrix $C$ means that $\langle Cy,y\rangle\geq0$ for all $y\in\R^d$. In particular, the second factor contains the subspace $\Matr[SSym]$ of all skew-symmetric matrices, as well as all real symmetric matrices with nonnegative eigenvalues. The former reflects the fact that $B_1$ is invariant with respect to $\ortho$, for which $\Matr[SSym]$ is the Lie algebra. The latter is a consequence of the fact that $\XX_{B_1,\gl}(\Xi_n)$ contains all scalings with scaling factors (possibly different along pairwise orthogonal directions) greater than or equal to $1$. \end{example} \begin{example}[Special linear group] Let $\GG$ be the special linear group $\mathbb{SL}_d$, which consists of all $d\times d$ real-valued matrices with determinant one, and assume again that $\TT=\{0\}$. The elements of the corresponding Lie algebra $\fG= \{C\in \Matr: \Tr C = 0\}$ are matrices with zero trace. Thus, we can set $\fC_{\HH}=\{0\}\times\fG$. If $K=B_1$, then \begin{displaymath} \fZ_{B_1}\cap (\{0\}\times\fG)=\{0\}\times \bigcap_{(t,u)\in\sP} \big\{C\in \fG:\langle C u,u\rangle \leq t\big\}. \end{displaymath} By Proposition~\ref{prop:rec_cone}, \begin{align*} \mathfrak{R}:={\rm rec}\big (\fZ_{B_1}\cap (\{0\}\times\fG) \big) &=\{0\}\times \bigcap_{y\in B_1}\big\{C\in \fG:\langle Cy,y\rangle\leq 0 \big\}\\ &=\{0\}\times \bigcap_{y\in B_1}\big\{C\in \Matr: \Tr C=0, \langle Cy,y\rangle\leq 0 \big\}.
\end{align*} The intersection of $\mathfrak{R}$ and $\check{\mathfrak{R}}$ is called the lineality space of $\fZ_{B_1}\cap (\{0\}\times\fG)$; it consists of all vectors that are parallel to a line contained in $\fZ_{B_1}\cap (\{0\}\times\fG)$, see p.~16 in~\cite{schn2}. Clearly, the lineality space of $\fZ_{B_1}\cap (\{0\}\times\fG)$ is a.s. equal to \begin{displaymath} \mathfrak{R}\cap \check{\mathfrak{R}} =\{0\}\times \bigcap_{y\in B_1}\big\{C\in \Matr: \Tr C=0, \langle Cy,y\rangle=0\big\}=\{0\}\times \Matr[SSym]. \end{displaymath} The vector space of square matrices $\Matr$ is the direct sum of the vector spaces of symmetric and skew-symmetric matrices: \begin{displaymath} \Matr=\Matr[Sym] \oplus \Matr[SSym]. \end{displaymath} Furthermore, with respect to the inner product $\langle A, B \rangle := \Tr (A B^{\top})$ this direct sum decomposition is orthogonal. Similarly, the space $\fG$ is a direct sum of two vector spaces $\Matr[SSym]$ and $\fG_+$, where $\fG_+ := \{C\in \Matr[Sym]: \Tr C = 0\}$. By Lemma~1.4.2 in \cite{schn2} we a.s.~have the orthogonal decomposition \begin{displaymath} \fZ_{B_1}\cap (\{0\}\times\fG) =\{0\}\times\left(\Matr[SSym] \oplus \big(\fZ_{B_1}\cap (\{0\}\times\fG)\big)_{+}\right), \end{displaymath} where \begin{align*} \big(\fZ_{B_1}\cap (\{0\}\times\fG) \big)_{+} := \bigcap_{(t,u)\in\sP} \big\{C\in \Matr[Sym] : \Tr C=0, \langle Cu,u\rangle\leq t \big\}. \end{align*} If a matrix $C\in \fG_+$ does not vanish, then at least one of its eigenvalues is strictly positive (because all eigenvalues are real by symmetry and their sum is $0$). If we denote by $v$ the corresponding unit eigenvector, then $\langle C v, v\rangle >0$. Since the set of $u_i$'s for which $(t_i,u_i)\in\sP$ is a.s.\ dense on the unit sphere in $\R^d$, it follows that $\langle C u_i, u_i\rangle >0$ for some $i$. Thus, $sC\notin \big(\fZ_{B_1}\cap (\{0\}\times\fG) \big)_{+}$ if $s>0$ is sufficiently large. 
Therefore, the convex set $\big(\fZ_{B_1}\cap (\{0\}\times\fG) \big)_{+}$ is a.s.~bounded and, hence, a compact subset of $\Matr[Sym]$. As in the previous example, the unbounded component $\{0\}\times \Matr[SSym]$ is present in $\fZ_{B_1}\cap (\{0\}\times\fG)$ due to the fact that $B_1$ is invariant with respect to the orthogonal group $\ortho$, which is a Lie subgroup of $\mathbb{SL}_d$. Since arbitrarily large scalings are not allowed in $\mathbb{SL}_d$, the random closed set $\fZ_{B_1}\cap (\{0\}\times\fG)$ is a.s.~bounded in the directions orthogonal to $\{0\}\times \Matr[SSym]$.
\end{example}
\begin{example}
Let $\GG=\ortho$ be the orthogonal group. As has already been mentioned, the corresponding Lie algebra $\fG=\Matr[SSym]$ is the $d(d-1)/2$-dimensional subspace of $\Matr$ consisting of all skew-symmetric matrices. If $\TT=\R^d$, then $\fC_{\HH}=\R^d\times \fG$ and
\begin{displaymath}
\fZ_{K}\cap (\R^d\times \fG)=\bigcap_{(t,\eta,u)\in\sP_K} \Big\{(x,C)\in\R^d\times \Matr[SSym]: \langle C\eta,u\rangle +\langle x,u\rangle \leq t \Big\}.
\end{displaymath}
In the special case $d=2$ the Lie algebra $\fG$ is one-dimensional and is represented by the matrices
\begin{displaymath}
C=
\begin{pmatrix}
0 & c\\
-c & 0
\end{pmatrix},
\quad c\in\R.
\end{displaymath}
Write $\eta:=(\eta',\eta'')$ and $u:=(u',u'')$. Then, with $\cong$ denoting the isomorphism of $\R^2\times \mathsf{M}_2^{\mathrm{SSym}}$ and $\R^2\times \R$, we can write
\begin{align*}
\fZ_{K}\cap \big(\R^2\times \mathsf{M}_2^{\mathrm{SSym}}\big) &\cong \bigcap_{(t,\eta,u)\in\sP_K} \Big\{(x,c)\in \R^2\times\R: c(\eta''u'-\eta'u'') +\langle x,u\rangle \leq t\Big\}\\
&=\bigcap_{(t,\eta,u)\in\sP_K} \Big\{(x,c)\in \R^2\times\R: \langle x,u\rangle \leq t-c[u,\eta]\Big\},
\end{align*}
where $[u,\eta]$ is the (signed) area of the parallelogram spanned by $u$ and $\eta$.
Therefore, $\fZ_{K}\cap \big (\R^2\times \mathsf{M}_2^{\mathrm{SSym}}\big)$ is (isomorphic to) the zero cell of the hyperplane tessellation $H_{(u,[u,\eta])}(t)$, $(t,\eta,u)\in\sP_K$, in $\R^2\times\R$. Let $K=[-1,1]^2$ be the square in $\R^2$. Then $\Theta_{d-1}(K,\cdot)$ is the sum of four terms, each being the product of the $1$-dimensional Hausdorff measure supported by one of the sides of $K$ and the Dirac measure at the unit normal vector $u_1,\dots,u_4$ to this side. The push-forward of each of these four measures is the product of the Lebesgue measure on $[-1,1]$ and the Dirac measure at $u_1,\dots,u_4$. Hence,
\begin{displaymath}
\fZ_{K}\cap (\R^2\times \fG)\cong\bigcap_{i=1}^4 \bigcap_{(t,y)\in\sP_i} \Big\{(x,c)\in \R^2\times\R:c y+\langle x,u_i\rangle \leq t\Big\},
\end{displaymath}
where $\sP_1,\dots,\sP_4$ are independent homogeneous Poisson processes on $\R_+\times [-1,1]$ of intensity $1/4$ (recall that the area of $K$ is $4$). In the special case $\TT=\{0\}$ we can set $x=0$, so that
\begin{displaymath}
\fZ_{K}\cap (\{0\} \times \mathsf{M}_2^{\mathrm{SSym}}) \cong\{0\}\times \bigcap_{i=1}^4 \bigcap_{(t,y)\in\sP_i} \big\{c\in\R:c y\leq t \big\}.
\end{displaymath}
An easy calculation (based on considering separately $\sP_i$ restricted to $\R_+\times[0,1]$ and $\R_+\times[-1,0]$) shows that the double intersection above is a segment $[-\zeta',\zeta'']$, where $\zeta'$ and $\zeta''$ are two exponentially distributed random variables of mean one.
\end{example}
\begin{example}[Scaling by constants]
Let $\HH$ be the product of $\TT=\R^d$ and the family $\GG=\{e^{r}\II:r\in\R\}$ of scaling transformations, so that $\fC_{\HH}=\R^d\times \{r\II:r\in\R\}$.
Then, with $\cong$ denoting the natural isomorphism between $\fC_{\HH}$ and $\R^d\times \R$,
\begin{align*}
\fZ_{K}\cap \fC_{\HH} &\cong\bigcap_{(t,\eta,u)\in\sP_K} \Big\{(x,r)\in\R^{d}\times\R: r\langle \eta,u\rangle + \langle x,u\rangle \leq t\Big\}\\
&=\bigcap_{(t,\eta,u)\in\sP_K} \Big\{(x,r)\in\R^{d}\times\R: rh(K,u) + \langle x,u\rangle \leq t\Big\}\\
&=\bigcap_{(t,\eta,u)\in\sP_K} \Big\{(x,r)\in\R^{d}\times\R: h(rK+x,t^{-1}u)\leq 1\Big\}\\
&=\bigcap_{(t,\eta,u)\in\sP_K} \Big\{(x,r)\in\R^{d}\times\R: t^{-1}u\in (rK+x)^{o}\Big\}\\
&=\Big\{(x,r)\in\R^{d}\times\R: \conv (\{t^{-1}u: (t,\eta,u)\in\sP_K\})\subseteq (rK+x)^{o}\Big\}.
\end{align*}
The set $\conv (\{t^{-1}u: (t,\eta,u)\in\sP_K\})$ has been studied in \cite{MarMol:2021}; in particular, the polar set to this hull is the zero cell $Z_K$ of the Poisson hyperplane tessellation in $\R^d$, whose intensity measure is the product of the Lebesgue measure (scaled by $V_d^{-1}(K)$) and the surface area measure $S_{d-1}(K,\cdot)=\Theta_{d-1}(K,\R^d\times\cdot)$ of $K$. Thus, we can write
\begin{equation}\label{eq:scalings_translatins}
\fZ_{K}\cap \fC_{\HH}\cong \{(x,r)\in\R^{d}\times\R: rK+x\subseteq Z_K\}.
\end{equation}
If $K$ is the unit Euclidean ball $B_1$, then \eqref{eq:scalings_translatins} can be recast as
\begin{displaymath}
\fZ_{B_1}\cap \fC_{\HH}\cong \Big\{(x,r)\in \R^d\times\R: x\in \cap_{i\geq1} H^-_{u_i}(t_i-r)\Big\},
\end{displaymath}
where $\{H_{u_i}(t_i),i\geq1\}$ is a stationary Poisson hyperplane tessellation in $\R^d$. For every $r_0\geq 0$, the section of $\fZ_{B_1}\cap \fC_{\HH}$ by the hyperplane $\{r=r_0\}$ is the set $\{x\in\R^d: x+B_{r_0} \subset Z\}$, where $Z$ is the zero cell of the aforementioned tessellation. If $r_0<0$, then the section by $\{r=r_0\}$ is the Minkowski sum $Z+B_{-r_0}$.
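To make the recast for $K=B_1$ explicit, note that $h(B_1,u)=1$ for every unit vector $u$ and that $\Theta_{d-1}(B_1,\cdot)$ is concentrated on pairs with $\eta=u$, so each defining inequality simplifies to
\begin{displaymath}
r\langle \eta,u\rangle+\langle x,u\rangle\leq t
\quad\Longleftrightarrow\quad
\langle x,u\rangle\leq t-r,
\end{displaymath}
which yields the above description of $\fZ_{B_1}\cap \fC_{\HH}$ in terms of the shifted half-spaces $H^{-}_{u_i}(t_i-r)$.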
The set $\fZ_{B_1}\cap \fC_{\HH}$ can also be considered as the zero cell of the Poisson hyperplane tessellation in $\R^{d+1}$ with the directional measure being the product of the surface area measure of the unit ball (scaled by its volume) and the Dirac measure $\delta_{-1}$.
\end{example}
\begin{example}[Diagonal matrices]
Let $\GG$ be the group of diagonal matrices with positive entries given by $\diag(e^z)$ for $z\in\R^d$. If $\TT=\R^d$, then $\fC_{\HH} \cong \R^d\times\R^d$. If $K$ is the unit ball, we get
\begin{displaymath}
\fZ_{B_1}\cap\fC_{\R^d\times \GG}\cong\Big\{(x,z)\in \R^{d}\times\R^d: x\in \bigcap_{i\geq1} H^-_{u_i}\Big(t_i-\sum_{j=1}^d z_ju_{i,j}^2\Big)\Big\}.
\end{displaymath}
If $\TT=\{0\}$ and $K$ is arbitrary, then
\begin{align*}
\fZ_{K}\cap\fC_{\{0\}\times \GG} &=\{0\}\times\bigcap_{(t,\eta,u)\in\sP_K} \big\{z\in \R^{d}: \langle \diag(z)\eta,u\rangle \leq t\big\}\\
&=\{0\}\times \bigcap_{(t,\eta,u)\in\sP_K} \big\{z\in \R^{d}: \langle z,(\diag(\eta)u)\rangle \leq t\big\},
\end{align*}
where $\diag(\eta) u$ is the vector given by componentwise products of $\eta$ and $u$. Thus, the above intersection is the zero cell of a Poisson tessellation in $\R^d$ whose directional measure is obtained as the push-forward of $\Theta_{d-1}(K,\cdot)$ under the map $(x,u)\mapsto \diag(x)u$. If $K$ is the unit ball, then this directional measure is the push-forward of the uniform distribution on the unit sphere under the map which transforms $x$ to the vector composed of the squares of its components. By Proposition~\ref{prop:rec_cone},
\begin{displaymath}
{\rm rec}(\fZ_{B_1}\cap\fC_{\{0\}\times \GG}) \cong \{0\}\times \bigcap_{u\in B_1} \big\{z\in\R^d:\langle z,\diag(u)u\rangle\leq 0 \big\} =\{0\}\times (-\infty,0]^d.
\end{displaymath} \end{example} \begin{example}[Random cones and spherical polytopes]\label{ex:cones} Assume that $K$ is the closed unit upper half-ball $B_1^{+}$ defined at \eqref{eq:upper-ball}, $\TT=\{0\}$ and $\GG=\mathbb{SO}_d$, so that $\fC_{\HH}=\{0\}\times \Matr[SSym]$. If $\Xi_n$ is a sample from the uniform distribution in $K$, then $\pi(\Xi_n)$ is a sample from the uniform distribution on the half-sphere $\mathbb{S}^{d-1}_{+}$, where $\pi(x)=x/\|x\|$ for $x\neq 0$. Indeed, for a Borel set $A\subset \SS^{d-1}_{+}$, \begin{displaymath} \P\{\pi(\xi_1)\in A\}=\P\{\xi_1/\|\xi_1\|\in A\} =\P\{\xi_1\in {\rm pos}(A)\} =\frac{V_d({\rm pos}(A)\cap B_1^{+})}{V_d(B_1^{+})} =\frac{\mathcal{H}_{d-1}(A)}{\mathcal{H}_{d-1}(\SS^{d-1}_{+})}, \end{displaymath} where $\mathcal{H}_{d-1}$ is the $(d-1)$-dimensional Hausdorff measure. According to \eqref{eq:cones2}, \begin{displaymath} Q_n\cap \Sphere=\conv_{K,\HH}(\Xi_n)\cap\Sphere = {\rm pos}(\Xi_n) \cap \Sphere ={\rm pos}(\pi(\Xi_n))\cap \Sphere \end{displaymath} is a closed random spherical polytope obtained as the spherical hull of $n$ independent points, uniformly distributed on $\SS^{d-1}_{+}$. This object has been intensively studied in \cite{kab:mar:tem:19}. Let $T_n:\R^d\to\R^d$ be a linear mapping \begin{displaymath} T_n(x_1,x_2,\ldots,x_d)=(n x_1,x_2,\ldots,x_d),\quad n\in\NN. 
\end{displaymath}
Theorem~2.1 in \cite{kab:mar:tem:19} implies that the sequence of random closed cones $(T_n({\rm pos}(\Xi_n)))_{n\in\NN}$ converges in distribution in the space of closed subsets of $\R^d$ endowed with the Fell topology to a closed random cone whose intersection with the affine hyperplane $\{(x_1,x_2,\dots,x_d)\in\R^d:x_1=1\}$ is the convex set $\{1\}\times\conv(\widetilde{\sP})$, where $\widetilde{\sP}$ is a Poisson point process on $\R^{d-1}$ whose intensity measure has the density
\begin{equation}
\label{eq:poisson_intensity_sphere_d-1}
x\mapsto c_d\|x\|^{-d},\quad x\in \R^{d-1}\setminus \{0\},
\end{equation}
with an explicit positive constant $c_d$. The following arguments show that it is possible to establish an isomorphism between the positive dual cone $\{x\in\R^d:\langle x,\xi_k\rangle\geq 0,k=1,\dots,n\}$ of the cone ${\rm pos}(\Xi_n)$ and the set $\XX_{K,\HH}(\Xi_n)$ defined at \eqref{eq:4}, so that our limit theorem yields the limit for this normalised dual cone. Denote by $e_1,\dots,e_d$ the standard basis vectors. Since $\langle C\eta,u\rangle=0$ for all $(t,\eta,u)\in\sP_{B_1^{+}}$ with $u\in \SS^{d-1}_{+}$ and $C\in\Matr[SSym]$ (for such points a.s.~$\eta=u$, and $\langle Cu,u\rangle=0$ by skew-symmetry), we only need to consider $(t,\eta,u)\in\sP_{B_1^{+}}$ such that $u=-e_1$, meaning that $\eta$ lies on the flat boundary part of $B_1^+$, so that
\begin{align*}
\fZ_{B_1^{+}}\cap(\{0\}\times \Matr[SSym]) &=\{0\}\times \bigcap_{(t,\eta,u)\in\sP_{B_1^{+}}} \Big\{C\in\Matr[SSym]: \langle C\eta,u\rangle \leq t\Big\}\\
&=\{0\}\times\bigcap_{(t,\eta,-e_1)\in\sP_{B_1^{+}}} \Big\{C\in\Matr[SSym]: \langle C\eta,-e_1\rangle \leq t\Big\}\\
&=\{0\}\times\bigcap_{(t,\eta,-e_1)\in\sP_{B_1^{+}}} \Big\{C\in\Matr[SSym]: \langle \eta,Ce_1\rangle \leq t\Big\}.
\end{align*}
Note that every skew-symmetric matrix can be uniquely decomposed into a sum of a skew-sym\-metric matrix with zeros in the first row and the first column and a skew-symmetric matrix with zeros everywhere except the first row and the first column.
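For instance, for $d=3$ this decomposition reads
\begin{displaymath}
\begin{pmatrix}
0 & a & b\\
-a & 0 & c\\
-b & -c & 0
\end{pmatrix}
=
\begin{pmatrix}
0 & 0 & 0\\
0 & 0 & c\\
0 & -c & 0
\end{pmatrix}
+
\begin{pmatrix}
0 & a & b\\
-a & 0 & 0\\
-b & 0 & 0
\end{pmatrix},
\qquad a,b,c\in\R.
\end{displaymath}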
This corresponds to the direct sum decomposition of the space of skew-symmetric matrices $\Matr[SSym]=V_1\oplus V_2$, where $V_1\cong \sM^{\mathrm{SSym}}_{d-1}$. For every $(t,\eta,-e_1)\in\sP_{B_1^{+}}$ and $C\in V_1$ we obviously have $\langle \eta,Ce_1\rangle=0$. Thus,
\begin{displaymath}
\fZ_{B_1^{+}}\cap \big(\{0\}\times \Matr[SSym]\big) \cong \{0\}\times\Big(\sM^{\mathrm{SSym}}_{d-1} \oplus \bigcap_{(t,\eta,-e_1)\in\sP_{B_1^{+}}} \big\{C\in V_2: \langle \eta,Ce_1\rangle \leq t\big\}\Big).
\end{displaymath}
The fact that $\fZ_{B_1^{+}}\cap(\{0\}\times \Matr[SSym])$ contains the subspace $V_1$ has the following interpretation. It is known that the exponential map from $\Matr[SSym]$ to $\mathbb{SO}_d$ is surjective, that is, every orthogonal matrix with determinant one can be represented as the exponential of a skew-symmetric matrix, see Corollary 11.10 in \cite{Hall:2003}. The image $\exp(V_1)$ is precisely the set of orthogonal matrices with determinant one for which $e_1$ is a fixed point. This set is a subgroup of $\mathbb{SO}_d$, which is isomorphic to $\mathbb{SO}_{d-1}$, and $B_1^{+}$ is invariant with respect to all transformations from $\exp(V_1)$. The set $\exp(V_2)$ is not a subgroup of $\mathbb{SO}_d$ but is a smooth manifold of dimension $d-1$. Note that the above construction is a particular case of the well-known general concept of quotient manifolds in Lie groups, see Chapter 11.4 in \cite{Hall:2003}. There is a natural isomorphism $\phi:\{0\}\times V_2\to \R^{d-1}$ which sends $(0,C)\in \{0\}\times V_2$ to the vector $\phi(0,C)\in\R^{d-1}$, which is the first column of $C$ with the first component (which is always zero) deleted. Moreover, if $(t,\eta,-e_1)\in\sP_{B_1^{+}}$, then $\eta$ is necessarily of the form $\eta=(0,\eta')$, where $\eta'\in B'_1$ and $B'_1$ is the $(d-1)$-dimensional centred unit ball.
It can be checked that the Poisson process $\{t^{-1}\eta' : (t,\eta,-e_1)\in\sP_{B_1^{+}}\}$ on $\R^{d-1}\setminus\{0\}$ has intensity~\eqref{eq:poisson_intensity_sphere_d-1}. Summarising,
\begin{displaymath}
\phi \big(\fZ_{B_1^{+}}\cap(\{0\}\times V_2) \big) =\bigcap_{(t,\eta,-e_1)\in\sP_{B_1^{+}}}\big\{x\in \R^{d-1}: \langle \eta',x\rangle \leq t\big\}=:\widetilde{Z}_0
\end{displaymath}
is the zero cell of the Poisson hyperplane tessellation $\{H^{-}_{\eta'}(t):(t,\eta,-e_1)\in\sP_{B_1^{+}}\}$ of $\R^{d-1}$. Remarkably, the polar set to $\widetilde{Z}_0$ is the convex hull of $\widetilde{\sP}$.
\end{example}
\section{Appendix}
\label{sec:appendix}
The subsequent presentation concerns random sets in Euclidean space $\R^d$ of generic dimension $d$. These results are applied in the main part of this paper to random sets of affine transformations, which are subsets of the space $\R^d\times\Matr$. This latter space can be considered a Euclidean space of dimension $d+d^2$. Let $\sF^d$ be the family of closed sets in $\R^d$. Denote by $\sC^d$ the family of (nonempty) compact sets and by $\sK^d$ the family of convex compact sets. The family of convex compact sets containing the origin is denoted by $\sK^d_0$, while $\sK^d_{(0)}$ is the family of convex compact sets which contain the origin in their interiors. Each set from $\sK^d_{(0)}$ is a convex body (a convex compact set with nonempty interior). The family $\sF^d$ is endowed with the Fell topology, whose base consists of finite intersections of the sets $\{F:F\cap G\neq\emptyset\}$ and $\{F:F\cap L=\emptyset\}$ for all open $G$ and compact $L$. The definition of the Fell topology and its basic properties can be found in Section~12.2 of \cite{sch:weil08} or Appendix~C in \cite{mol15}. It is well known that $F_n\to F$ in the Fell topology (this will be denoted by $F_n\toFn F$) if and only if $F_n$ converges to $F$ in the Painlev\'e--Kuratowski sense, that is, $\limsup F_n=\liminf F_n=F$.
Recall that $\limsup F_n$ is the set of all limits of convergent subsequences $x_{n_k}\in F_{n_k}$, $k\geq1$, and $\liminf F_n$ is the set of limits of convergent sequences $x_n\in F_n$, $n\geq1$. The space $\sF^d$ is compact in the Fell topology, see Theorem 12.2.1 in \cite{sch:weil08}. The family $\sC^d$ is endowed with the topology generated by the Hausdorff metric, which we denote by $d_{\rm H}$. The topology induced by $d_{\rm H}$ on $\sK^d$ is exactly the Painlev\'e--Kuratowski topology, that is, the topology induced on $\sK^d\subset\sF^d$ by the Fell topology on $\sF^d$, see Theorem~1.8.8 in~\cite{schn2}. In comparison, the topology induced by $d_{\rm H}$ on $\sC^d$ is strictly finer than the topology induced on $\sC^d$ by the Fell topology, see Theorem~12.3.2 in \cite{sch:weil08}. It is easy to see that the convergence $(F_n\cap L)\toHn (F\cap L)$, as $n\to\infty$, for each compact set $L$ implies the Fell convergence $F_n\toFn F$. The converse implication is false in general, since the intersection operation is not continuous, see Theorem~12.2.6 in~\cite{sch:weil08}. The following result establishes a kind of continuity property for the intersection map. A closed set $F$ is said to be {\it regular closed} if it coincides with the closure of its interior. The empty set is also considered regular closed. A nonempty convex closed set is regular closed if and only if its interior is not empty.
\begin{lemma}
\label{lemma:int-det}
Let $(F_n)_{n\in\NN}$ and $F$ be closed sets such that $F_n\toFn F$, $n\to\infty$, and let $L$ be a closed set in $\R^d$. Assume that one of the following conditions holds:
\begin{enumerate}[(i)]
\item $F\cap L$ is regular closed;
\item the sets $F$ and $L$ are convex, $0\in\Int F$ and $0\in L$.
\end{enumerate}
Then $(F_n\cap L)\toFn (F\cap L)$, as $n\to\infty$.
\end{lemma}
\begin{proof}
By Theorem~12.2.6(a) in~\cite{sch:weil08}, we have
\begin{displaymath}
\limsup(F_n\cap L)\subset (F\cap L).
\end{displaymath}
If $F$ is empty, this finishes the proof. Otherwise, it suffices to show that $(F\cap L)\subset \liminf (F_n\cap L)$; note that $F_n$ is nonempty for all sufficiently large $n$.
(i) For every $x\in\Int(F\cap L)$, there exists a sequence $x_n\in F_n$, $n\geq1$, such that $x_n\to x$; since $x\in\Int L$, we have $x_n\in L$ for all sufficiently large $n$. Thus, $\Int(F\cap L)\subset \liminf(F_n\cap L)$ and therefore
$$
F\cap L= \cl (\Int (F\cap L))\subset \liminf(F_n\cap L),
$$
where for the equality we have used that $F\cap L$ is regular closed, and for the inclusion that the lower limit is always a closed set.
(ii) First of all, note that
\begin{equation}\label{lem:intersection_proof1}
\cl ((\Int F)\cap L)=F\cap L.
\end{equation}
Indeed, if $x\in (F\cap L)\setminus \{0\}$, then convexity of $F$ and $L$, together with $0\in\Int F$ and $0\in L$, implies that $x_n:=(1-\frac{1}{n})x\in (\Int F)\cap L$, for all $n\in\NN$. Since $x_n\to x$, we obtain $x\in \cl ((\Int F)\cap L)$. Obviously, $0\in\cl ((\Int F)\cap L)$. Thus, $F\cap L \subseteq \cl ((\Int F)\cap L)$. The reverse inclusion holds trivially. Taking into account~\eqref{lem:intersection_proof1} and that the lower limit is a closed set, it suffices to show that
\begin{equation}\label{lem:intersection_proof2}
(\Int F)\cap L\subset \liminf(F_n\cap L).
\end{equation}
Assume that $x\in (\Int F)\cap L$. Pick a small enough $\eps>0$ and a sufficiently large $R>0$ such that $x+B_\eps\subset F\cap B_R$. Since $F\cap B_R$ is convex and contains the origin in its interior, it is regular closed. Thus, by part (i), $F_n\cap B_R\toFn F\cap B_R$. By Theorem 12.3.2 in \cite{sch:weil08}, we also have $F_n\cap B_R\toHn F\cap B_R$. In particular, there exists $n_0\in\NN$ such that $F\cap B_R\subset (F_n\cap B_R)+B_{\eps/2}$, for $n\geq n_0$, and, therefore, $x+B_{\eps/2}\subset F_n$. Hence $x\in F_n\cap L$ for all $n\geq n_0$. Thus, \eqref{lem:intersection_proof2} holds.
\end{proof}
The following result establishes continuity properties of the polar transform $L\mapsto L^o$ defined by \eqref{eq:polar} on various subfamilies of convex closed sets which contain the origin. It follows from Theorem~4.2 in \cite{mol15} that the polar map $L\mapsto L^o$ is continuous on $\sK^d_{(0)}$ in the Hausdorff metric or, equivalently, in the Fell topology. While $L^o$ is compact if $L$ contains the origin in its interior, $L^o$ is not necessarily bounded for $L\in\sK^d_0\setminus \sK^d_{(0)}$. Recall that $\dom(L)$ denotes the set of $u\in\R^d$ such that $h(L,u)<\infty$.
\begin{lemma}
\label{lemma:polar-map}
Let $L$ and $L_n$, $n\in\NN$, be convex closed sets which contain the origin.
\begin{enumerate}[(i)]
\item Assume that $\dom(L_n)=\dom(L)$ is closed for all $n\in\NN$, and $h(L_n,u)\to h(L,u)$, as $n\to\infty$, uniformly over $u\in \dom(L)\cap\Sphere$. Then $L_n^o\to L^o$ in the Fell topology.
\item The polar transform is continuous as a map from $\sK^d_0$ with the Hausdorff metric to $\sF^d$ with the Fell topology.
\item The polar transform is continuous as a map from the family of convex closed sets which contain the origin in their interior with the Fell topology to $\sK^d_0$ with the Hausdorff metric.
\item The polar transform is continuous as a map from $\sK^d_{(0)}$ to $\sK^d_{(0)}$, where both spaces are equipped with the Hausdorff metric.
\end{enumerate}
\end{lemma}
\begin{proof}
(i) Consider a sequence $(x_{n_k})_{k\in\NN}$ such that $x_{n_k}\in L^o_{n_k}$, $k\in\NN$, and $x_{n_k}\to x$, as $k\to\infty$. Assume that $x\notin L^o$. If $h(L,x)=\infty$, that is, $x\in (\dom(L))^c$, then also $x_{n_k} \in (\dom(L))^c$ for all sufficiently large $k$, since the complement to $\dom(L)$ is open. Hence, $x_{n_k} \in (\dom(L_{n_k}))^c$ and $h(L_{n_k},x_{n_k})=\infty$, meaning that $x_{n_k}\notin L_{n_k}^o$, which contradicts the choice of $x_{n_k}$. Assume now that $h(L,x)<\infty$ and $h(L_{n_k},x_{n_k})<\infty$ for all $k$.
If $u,v\in\dom(L)\cap\Sphere$, then $h(L,u)=\langle x,u\rangle$ for some $x\in L$, so that
\begin{displaymath}
h(L,u)=\langle x,u-v\rangle +\langle x,v\rangle \leq \|x\|\|u-v\|+h(L,v).
\end{displaymath}
Hence, the support function of $L$ is Lipschitz on $\dom(L)\cap\Sphere$ with the Lipschitz constant at most $c_L:=\sup_{u\in\dom(L)\cap\Sphere} h(L,u)<\infty$. Since we assume $x\notin L^o$, we have $h(L,x)\geq 1+\eps$ for some $\eps>0$. The uniform convergence assumption yields that
\begin{displaymath}
h(L_{n_k},x_{n_k})\geq h(L,x_{n_k})-\eps/4 \geq h(L,x)-\eps/4-c_L\|x_{n_k}-x\|\geq 1+\eps/2
\end{displaymath}
for all sufficiently large $k$, meaning that $x_{n_k}\notin L^o_{n_k}$, which is a contradiction. Hence, $\limsup L^o_n\subset L^o$. Let $x\in L^o$. Then $h(L,x)\leq1$, so that $h(L_n,x)\leq 1+\eps_n$, where $\eps_n\downarrow 0$ as $n\to\infty$. Letting $x_n:=x/(1+\eps_n)$, we have that $x_n\in L^o_n$ and $x_n\to x$. Thus, $L^o\subset \liminf L^o_n$.
(ii) If all the sets $L_n$ and $L$ are compact, then $\dom(L)=\R^d$, and the convergence in the Hausdorff metric is equivalent to the uniform convergence of support functions on $\Sphere$, see Lemma 1.8.14 in~\cite{schn2}. Thus, $L_n^o\toFn L^o$ by part (i).
(iii) Assume that $L_n\toFn L$. In view of Lemma~\ref{lemma:int-det}(i), $L_n\cap B_R\toFn L\cap B_R$, for every fixed $R>0$, and, therefore, $L_n\cap B_R\toHn L\cap B_R$ by Theorem 1.8.8 in~\cite{schn2}. Fix a sufficiently small $\eps>0$ such that $B_\eps\subset L$. Then $B_{\eps/2}\subset L_n$, for all sufficiently large $n$. By part (ii), $(L_n\cap B_R)^o\toFn (L\cap B_R)^o$. Since $(L_n\cap B_R)^o\subseteq B_{(\eps/2)^{-1}}$, for all sufficiently large $n$, $(L_n\cap B_R)^o\toHn (L\cap B_R)^o$, again by Theorem~1.8.8 in~\cite{schn2}.
Finally, note that
\begin{align*}
d_{\rm H}(L^{o}_n,L^{o}) &\leq d_{\rm H}(L^{o}_n,(L_n\cap B_R)^{o}) +d_{\rm H}((L_n\cap B_R)^{o},(L\cap B_R)^{o})+ d_{\rm H}((L\cap B_R)^{o},L^{o})\\
&= d_{\rm H}(L^{o}_n,\conv (L_n^{o}\cup B_{R^{-1}})) +d_{\rm H}((L_n\cap B_R)^{o},(L\cap B_R)^{o})+ d_{\rm H}(\conv (L^{o}\cup B_{R^{-1}}),L^{o})\\
&\leq R^{-1}+d_{\rm H}((L_n\cap B_R)^{o},(L\cap B_R)^{o})+R^{-1},
\end{align*}
where we have used that $(A_1\cap A_2)^{o}=\conv (A_1^{o}\cup A_2^{o})$ and $B_R^{o}=B_{R^{-1}}$.
(iv) Follows from (iii), since the Fell topology on $\sK^d_{(0)}$ coincides with the topology induced by the Hausdorff metric.
\end{proof}
A random closed set $X$ is a measurable map from a probability space to $\sF^d$ endowed with the Borel $\sigma$-algebra generated by the Fell topology. This is equivalent to the assumption that $\{X\cap L\neq\emptyset\}$ is a measurable event for all compact sets $L$. The distribution of $X$ is uniquely determined by its capacity functional
\begin{displaymath}
T_X(L)=\Prob{X\cap L\neq\emptyset},\quad L\in\sC^d.
\end{displaymath}
A sequence $(X_n)_{n\in\NN}$ of random closed sets in $\R^d$ converges in distribution to a random closed set $X$ (notation $X_n\dodn X$) if the corresponding probability measures on $\sF^d$ (with the Fell topology) weakly converge. By Theorem~1.7.7 in~\cite{mo1}, this is equivalent to the pointwise convergence of capacity functionals
\begin{equation}
\label{eq:3}
T_{X_n}(L)\to T_X(L) \quad \text{as}\; n\to\infty
\end{equation}
for all $L\in\sC^d$ which satisfy
\begin{equation}
\label{eq:15}
\Prob{X\cap L\neq\emptyset}=\Prob{X\cap \Int L\neq\emptyset},
\end{equation}
that is, $T_X(L)=T_X(\Int L)$. The latter condition means that the family $\{F\in\sF^d:F\cap L\neq\emptyset\}$ is a continuity set for the distribution of $X$, and we also say that $L$ itself is a continuity set.
It suffices to impose \eqref{eq:3} for sets $L$ which are regular closed or which are finite unions of balls of positive radii; these families constitute so-called convergence determining classes, see Corollary~1.7.14 in~\cite{mo1}.
\begin{lemma}
\label{lemma:restriction}
A sequence of random closed sets $(X_n)_{n\in\NN}$ in $\R^d$ converges in distribution to a random closed set $X$ if there exists a sequence $(L_m)_{m\in\NN}$ of compact sets such that $\Int L_m\uparrow\R^d$ and $(X_n\cap L_m)\dodn (X\cap L_m)$ as $n\to\infty$ for each $m\in\NN$.
\end{lemma}
\begin{proof}
We will check \eqref{eq:3}. Fix an $L\in\sC^d$ such that $T_X(L)=T_X(\Int L)$. Pick $m\in\NN$ so large that $L_m$ contains $L$ in its interior. We have that
\begin{displaymath}
T_{X\cap L_m}(L)=\Prob{X\cap L_m\cap L\neq\emptyset} =\Prob{X\cap L_m\cap \Int L\neq\emptyset}=T_{X\cap L_m}(\Int L).
\end{displaymath}
Since $(X_n\cap L_m)\dodn (X\cap L_m)$, as $n\to\infty$, we have that
\begin{multline*}
T_{X_n}(L)=\Prob{X_n \cap L\neq\emptyset} =\Prob{X_n\cap L_m \cap L\neq\emptyset}\\
\to\Prob{X\cap L_m\cap L\neq\emptyset}=\Prob{X\cap L\neq\emptyset}=T_X(L),
\end{multline*}
meaning that $X_n\dodn X$.
\end{proof}
For a random closed set $X$, the functional
\begin{displaymath}
I_X(L)=\Prob{L\subset X}, \quad L\in\sB(\R^d),
\end{displaymath}
is called the inclusion functional of $X$. While the capacity functional uniquely determines the distribution of $X$, this is not the case for the inclusion functional, e.g., if $X$ is a singleton with a nonatomic distribution. Let $\sE$ be the family of all convex regular closed subsets of $\R^d$ (including the empty set), and let $\sE'$ denote the family of closed complements to all sets from $\sE$. The whole space also belongs to the family $\sE$. Recall that a nonempty convex closed set is regular closed if and only if its interior is not empty.
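For instance, every closed half-space belongs to $\sE$, and its image under the map $F\mapsto\cl(F^c)$ is the opposite closed half-space:
\begin{displaymath}
F=\{x\in\R^d: \langle x,u\rangle\leq t\}\in\sE,
\qquad
\cl(F^c)=\{x\in\R^d: \langle x,u\rangle\geq t\}\in\sE',
\end{displaymath}
and applying the map once more recovers $F$.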
\begin{lemma}
\label{lemma:bicont}
The map $F\mapsto \cl(F^c)$ is a bicontinuous (in the Fell topology) bijection between $\sE$ and $\sE'$.
\end{lemma}
\begin{proof}
The map $F\mapsto\cl(F^c)$ is self-inverse on $\sE$, hence a bijection. Let us prove continuity. Assume that $F_n\toFn F$. Let $\cl(F^c)\cap G\neq\emptyset$ for an open set $G$. Then $F\neq \R^d$ and $F^c\cap G\neq\emptyset$, meaning that $G$ is not a subset of $F$. Take $x\in G\setminus F$. There exists an $\eps>0$ such that $x+B_\eps\subset G$ and $(x+B_\eps)\cap F=\emptyset$. Since $F_n$ converges to $F$, we have that $(x+B_\eps)\cap F_n=\emptyset$, for sufficiently large $n$. Thus, $F_n^c\cap G\neq\emptyset$, which means that $\cl(F_n^c)\cap G\neq\emptyset$, for sufficiently large $n$. This argument also applies if $F_n$ converges to the empty set. Suppose that $\cl(F^c)\cap L=\emptyset$ for a compact nonempty set $L$ (in this case $F$ is necessarily nonempty). By compactness, $\cl(F^c)\cap (L+B_\eps)=\emptyset$, for a sufficiently small $\eps>0$. Therefore,
\begin{equation}\label{eq:bicont_lemma_proof1}
L+B_\eps\subseteq (\cl(F^c))^c=\Int F.
\end{equation}
By convexity of $F$, it is possible to replace $L$ with its convex hull, so assume that $L$ is convex. Pick a large $R>0$ such that $L+B_\eps\subseteq \Int B_R$. By Lemma~\ref{lemma:int-det}(i) and the same reasoning as in the proof of part~(iii) of Lemma~\ref{lemma:polar-map}, we conclude that
$$
F_n\cap B_R \toHn F\cap B_R,\quad n\to\infty.
$$
Thus, $(F\cap B_R)\subset (F_n\cap B_R)+B_{\eps/2}$ for all sufficiently large $n$. In conjunction with~\eqref{eq:bicont_lemma_proof1} this yields $L+B_\eps\subset (F_n\cap B_R)+B_{\eps/2}$ for sufficiently large $n$. Since $L$ and $F_n\cap B_R$ are convex, we conclude that $L\subset \Int (F_n\cap B_R)\subset \Int F_n$. Hence, $L\cap \cl(F_n^c)=\emptyset$ for all sufficiently large $n$. This observation completes the proof of continuity of the direct mapping.
It remains to prove continuity of the inverse mapping. Assume that $\cl(F_n^c)\toFn \cl(F^c)$, as $n\to\infty$, with $F_n,F\in\sE$. If $F\cap G\neq\emptyset$ for an open set $G$, then $(\Int F)\cap G\neq \emptyset$ and also $\cl(F^c)\neq\R^d$. Take a point $x\in (\Int F)\cap G$. Then $x\notin \cl(F^c)$, so that $x\notin \cl(F_n^c)$, for all sufficiently large $n$, meaning that $x\in F_n$ and $G\cap F_n\neq\emptyset$, for all sufficiently large $n$. Now assume that $F\cap L=\emptyset$ for a nonempty compact set $L$. We aim to show that $F_n\cap L=\emptyset$, for all sufficiently large $n$ (note that $F=\emptyset$ is allowed). By compactness of $L$, for sufficiently small $\eps>0$, we have $F\cap(L+B_\eps)=\emptyset$. Thus,
$$
L+B_\eps\subseteq F^c\subseteq \cl(F^c).
$$
Pick again a sufficiently large $R>0$ such that $L+B_\eps\subseteq B_R$. By Lemma~\ref{lemma:int-det}(i), $(\cl(F_n^c)\cap B_R)\toHn (\cl(F^c)\cap B_R)$, and therefore $(\cl(F^c)\cap B_R)\subset (\cl(F_n^c)\cap B_R)+B_{\eps/2}$ for all sufficiently large $n$. Thus, $L+B_\eps\subset (\cl(F_n^c)\cap B_R)+B_{\eps/2}$, which implies $L\subseteq F_n^c$. Therefore, $L\cap F_n=\emptyset$ and the proof is complete.
\end{proof}
The following result establishes the convergence in distribution of random convex closed sets with values in $\sE$ from the convergence of their inclusion functionals. It provides an alternative proof and an extension of Proposition~1.8.16 in~\cite{mo1}, which establishes this fact for random sets with values in $\sK_{(0)}^d$.
\begin{theorem}
\label{thm:inclusion}
Let $X$ and $X_n$, $n\in\NN$, be random closed sets in $\R^d$ which almost surely take values from the family $\sE$ of regular convex closed sets (including the empty set).
If \begin{equation}\label{eq:22} \Prob{L\subset X_n}\to \Prob{L\subset X}\quad \text{as}\quad n\to\infty \end{equation} for all regular closed $L\in\sK^d$ such that \begin{equation} \label{eq:2} \Prob{L\subset X}=\Prob{L\subset \Int X}, \end{equation} then $X_n\dodn X$, as $n\to\infty$. \end{theorem} \begin{proof} In view of Lemma~\ref{lemma:bicont} it suffices to prove that $\cl(X_n^c)\dodn \cl(X^c)$, as $n\to\infty$. Furthermore, since regular closed compact sets constitute a convergence determining class, see Corollary 1.7.14 in \cite{mo1}, it suffices to check that \begin{equation}\label{eq:thm_inclusion_proof1} \Prob{\cl(X_n^c)\cap L\neq\emptyset}\to \Prob{\cl(X^c)\cap L\neq\emptyset} \quad\text{as}\quad n\to\infty, \end{equation} for all regular closed $L\in\sC^d$, which are continuity sets for $\cl(X^c)$. The latter means that \begin{equation}\label{eq:thm_inclusion_proof2} \Prob{\cl(X^c)\cap L=\emptyset}=\Prob{\cl(X^c)\cap \Int L=\emptyset}. \end{equation} Fix a regular closed set $L\in\sC^d$ such that~\eqref{eq:thm_inclusion_proof2} holds. Since \begin{displaymath} \Prob{\cl(X^c)\cap L=\emptyset}=\Prob{L\subset \Int X}\quad\text{and}\quad \Prob{\cl(X^c)\cap \Int L=\emptyset}=\Prob{\Int L\subset \Int X}, \end{displaymath} we conclude that $$ \Prob{L\subset X}\leq \Prob{\Int L\subset \Int X} =\Prob{L\subset \Int X}\leq\Prob{L\subset X}. $$ Thus, $L$ satisfies \eqref{eq:2}. Let $(\eps_k)_{k\in\NN}$ be a sequence of positive numbers such that $\eps_k\downarrow 0$, as $k\to\infty$, and \begin{displaymath} \Prob{L+B_{\eps_k}\subset X}=\Prob{L+B_{\eps_k}\subset \Int X},\quad k\in\NN. 
\end{displaymath}
Such a sequence exists: since $L+B_\eps$ is compact, $\{L+B_\eps\subset \Int X\}=\bigcup_{\delta>0}\{L+B_{\eps+\delta}\subset X\}$, so the above equality holds at every continuity point of the nonincreasing function $\eps\mapsto \Prob{L+B_\eps\subset X}$, which has at most countably many discontinuities. Sending $n\to\infty$ in the chain of inequalities
$$
\Prob{L+B_{\eps_k} \subset X_n}\leq \Prob{L\subset \Int X_n} =\Prob{\cl(X_n^c)\cap L=\emptyset} \leq \Prob{L\subset X_n},
$$
and using~\eqref{eq:22}, we conclude that
\begin{equation}\label{eq:thm_inclusion_proof3}
\Prob{L+B_{\eps_k} \subset X}\leq \liminf_{n\to\infty}\Prob{\cl(X_n^c)\cap L=\emptyset}\leq \limsup_{n\to\infty}\Prob{\cl(X_n^c)\cap L=\emptyset} \leq \Prob{L\subset X}.
\end{equation}
Since
$$
\Prob{L+B_{\eps_k} \subset X}\uparrow \Prob{L\subset \Int X} =\Prob{L\subset X}\quad\text{as}\quad k\to\infty,
$$
the desired convergence~\eqref{eq:thm_inclusion_proof1} follows upon sending $k\to\infty$ in~\eqref{eq:thm_inclusion_proof3}.
\end{proof}
If $F$ is an arbitrary closed set, then, in general, the convergence $X_n\dodn X$ does not imply the convergence of $X_n\cap F$ to $X\cap F$. The latter is equivalent to the convergence of the capacity functionals of $X_n\cap F$ on sets $L\in\sC^d$ such that
\begin{displaymath}
\Prob{(X\cap F)\cap L\neq\emptyset}=\Prob{(X\cap F)\cap \Int L\neq\emptyset}.
\end{displaymath}
At first glance, the aforementioned implication looks plausible, since the capacity functional of $X_n\cap F$ on $L$ is just the capacity functional of $X_n$ on $F\cap L$. However, from the convergence $X_n\dodn X$ we can only deduce the convergence of their capacity functionals on sets $F\cap L$ under the condition that
\begin{displaymath}
\Prob{X\cap F\cap L\neq\emptyset}= \Prob{X\cap \Int(F\cap L)\neq\emptyset}.
\end{displaymath}
The latter condition is too restrictive if $F$ has empty interior. The following result relies on an alternative argument in order to establish convergence in distribution of random sets intersected with a deterministic convex closed set containing the origin.
\begin{lemma}
\label{lemma:intersection}
Let $X$ and $X_n$, $n\in\NN$, be random convex closed sets.
Assume that $\Prob{0\in X}=\Prob{0\in \Int X}>0$ and $\Prob{0\in X_n}=\Prob{0\in \Int X_n}>0$ for all sufficiently large $n$. Assume that \eqref{eq:22} holds for all $L\in\sK^d$ satisfying \eqref{eq:2}. Let $F$ be a convex closed set which contains the origin. Then
\begin{equation}
\label{eq:10}
\Prob{X_n\cap F\cap L\neq\emptyset, 0\in X_n}
\to \Prob{X\cap F\cap L\neq\emptyset, 0\in X}\quad \text{as}\quad n\to\infty
\end{equation}
for each compact set $L$ in $\R^d$ such that
\begin{equation}\label{eq:108}
\Prob{(X\cap F)\cap L\neq\emptyset, 0\in X}
=\Prob{(X\cap F)\cap \Int L\neq\emptyset, 0\in X}.
\end{equation}
\end{lemma}
\begin{proof}
Define the following auxiliary random closed sets
\begin{displaymath}
Y_n:=
\begin{cases}
X_n,&\text{if }0\in\Int X_n,\\
\varnothing,&\text{if }0\notin \Int X_n;
\end{cases}
\quad\text{and}\quad
Y:=
\begin{cases}
X,&\text{if }0\in\Int X,\\
\varnothing,&\text{if }0\notin \Int X.
\end{cases}
\end{displaymath}
By construction, the random closed sets $Y_n$ and $Y$ are almost surely regular closed. Let us show, with the help of Theorem~\ref{thm:inclusion}, that $Y_n\dodn Y$, as $n\to\infty$. Let $L$ be a nonempty compact set such that
\begin{equation}
\label{eq:20}
\Prob{L\subset Y}=\Prob{L\subset \Int Y}.
\end{equation}
The latter is equivalent to
\begin{displaymath}
\Prob{L\subset X,0\in\Int X}=\Prob{L\subset\Int X,0\in\Int X},
\end{displaymath}
and, since $\Prob{0\in X}=\Prob{0\in\Int X}$, to
\begin{displaymath}
\Prob{L\subset X,0\in X}=\Prob{L\subset\Int X,0\in\Int X}.
\end{displaymath}
Finally, by convexity of $X$ we see that~\eqref{eq:20} is the same as
\begin{displaymath}
\Prob{\conv(L\cup\{0\})\subset X}=\Prob{\conv(L\cup\{0\})\subset \Int X}.
\end{displaymath}
Thus, if a nonempty compact set $L$ satisfies~\eqref{eq:20}, then $\conv(L\cup\{0\})\in\sK^d$ satisfies \eqref{eq:2} and we can use~\eqref{eq:22} to conclude that
\begin{multline*}
\Prob{L\subset Y_n}
=\Prob{L\subset X_n,0\in X_n}=\Prob{\conv(L\cup\{0\})\subset X_n}\\
\to \Prob{\conv(L\cup\{0\})\subset X}= \Prob{L\subset X,0\in X}
=\Prob{L\subset Y} \quad\text{as}\quad n\to \infty.
\end{multline*}
Theorem~\ref{thm:inclusion} yields that $Y_n\dodn Y$, as $n\to\infty$.

Note that $Y$ is a.s.~either empty or contains $0$ in its interior. Thus, $Y\cap F$ is a.s.~either empty (and thus regular closed) or $Y$ contains $0$ in its interior. In both cases Lemma~\ref{lemma:int-det} is applicable, and by the continuous mapping theorem $Y_n\cap F \dodn Y\cap F$. The latter means that
\begin{displaymath}
\Prob{(Y_n\cap F)\cap L\neq\emptyset}\to \Prob{(Y\cap F)\cap L\neq\emptyset}
\quad \text{as}\quad n\to\infty
\end{displaymath}
for all $L$ such that
\begin{displaymath}
\Prob{(Y\cap F)\cap L\neq\emptyset}=\Prob{(Y\cap F)\cap \Int L\neq\emptyset}.
\end{displaymath}
By definition of $Y_n$ and $Y$, this is the same as \eqref{eq:10} for $L$ satisfying \eqref{eq:108}.
\end{proof}

The next result follows either from Lemma~\ref{lemma:intersection} or from Lemma~\ref{lemma:int-det} and the continuous mapping theorem.

\begin{corollary}
\label{cor:intersection}
Let $X$ and $X_n$, $n\in\NN$, be random convex closed sets whose interiors almost surely contain the origin. If $X_n\dodn X$, as $n\to\infty$, then $X_n\cap F\dodn X\cap F$, as $n\to\infty$, for each convex closed set $F$ which contains the origin.
\end{corollary}

The following result is used in the proof of Theorem~\ref{thm:main1} in order to establish convergence in distribution of (not necessarily convex) random closed sets by approximating them with convex ones.

\begin{lemma}
\label{lemma:approx}
Let $(X_n)_{n\in\NN}$ be a sequence of random closed sets in $\R^d$.
Assume that $Y_{n,m}^-\subset X_n\subset Y_{n,m}^+$ a.s.\ for all $n,m\in\NN$, where $(Y_{n,m}^-)_{n\in\NN}$ and $(Y_{n,m}^+)_{n\in\NN}$ are sequences of random closed sets. Further, assume that, for each $m\in\NN$:
\begin{itemize}
\item[(i)] the random closed set $Y_{n,m}^+$ converges in distribution to a random closed set $Y_m^+$, as $n\to\infty$;
\item[(ii)] there exists a random closed set $Y_m^-$ such that
\begin{equation}
\label{eq:11}
\Prob{Y_{n,m}^-\cap L\neq\emptyset,0\in Y_{n,m}^-}
\to \Prob{Y_{m}^-\cap L\neq\emptyset,0\in Y_{m}^-}
\quad \text{as}\quad n\to\infty
\end{equation}
for all $L\in\sC^d$ which are continuity sets for $Y_m^-$.
\end{itemize}
Further assume that $\Prob{0\in Y_m^-}\to 1$ and that $Y_m^+\downarrow Z$ and $Y_m^-\uparrow Z$ a.s.\ in the Fell topology, as $m\to\infty$, for some random closed set $Z$. Then $X_n\dodn Z$, as $n\to\infty$.
\end{lemma}
\begin{proof}
Since the family of regular closed compact sets constitutes a convergence determining class, see Corollary~1.7.14 in~\cite{mo1}, it suffices to check that the capacity functional of $X_n$ converges to the capacity functional of $Z$ on all compact sets $L$ which are regular closed and are continuity sets for $Z$. There exist sequences $(L_k^-)$ and $(L_k^+)$ of compact sets, which are continuity sets for $Z$ and for all $(Y_m^-)$ and $(Y_m^+)$, respectively, and such that $L_k^-\uparrow \Int L$ and $L_k^+\downarrow L$ as $k\to\infty$. These sets can be chosen from the families of inner and outer parallel sets to $L$, see p.~148 in~\cite{schn2}. Then
\begin{align*}
\Prob{Y_{n,m}^-\cap L_k^-\neq\emptyset,0\in Y_{n,m}^-}
\leq \Prob{X_n\cap L\neq\emptyset}
\leq \Prob{Y_{n,m}^+\cap L_k^+\neq\emptyset}.
\end{align*}
Passing to the limit, as $n\to\infty$, yields that
\begin{multline*}
\Prob{Y_m^-\cap L_k^-\neq\emptyset,0\in Y_{m}^-}
\leq \liminf_{n\to\infty} \Prob{X_n\cap L\neq\emptyset}\\
\leq \limsup_{n\to\infty} \Prob{X_n\cap L\neq\emptyset}
\leq \Prob{Y_m^+\cap L_k^+\neq\emptyset}.
\end{multline*}
Note that the a.s.\ convergence of $Y_m^\pm$ to $Z$ implies the convergence in distribution. Sending $m\to\infty$ and using that $\Prob{0\in Y_m^-}\to 1$, we conclude that
\begin{displaymath}
\Prob{Z\cap L_k^-\neq\emptyset}
\leq \liminf_{n\to\infty} \Prob{X_n\cap L\neq\emptyset}
\leq \limsup_{n\to\infty} \Prob{X_n\cap L\neq\emptyset}
\leq \Prob{Z\cap L_k^+\neq\emptyset}.
\end{displaymath}
Finally, sending $k\to\infty$ gives
\begin{displaymath}
\Prob{Z\cap \Int L \neq\emptyset}
\leq \liminf_{n\to\infty} \Prob{X_n\cap L\neq\emptyset}
\leq \limsup_{n\to\infty} \Prob{X_n\cap L\neq\emptyset}
\leq \Prob{Z\cap L\neq\emptyset},
\end{displaymath}
which completes the proof since $\Prob{Z\cap L\neq\emptyset}=\Prob{Z\cap \Int L\neq\emptyset}$.
\end{proof}

\begin{proposition}
\label{prop:marked-sets}
Let $\Psi_n:=\{(X_1,\xi_1),\dots,(X_n,\xi_n)\}$, $n\in\NN$, be a sequence of binomial point processes on $(\sK^d_0\setminus\{0\})\times\R^d$ obtained by taking independent copies of a pair $(X,\xi)$, where $X$ is a random convex closed set and $\xi$ is a random vector in $\R^d$, which may depend on $X$. Furthermore, let $\Psi:=\{(Y_i,y_i),i\geq1\}$ be a locally finite Poisson process on $(\sK^d_0\setminus\{0\})\times\R^d$ with the intensity measure $\mu$. Then $n^{-1}\Psi_n:=\{(n^{-1}X_i,\xi_i):i=1,\dots,n\}$ converges in distribution to $\Psi$ if and only if the following convergence takes place:
\begin{equation}
\label{eq:10v}
n\Prob{n^{-1}X\not\subset L,\xi\in B}=n\Prob{(n^{-1}X,\xi)\in \sA_L^c \times B}
\to\mu(\sA_L^c \times B)
\quad \text{as}\; n\to\infty,
\end{equation}
for every $\mu$-continuous set $\sA_L^c \times B\subset (\sK^d_0\setminus\{0\})\times \R^d$, where
\begin{displaymath}
\sA_L:=\{A\in\sK^d_0\setminus\{0\}: A\subset L\},
\end{displaymath}
and $L\in\sK^d_0\setminus\{0\}$ is an arbitrary convex compact set which contains the origin and is distinct from $\{0\}$.
\end{proposition}
\begin{proof}
By a simple version of the Grigelionis theorem for binomial point processes (see, e.g., Proposition~11.1.IX in~\cite{dal:ver08} or Corollary~4.25 in~\cite{kalle17} or Theorem~4.2.5 in~\cite{mo1}), $n^{-1}\Psi_n\dodn\Psi$ if and only if
\begin{equation}
\label{eq:10v1}
\mu_n(\sA\times B):= n\Prob{(n^{-1}X,\xi)\in \sA\times B}
\to\mu(\sA\times B)
\quad \text{as}\;n\to\infty,
\end{equation}
for all Borel $\sA$ in $\sK^d_0\setminus\{0\}$ and Borel $B$ in $\R^d$ such that $\sA\times B$ is a continuity set for $\mu$. Thus, we need to show that the convergence \eqref{eq:10v1} follows from \eqref{eq:10v}. In other words, we need to show that the family of sets of the form $\sA_L^c \times B$ is a convergence determining class. Fix some $\eps>0$ and let $L_0:=B_\eps\subset \R^d$ be the closed centred ball of radius $\eps$. It is always possible to ensure that $\sA_{L_0}^c\times B$ is a continuity set for $\mu$. For each Borel $\sA$ in $\sK^d_0\setminus\{0\}$, put
\begin{equation*}
\label{eq:tildemu}
\tilde{\mu}_n(\sA\times B)
:=\frac{\mu_n\big((\sA\cap\sA_{L_0}^c)\times B\big)}
{\mu_n\big(\sA_{L_0}^c\times \R^d\big)},\quad n\geq 1,
\end{equation*}
and define $\tilde{\mu}$ by the same transformation applied to $\mu$. Then $\tilde{\mu}_n$ is a probability measure on $(\sK^d_0\setminus\{0\})\times\R^d$, and hence also on $\sK^d\times\R^d$. Thus, $\tilde{\mu}_n$ defines the distribution of a random convex closed set $Z_n\times\{\zeta_n\}$ in $\sK^d\times\R^d$, which we can regard as an element of $\sK^{d+1}$. Assume that we have shown that $\tilde{\mu}_n$ converges in distribution to $\tilde{\mu}$, as $n\to\infty$. Then, \eqref{eq:10v} implies \eqref{eq:10v1}. Indeed, it obviously suffices to assume in \eqref{eq:10v1} that $\sA$ is closed in the Hausdorff metric and is such that $\sA\times B$ is a $\tilde{\mu}$-continuous set. Then there exists an $\eps>0$ such that each $A\in\sA$ is not a subset of $L_0=B_\eps$.
Then $\sA\cap\sA_{L_0}^c=\sA$, so that
\begin{displaymath}
\frac{\mu_n(\sA\times B)}{\mu_n(\sA_{L_0}^c\times \R^d)}
=\tilde{\mu}_n(\sA\times B)
\quad\to\quad
\tilde{\mu}(\sA\times B)
=\frac{\mu(\sA\times B)}{\mu(\sA_{L_0}^c\times\R^d)}
\quad \text{as}\; n\to\infty.
\end{displaymath}
Since the denominators also converge in view of \eqref{eq:10v}, we obtain \eqref{eq:10v1}.

In order to check that $\tilde{\mu}_n$ converges in distribution to $\tilde{\mu}$, we shall employ Theorem~1.8.14 from \cite{mo1}. According to the cited theorem, $\tilde{\mu}_n$ converges in distribution to $\tilde{\mu}$ if and only if $\tilde{\mu}_n(\sA_L\times B)\to\tilde{\mu}(\sA_L\times B)$ for all $L\in\sK^d$ and convex compact $B$ in $\R^d$ such that $\sA_L\times B$ is a continuity set for $\tilde{\mu}$, and $\tilde{\mu}(\sA_L\times B)\uparrow 1$ if $L$ and $B$ increase to the whole space. The latter is clearly the case, since $\Psi$ has a locally finite intensity measure, hence, at most a finite number of its points intersect the complement of the centred ball $B_r$ in $\R^{d+1}$ for any $r>0$. For the former, note that, for every $L\in\sK^d\setminus \{0\}$,
\begin{align*}
\tilde{\mu}_n(\sA_L\times B)
&=\frac{\mu_n\big(\sA_{L_0}^c\times B\big)
-\mu_n\big((\sA^c_L\cap \sA_{L_0}^c)\times B\big)}{\mu_n\big(\sA_{L_0}^c\times \R^d\big)}
=\frac{\mu_n\big((\sA^c_L\cup \sA_{L_0}^c)\times B\big)-\mu_n\big(\sA^c_L\times B\big)}
{\mu_n\big(\sA_{L_0}^c\times \R^d\big)}\\
&=\frac{\mu_n\big((\sA_L\cap \sA_{L_0})^c\times B\big)-\mu_n\big(\sA^c_L\times B\big)}{\mu_n\big(\sA_{L_0}^c\times \R^d\big)}
=\frac{\mu_n\big(\sA^c_{L\cap L_0}\times B\big)-\mu_n\big(\sA^c_L\times B\big)}
{\mu_n\big(\sA_{L_0}^c\times \R^d\big)}\\
&\to \frac{\mu\big(\sA^c_{L\cap L_0}\times B\big)-\mu\big(\sA^c_L\times B\big)}
{\mu\big(\sA_{L_0}^c\times \R^d\big)}
=\tilde{\mu}(\sA_L\times B)\quad \text{as}\; n\to\infty,
\end{align*}
where the convergence in the last line follows from~\eqref{eq:10v}. The proof is complete.
\end{proof} \section*{Acknowledgement} AM was supported by the National Research Foundation of Ukraine (project 2020.02/0014 ``Asymptotic regimes of perturbed random walks: on the edge of modern and classical probability''). IM was supported by the Swiss National Science Foundation, Grant No. IZ73Z0\_152292. \let\oldaddcontentsline\addcontentsline \renewcommand{\addcontentsline}[3]{} \bibliographystyle{abbrv} \bibliography{./groups} \let\addcontentsline\oldaddcontentsline \end{document}
TITLE: Prove that any number greater than one can be uniquely written in arbitrary base.
QUESTION [0 upvotes]: Suppose $n$ and $a$ are positive integers greater than one. Prove that $n$ can be uniquely written in the form: $$n=c_0+c_1a+c_2a^2+...+c_ma^m$$ where the $c_i$ are integers, $0\leq c_i\leq a-1$ and $0\leq i\leq m$ (use induction on $n$).
REPLY [1 votes]: Let's prove this for any $a \geq 2$ and $n \geq 0$; we need to include $0$ and $1$ for the induction to work.
First, as base cases, consider the integers less than $a$. Expressing $n$ in the desired form forces all of the $c_i$ for $i \geq 1$ to be $0$, since otherwise the expression grows too large. Then there is a unique integer $0 \leq c_0=n < a$ which makes the expression valid, as required.
Now with these cases covered, take as induction hypothesis that the claim is true for all $n<a^k$. Let $a^k\leq n < a^{k+1}$. Then there is a unique integer $c_k$ such that $c_k a^k \leq n < (c_k +1)a^k$, and we must have $1\leq c_k <a$, since otherwise we have $n<a^k$ or $n \geq a^{k+1}$, which contradicts our assumption on $n$. Further, $0 \leq n - c_k a^k < a^k$, so this value is by hypothesis uniquely expressible as a sum of the given form:
$$n-c_k a^k = c_0+c_1a+c_2a^2+...+c_{k-1}a^{k-1}.$$
Therefore we obtain the unique expression
$$n = c_0+c_1a+c_2a^2+...+c_{k-1}a^{k-1}+c_k a^k,$$
which completes the induction.
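The induction also yields the familiar algorithm for computing the digits: peel off $c_0 = n \bmod a$ and recurse on $(n-c_0)/a$. A quick illustrative sketch in Python (function name is mine, not part of the question):

```python
def digits(n, a):
    # Return [c_0, c_1, ..., c_m] with n = sum(c_i * a**i) and 0 <= c_i < a.
    assert n >= 0 and a >= 2
    cs = []
    while n > 0:
        cs.append(n % a)  # c_0 is the unique residue of n modulo a
        n //= a           # pass (n - c_0) / a to the next step of the induction
    return cs

print(digits(2023, 10))  # prints [3, 2, 0, 2], i.e. 2023 = 3 + 2*10 + 0*100 + 2*1000
```

Uniqueness corresponds to the fact that each remainder in this loop is forced.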
GCD of two numbers in Python

In this program, we will learn to find the GCD of two numbers, that is, the Greatest Common Divisor. The highest common factor (HCF) or Greatest Common Divisor (GCD) of two given numbers is the largest positive integer that divides both numbers exactly.

For example, for the two numbers 12 and 14:

Output: GCD is 2

Algorithm:
- Define a function named gcd(a,b)
- Initialize small = 0 and gd = 0
- An if condition checks whether a is greater than b.
- If true, small = b; else, small = a.
- Using a for loop with range(1, small+1), check if((a % i == 0) and (b % i == 0))
- If true, set gd = i; the returned gd value is stored in the variable t
- Take a and b as input from the user.
- Call the function gcd(a,b) with a and b passed as parameters.
- Print the value in the variable t.
- Exit

Code:

def gcd(a, b):
    small = 0
    gd = 0
    if a > b:
        small = b
    else:
        small = a
    for i in range(1, small + 1):
        if (a % i == 0) and (b % i == 0):
            gd = i
    return gd

a = int(input("Enter the first number: "))
b = int(input("Enter second number: "))
t = gcd(a, b)
print("GCD is:", t)

Output:

Enter the first number: 60
Enter second number: 48
GCD is: 12
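The loop above tries every candidate up to the smaller number. A much faster alternative worth knowing (shown here as an optional variation, not the method of the exercise) is Euclid's algorithm, which repeatedly replaces the pair (a, b) by (b, a mod b):

```python
def gcd_euclid(a, b):
    # Euclid's algorithm: gcd(a, b) = gcd(b, a % b), until the remainder is 0.
    while b != 0:
        a, b = b, a % b
    return a

print(gcd_euclid(60, 48))  # prints 12
print(gcd_euclid(12, 14))  # prints 2
```

For 60 and 48 the pairs are (60, 48) -> (48, 12) -> (12, 0), so only two divisions are needed instead of 48 loop iterations.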
TITLE: Most interesting mathematics mistake?
QUESTION [226 upvotes]: Some mistakes in mathematics made by extremely smart and famous people can eventually lead to interesting developments and theorems, e.g. Poincaré's 3d sphere characterization or the search to prove that Euclid's parallel axiom is really unnecessary. But I also think there are less famous mistakes worth hearing about. So, here's a question: What's the most interesting mathematics mistake that you know of?
EDIT: There is a similar question which has been closed as a duplicate to this one, but which also garnered some new answers. It can be found here: Failures that lead eventually to new mathematics
REPLY [2 votes]: A posthumous book on number theory by Dirichlet appeared in 1859. It stated that Euclid's proof of the infinitude of primes was by contradiction, starting with an assumption that only finitely many primes exist and then deducing a contradiction. Euclid's actual proof, recast in modern language, was that if $S$ is any finite set of primes (with no assumption that it contains the smallest $n$ primes nor that it contains all primes), then the prime factors of $1+\prod S$ are not members of $S;$ hence there are always more primes than what one already has. For example, if $S=\{5,7\}$ then $1+\prod S=36=2\times2\times3\times3$ and the new primes are $2$ and $3.$ This requires no assumption that $S$ contains all primes. Only the assumption that $S$ contains all primes could justify the conclusion that $1+\prod S$ has no prime factors, and so "is therefore itself prime", to quote G. H. Hardy (no relation to me, as far as I know) on pages 122–123 of the 1908 edition of A Course of Pure Mathematics (but not in the posthumous 10th edition). The erroneous belief that $1+\prod S$ is prime whenever $S$ is the set of the smallest $n$ primes for some $n$ has been held by some conscientious persons.
The smallest (but not the only) counterexample is $1+(2\times3\times5\times7\times11\times13) = 59\times509.$ Catherine Woodgold and I examined in some detail the error of thinking that this proof is by contradiction in "Prime Simplicity", Mathematical Intelligencer, volume 31, number 4, Fall 2009, pages 44–52. REPLY [0 votes]: The question above says: Some mistakes in mathematics made by extremely smart and famous people can eventually lead to interesting developments and theorems, This will be a very minor example, included because the person who made the mistake is very smart and very famous, but not because it led to anything. Take three regular pentagons sharing a vertex and three edges, as in a dodecahedron. Set it down on a horizontal mirror, so that the three edges opposite the common vertex make contact with their reflections. These three pentagons and their mirror-images are (but are they??) six faces of a polyhedron that also has three faces that are rhombuses or rhombi or rhomboi or whatever they're called. So Donald Knuth once thought, according to what he said in a seminar that I attended. But the four edges of the putative rhomboi are not coplanar, so instead of those three faces you have six triangular faces.
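The arithmetic in the first reply is easy to check mechanically; here is a small sketch (the trial-division helper is my own, any language would do):

```python
def prime_factors(n):
    # Trial-division factorization; fine for small n.
    fs, p = [], 2
    while p * p <= n:
        while n % p == 0:
            fs.append(p)
            n //= p
        p += 1
    if n > 1:
        fs.append(n)
    return fs

# Euclid's construction: the new prime factors never lie in the starting set S.
S = [5, 7]
print(prime_factors(1 + 5 * 7))              # prints [2, 2, 3, 3], disjoint from S
# ...and 1 plus the product of the first six primes is NOT prime:
print(prime_factors(1 + 2 * 3 * 5 * 7 * 11 * 13))  # prints [59, 509]
```

This confirms both the $S=\{5,7\}$ example and the counterexample $30031 = 59\times 509$.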
Global Smart Doorbell Camera Market 2019 - SkyBell Technologies, dbell, August Home, Ring

The global Smart Doorbell Camera market report presents a profound evaluation of basic elements of the Smart Doorbell Camera industry, such as production scale and profit generation. Market driving factors, newly adopted technologies, and the latest business methodologies are discussed in this report. The report also forecasts the potential of the market and reviews a thorough analysis of vital segments and regional markets.

Get Sample of Global Smart Doorbell Camera Market Research Report at :

The report includes essential detailing of vital facets of the global Smart Doorbell Camera market. Prominent players covered in the Smart Doorbell Camera market are:
August Home
Ring
SkyBell Technologies
dbell
Ding Labs
EquesHome
Smanos
Vivint
Zmodo

It deeply elaborates numerous salient features of the market, including the rivalry environment, leading region-wise growth rates, prominent players in the Smart Doorbell Camera industry, changing market dynamics, the industrial environment, as well as segmentation based on types of service/product and applications. Besides that, it discusses market size in terms of value and volume, along with figures that reflect globally generated revenue and gross sales.

Browse Global Smart Doorbell Camera Market Report at :

Most widely used downstream fields of the Smart Doorbell Camera market covered in this report are:
Residential
Commercial

If you have any special requirements for the Smart Doorbell Camera report, we will be happy to include them free of cost to enrich the final study.
- Triangulate with your Own Data
- Gain a Deeper Dive on a Specific Application, Geography, Customer or Competitor
- Get Data as per your Format and Definition
- Any level of Personalization
\begin{document}
\enlargethispage{3cm}
\thispagestyle{empty}
\begin{center}
{\bf FROM KOSZUL DUALITY TO POINCAR\'E DUALITY}
\end{center}
\vspace{0.3cm}
\begin{center}
Michel DUBOIS-VIOLETTE
\footnote{Laboratoire de Physique Th\'eorique, CNRS UMR 8627\\
Universit\'e Paris Sud 11, B\^atiment 210\\
F-91 405 Orsay Cedex\\
Michel.Dubois-Violette$@$u-psud.fr}\\
\end{center}
\vspace{0.5cm}
\begin{center}
{\sl Dedicated to Raymond Stora}
\end{center}
\vspace{0.5cm}
\begin{abstract}
We discuss the notion of Poincaré duality for graded algebras and its connections with the Koszul duality for quadratic Koszul algebras. The relevance of the Poincaré duality is pointed out for the existence of twisted potentials associated to Koszul algebras as well as for the extraction of a good generalization of Lie algebras among the quadratic-linear algebras.
\end{abstract}
\vfill
\noindent LPT-ORSAY 11-78
\baselineskip=0.7cm
\section*{Introduction}
Koszul complexes and Koszul resolutions play a major role in several places in theoretical physics. In many cases, this is connected with the BRST methods introduced in \cite{bec-rou-sto:1974}, \cite{bec-rou-sto:1975}, \cite{bec-rou-sto:1976a}, \cite{bec-rou-sto:1976b}, \cite{tyu:1975}. For instance, they enter into the classical BRST approach to constrained systems, in particular to gauge theory, and they are involved in the renormalization process of quantum field theory, see e.g. \cite{mcm:1984}, \cite{mdv:1987c}, \cite{fis-hen-sta-tei:1989}, \cite{hen-tei:1992}, \cite{sto:2005}.\\
Our aim here is to describe elements of the formulation of the Koszul duality and of the Poincaré duality for quadratic algebras and for non-homogeneous quadratic algebras, and to draw some important consequences of the Poincaré duality property for Koszul algebras.\\
Throughout this paper $\mathbb K$ denotes a (commutative) field and all vector spaces, algebras, etc. are over $\mathbb K$.
We use everywhere the Einstein summation convention over the repeated up-down indices. \section{Graded algebras and Poincaré duality} \subsection{Graded algebras} The class of graded algebras involved here is the class of connected graded algebras $\cala$ of the form $\cala=T(E)/I$ where $E$ is a finite-dimensional vector space and where $I$ is a finitely generated graded ideal of the tensor algebra $T(E)$ such that $I=\oplus_{n\geq 2}I_n\subset \oplus_{n\geq 2}E^{\otimes^n}$. This class together with the homomorphisms of graded-algebras (of degree 0) as morphisms define the category $\mathbf{GrAlg}$.\\ For such an algebra $\cala=T(E)/I\in \mathbf{GrAlg}$ choosing a basis $(x^\lambda)_{\lambda\in\{1,\dots,d\}}$ of $E$ and a system of homogeneous independent generators $(f_\alpha)_{\alpha\in \{1,\dots,r\}}$ of $I$ with $(f_\alpha)\in E^{\otimes^{N_\alpha}}$ and $N_\alpha\geq 2$ for $\alpha\in \{1,\dots,r\}$, one can also write \[ \cala=\mathbb K\langle x^1,\dots, x^d\rangle/[f_1,\dots, f_r] \] where $[f_1,\dots,f_r]$ is the ideal $I$ generated by the $f_\alpha$. Define $M_{\alpha\lambda}\in E^{\otimes^{N_\alpha-1}}$ by setting $f_\alpha=M_{\alpha\lambda}\otimes x^\lambda\in E^{\otimes^{N_\alpha}}$. Then the presentation of $\cala$ by generators and relations is equivalent to the exactness of the sequence of left $\cala$-modules \begin{equation} \cala^r \stackrel{M}{\rightarrow} \cala^d \stackrel{x}{\rightarrow} \cala \stackrel{\varepsilon}{\rightarrow} \mathbb K \rightarrow 0 \label{pres} \end{equation} where $M$ means right multiplication by the matrix $(M_{\alpha\lambda})\in M_{d,r}(\cala)$, $x$ means right multiplication by the column $(x^\lambda)$ and where $\varepsilon$ is the projection onto $\cala_0=\mathbb K$, \cite{art-sch:1987}. 
In more intrinsic notation, the exact sequence (\ref{pres}) reads
\begin{equation}
\cala\otimes R\rightarrow \cala\otimes E \stackrel{m}{\rightarrow} \cala \stackrel{\varepsilon}{\rightarrow} \mathbb K \rightarrow 0
\label{Ipres}
\end{equation}
where $R$ is the graded subspace of $T(E)$ spanned by the $f_\alpha$ $(\alpha\in\{1,\dots,r\})$, $m$ is the product in $\cala$ (recall that $E=\cala_1$) and where the first arrow is as in (\ref{pres}).\\
When $R$ is homogeneous of degree $N$ ($N\geq 2$), i.e. $R\subset E^{\otimes^N}$, then $\cala$ is said to be an $N$-{\sl homogeneous algebra}: for $N=2$ one speaks of a quadratic algebra, for $N=3$ one speaks of a cubic algebra, etc.
\subsection{Global dimension}
The exact sequence (\ref{Ipres}) of presentation of $\cala$ can be extended to a minimal projective resolution of the trivial left module $\mathbb K$, i.e. to an exact sequence of left modules
\[
\cdots \rightarrow M_n \rightarrow \cdots \rightarrow M_2 \rightarrow M_1 \rightarrow M_0 \rightarrow \mathbb K \rightarrow 0
\]
where the $M_n$ are projective, i.e. in this graded case free left modules \cite{car:1958}, and which is minimal; one has $M_0=\cala$, $M_1=\cala\otimes E$, $M_2=\cala\otimes R$ and more generally here $M_n =\cala\otimes E_n$ where the $E_n$ are finite-dimensional vector spaces. If such a minimal resolution has finite length $D<\infty$, i.e. reads
\begin{equation}
0\rightarrow \cala\otimes E_D \rightarrow \cdots \rightarrow \cala\otimes E \rightarrow \cala\rightarrow \mathbb K\rightarrow 0
\label{Mres}
\end{equation}
with $E_D\not=0$, then $D$ is an invariant called the {\sl left projective dimension} of $\mathbb K$, and it turns out that $D$, which coincides with the right projective dimension of $\mathbb K$, is also the sup of the lengths of the minimal projective resolutions of the left and of the right $\cala$-modules \cite{car:1958}, which is called the {\sl global dimension} of $\cala$.
Furthermore, it was recently shown \cite{ber:2005} that this global dimension $D$ also coincides with the Hochschild dimension in homology as well as in cohomology. Thus for an algebra $\cala\in \mathbf{GrAlg}$, there is a unique notion of dimension from a homological point of view, which is its global dimension $g\ell\dim(\cala)=D$ whenever it is finite.
\subsection{Poincaré duality versus Gorenstein property}
Let $\cala\in \mathbf{GrAlg}$ be of finite global dimension $D$. Then one has a minimal free resolution
\[
0\rightarrow M_D\rightarrow \cdots\rightarrow M_0\rightarrow \mathbb K \rightarrow 0
\]
with $M_n=\cala\otimes E_n$, $\dim(E_n)<\infty$ and $E_2\simeq R$, $E_1\simeq E$ and $E_0\simeq \mathbb K$. By applying the functor $\Hom_\cala(\bullet, \cala)$ to the chain complex of free left $\cala$-modules
\begin{equation}
0\rightarrow M_D\rightarrow \cdots \rightarrow M_0\rightarrow 0
\label{M}
\end{equation}
one obtains the cochain complex
\begin{equation}
0\rightarrow M'_0\rightarrow\cdots \rightarrow M'_D\rightarrow 0
\label{M'}
\end{equation}
of free right $\cala$-modules with $M'_n\simeq E^\ast_n \otimes \cala$ where for any vector space $F$, one denotes by $F^\ast$ its dual vector space.\\
The algebra $\cala\in \mathbf{GrAlg}$ is said to be {\sl Gorenstein} whenever one has
\[
\left\{
\begin{array}{l}
H^n(M')=0, \> \> \> \text{for}\> \> \> n\not= D\\
H^D(M')=\mathbb K
\end{array}
\right.
\]
which reads $\Ext^n_\cala(\mathbb K, \cala)=\delta^{nD}\mathbb K$ by definition $(\delta^{nD}=0$ for $n\not=D$ and $\delta^{DD}=1$). This means that one has
\[
E^\ast_{D-n}\simeq E_n
\]
which is our version of the Poincaré duality.\\
Notice that one has
\[
\Ext^n_\cala(\mathbb K, \mathbb K)\simeq E^\ast_n
\]
which follows easily from the definitions.
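To illustrate the Gorenstein property on the simplest nontrivial example, take for $\cala$ the polynomial algebra $\mathbb K[x^1,x^2]$, that is the case $\cala=SE$ with $\dim(E)=2$ of the examples below. The minimal free resolution of the trivial module then reads
\[
0\rightarrow \cala \rightarrow \cala^2 \stackrel{x}{\rightarrow} \cala \stackrel{\varepsilon}{\rightarrow} \mathbb K\rightarrow 0
\]
where the first nontrivial arrow is the right multiplication by the row $(-x^2,x^1)$, so that $E_0\simeq \mathbb K$, $E_1\simeq E$ and $E_2\simeq \mathbb K$, whence $D=2$ and one verifies directly that $E^\ast_{D-n}\simeq E_n$ for $0\leq n\leq 2$, i.e. $\cala$ is Gorenstein in the above sense.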
\section{Quadratic algebras}
\subsection{Koszul duality and the Koszul complex}
A (homogeneous) {\sl quadratic algebra} $\cala$ is a graded algebra of the form
\[
\cala=A(E,R)=T(E)/[R]
\]
where $E$ is a finite-dimensional vector space and where $R\subset E\otimes E$ is a linear subspace of $E\otimes E$ ($[R]$ denotes as before the ideal of $T(E)$ generated by $R$), see for instance \cite{man:1988}, \cite{pol-pos:2005}.\\
Given a quadratic algebra $\cala=A(E,R)$, one defines another quadratic algebra $\cala^!$ called the {\sl Koszul dual algebra} of $\cala$ by setting
\[
\cala^!=A(E^\ast,R^\perp)
\]
with $R^\perp$ defined by
\[
R^\perp = \{\omega\in E^\ast \otimes E^\ast \vert \omega(r)=0,\>\> \> \forall r\in R\}
\]
where we have made the identification $E^\ast\otimes E^\ast=(E\otimes E)^\ast$ which is allowed since one has $\dim(E)<\infty$.\\
One has $\cala^!=\oplus_{n\geq 0} \cala^!_n$ and the subspace $\cala^!_n$ of elements of degree $n$ of $\cala^!$ is given by
\[
\cala^!_n=E^{\ast\otimes^n}/\sum_{0\leq k\leq n-2} E^{\ast\otimes^k}\otimes R^\perp \otimes E^{\ast\otimes^{n-k-2}}
\]
which is equivalent for its dual to
\begin{equation}
\cala^{!\ast}_n=\cap_{0\leq k\leq n-2} E^{\otimes^k}\otimes R\otimes E^{\otimes^{n-k-2}}
\label{ddKn}
\end{equation}
for any $n\in \mathbb N$. The {\sl Koszul complex} of $\cala$ is then defined to be the chain complex $K(\cala)$ of free left $\cala$-modules given by
\begin{equation}
\cdots \stackrel{d}{\rightarrow} \cala\otimes \cala^{!\ast}_{n+1}\stackrel{d}{\rightarrow} \cala\otimes \cala^{!\ast}_n \stackrel{d}{\rightarrow}\cdots
\label{Kc}
\end{equation}
where $d$ is induced by the left $\cala$-module homomorphisms
\[
d:\cala\otimes E^{\otimes^{n+1}}\rightarrow \cala\otimes E^{\otimes^n}
\]
given by setting
\[
d(a\otimes (e_0\otimes e_1\otimes \cdots \otimes e_n))=ae_0 \otimes (e_1\otimes \cdots \otimes e_n)
\]
for $n\in \mathbb N$ and $d(a)=0$ for $a\in \cala$.
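As a simple illustration, consider the quantum plane $\cala=\mathbb K\langle x^1,x^2\rangle/[x^1x^2-qx^2x^1]$ with $q\in \mathbb K$, $q\not=0$, i.e. $R=\mathbb K(x^1\otimes x^2-q\,x^2\otimes x^1)$. A direct computation shows that $R^\perp$ is spanned by
\[
x^\ast_1\otimes x^\ast_1,\quad x^\ast_2\otimes x^\ast_2,\quad q\,x^\ast_1\otimes x^\ast_2+x^\ast_2\otimes x^\ast_1
\]
where $(x^\ast_\lambda)$ denotes the dual basis, so that $\cala^!$ is the quantum exterior algebra with relations $(x^\ast_1)^2=0$, $(x^\ast_2)^2=0$ and $x^\ast_2x^\ast_1=-q\,x^\ast_1x^\ast_2$.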
\subsection{Koszul quadratic algebras}
A quadratic algebra $\cala$ is said to be {\sl Koszul} whenever its Koszul complex $K(\cala)$ is acyclic in positive degrees, i.e. iff one has $H_n(K(\cala))=0$ for $n\geq 1$. If $\cala$ is Koszul, one then has $H_0(K(\cala))=\mathbb K$, so that the Koszul complex gives a free resolution
\[
K(\cala)\rightarrow \mathbb K\rightarrow 0
\]
of the trivial left module $\mathbb K$ which is in fact a minimal projective resolution of $\mathbb K$, \cite{pri:1970}. Thus, if $\cala$ is Koszul of finite global dimension $D$, one has $K_n(\cala)=0$ for $n>D$ and $K_D(\cala)\not=0$.\\
Let $\cala$ be a quadratic algebra and let us apply the functor $\Hom_\cala(\bullet, \cala)$ to the Koszul complex which is a chain complex of left $\cala$-modules. One then obtains the cochain complex $L(\cala)$ of right $\cala$-modules given by
\begin{equation}
\cdots \stackrel{\delta}{\rightarrow} \cala^!_n\otimes \cala\stackrel{\delta}{\rightarrow} \cala^!_{n+1}\otimes \cala\stackrel{\delta}{\rightarrow}\cdots
\label{Lc}
\end{equation}
where $\delta$ is the left multiplication by $x_\lambda^\ast \otimes x^\lambda$ for $(x^\lambda)$ a basis of $E=\cala_1$ with dual basis $(x^\ast_\lambda)$ of $E^\ast=\cala^!_1$.\\
Assume now that $\cala$ is a Koszul algebra of finite global dimension $D$. Then it follows from above that $\cala$ is Gorenstein if and only if one has
\[
H^n(L(\cala))=\delta^{nD}\mathbb K
\]
for the cohomology of $L(\cala)$. Thus for Koszul algebras of finite global dimension, the Poincaré duality property is controlled by the cohomology of $L(\cala)$.
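Let us also recall a useful numerical consequence of Koszulity: taking the Euler characteristic of the (exact) Koszul resolution in each degree yields the relation
\[
P_\cala(t)\,P_{\cala^!}(-t)=1
\]
between the Poincaré series $P_\cala(t)=\sum_{n\geq 0}\dim(\cala_n)t^n$ of $\cala$ and of $\cala^!$, which provides a handy necessary condition for a quadratic algebra to be Koszul.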
\subsection{Twisted potentials for Koszul-Gorenstein algebras}
Let $V$ be a vector space and let $n\geq 1$ be a positive integer. Then a\linebreak[4] $(n+1)$-linear form $w$ on $V$ is said to be {\sl preregular} \cite{mdv:2005}, \cite{mdv:2007} iff it satisfies the following conditions (i) and (ii).\\
(i) If $X\in V$ is such that $w(X,X_1,\dots, X_n)=0$ for any $X_1,\dots,X_n\in V$, then $X=0$.\\
(ii) There is an element $Q_w\in GL(V)$ such that one has
\[
w(X_0,\dots,X_{n-1},X_n)=w(Q_wX_n,X_0,\dots,X_{n-1})
\]
for any $X_0,\dots, X_n\in V$.\\
It follows from (i) that $Q_w$ as in (ii) is unique. Property (i) will be referred to as 1-site nondegeneracy while (ii) will be referred to as twisted cyclicity.\\
Let $E$ be a finite-dimensional vector space, then an element $w$ of $E^{\otimes^D}$ is the same thing as a $D$-linear form on the dual $E^\ast$ of $E$. To make contact with the terminology of \cite{gin:2006} we will say that $w$ is a {\sl twisted potential} of degree $D$ on $E$ if the corresponding $D$-linear form on $E^\ast$ is preregular.\\
Let $w\in E^{\otimes^D}$ be a twisted potential and let $w_{\lambda_1\dots \lambda_D}$ be its components in the basis $(x^\lambda)_{\lambda\in \{1,\dots,d\}}$ of $E$, i.e. one has $w=w_{\lambda_1\dots \lambda_D}x^{\lambda_1}\otimes \dots \otimes x^{\lambda_D}$. Let $R_w\subset E\otimes E$ be the subspace of $E\otimes E$ spanned by the $w_{\lambda_1\dots \lambda_{D-2}\mu\nu} x^\mu\otimes x^\nu$ for $\lambda_1,\dots,\lambda_{D-2}\in \{1,\dots,d\}$. In this way one can associate to $w$ the quadratic algebra $\cala(w,2)=A(E,R_w)$. The content of the main result of \cite{mdv:2005} or of \cite{mdv:2007} (Theorem 4.3 of \cite{mdv:2005} or Theorem 11 of \cite{mdv:2007}) applied to quadratic algebras (case $N=2$) is the following
\begin{theorem}\label{POT}
Let $\cala$ be a quadratic Koszul algebra of finite global dimension $D$ which is Gorenstein. Then $\cala=\cala(w,2)$ for some twisted potential $w$ of degree $D$.
\end{theorem} In the case where $Q_w=(-1)^{D+1}$, $\cala$ is a quadratic Calabi-Yau algebra.\\ This result is, as mentioned, the particular case for $N=2$ of a general result for $N$-homogeneous algebras (i.e. algebras with relations of degree $N$), see \cite{mdv:2005}, \cite{mdv:2007}. The case $N=3$ (cubic algebras) contains the important example of the Yang-Mills algebra \cite{ac-mdv:2002b} which is a graded Calabi-Yau algebra in the sense of \cite{gin:2006}.\\ As pointed out in \cite{mdv:2007}, $\bbbone \otimes w\>\> (\bbbone\in \cala)$ has the interpretation of a (twisted) noncommutative volume form, see Theorem 10 of \cite{mdv:2007}.\\ Thus, for Koszul algebras, the Poincaré duality corresponding to the Gorenstein property implies that they are derived from twisted potentials. Let us now give some examples. \subsection{Examples} \begin{enumerate} \item {\sl The symmetric algebra} $\cala=SE$. One has $\cala=SE=A(E,R)$ with $R=\wedge^2 E\subset E\otimes E$, therefore $\cala^!=\wedge E^\ast$ is the exterior algebra of $E^\ast$. It follows that the Koszul complex is $SE\otimes \wedge^\bullet E$ with the Koszul differential $d:SE\otimes \wedge^{n+1} E\rightarrow SE\otimes \wedge^n E$. But $SE\otimes \wedge^\bullet E$ is also the algebra of polynomial differential forms and, if $\delta$ denotes the exterior differential, the derivation $d\delta + \delta d$ of degree 0 coincides with the total degree i.e. $(d\delta+\delta d)x=(r+s)x$ for $x\in S^r E\otimes \wedge^s E$. Thus $\delta$ is a homotopy for $d$ and $d$ is a homotopy for $\delta$ whenever $r+s\not=0$. This implies that $SE$ is Koszul of global dimension $D=\dim(E)$ and this also implies the formal Poincaré lemma.\\ The algebra $\cala=SE$ also has the Gorenstein property, which here reduces to the usual Poincaré duality property.
The corresponding (twisted) potential reads \[ w=\varepsilon_{\lambda_1\dots \lambda_D}x^{\lambda_1}\otimes \dots \otimes x^{\lambda_D} \] where $D=\dim(E)$ and where $\varepsilon_{\lambda_1\dots \lambda_D}$ is completely antisymmetric with $\varepsilon_{1\dots D}=1$. It is clear that $v=\bbbone\otimes w$ is a volume form in the classical sense. \item {\sl The tensor algebra} $\cala=T(E)$. One has $\cala=T(E)=A(E,0)$ so that $\cala^!=\mathbb K\bbbone \oplus E=A(E,E\otimes E)$ with trivial product between the elements of $E$. The Koszul complex is \[ 0\rightarrow T(E)\otimes E \stackrel{m}{\rightarrow} T(E)\rightarrow 0 \] and it is obvious that $m$ is injective, so $T(E)$ is Koszul of global dimension $D=1$; it is clearly not Gorenstein, i.e. one does not have Poincaré duality here. \item {\sl Koszul duals of Koszul algebras}. It is not hard to show that if $\cala$ is a quadratic algebra, then $\cala$ is Koszul if and only if $\cala^!$ is Koszul. Thus for instance $\wedge E$ and $\mathbb K\bbbone \oplus E$ are Koszul; however they are not Gorenstein (they are not of finite global dimension). It is worth noticing here that this property is specific to quadratic algebras. Indeed there is a notion of Koszulity for $N$-homogeneous algebras which was introduced in \cite{ber:2001a} and a generalization of the Koszul duality for these algebras defined in \cite{ber-mdv-wam:2003}, but the Koszulity is not stable under Koszul duality for $N\geq 3$. \item {\sl A deformed symmetric algebra}. Let $\cala$ be the algebra generated by the 3 elements $\nabla_0,\nabla_1,\nabla_2$ with the relations \begin{equation} \left\{ \begin{array}{l} \mu^2\nabla_2\nabla_0-\nabla_0\nabla_2=0\\ \mu^4\nabla_1\nabla_0-\nabla_0\nabla_1=0 \\ \mu^4\nabla_2\nabla_1-\nabla_1\nabla_2=0 \end{array} \right. \label{hrW} \end{equation} where $\mu\in \mathbb K$ with $\mu\not=0$.
This is a quadratic algebra with Koszul dual $\cala^!$ which is generated by the 3 elements $\omega_0$, $\omega_1,\omega_2$ with the relations \begin{equation} \left\{ \begin{array}{l} \omega^2_0=0, \omega^2_1=0, \omega^2_2=0\\ \omega_2\omega_0 + \mu^2\omega_0\omega_2=0\\ \omega_1\omega_0+\mu^4\omega_0\omega_1=0\\ \omega_2\omega_1+\mu^4\omega_1\omega_2=0 \end{array} \right. \label{dhrW} \end{equation} It can be shown that $\cala$ is Koszul of global dimension 3 and is Gorenstein \cite{gur:1990}, \cite{wam:1993}, \cite{mdv:2010}. The corresponding (twisted) potential reads \[ \begin{array}{lll} w=\mu^2(\nabla_1\otimes \nabla_2\otimes \nabla_0 & + &\nabla_2\otimes \nabla_0\otimes \nabla_1+\mu^{-6}\nabla_0\otimes \nabla_1\otimes \nabla_2)\\ & - &(\nabla_0\otimes \nabla_2\otimes \nabla_1+\nabla_1\otimes \nabla_0\otimes \nabla_2+\mu^6\nabla_2\otimes \nabla_1\otimes \nabla_0) \end{array} \] while $Q_w\in GL(3, \mathbb K)$ is given by the diagonal matrix with \[ (Q_w)^0_0=\mu^{-6},\>\> (Q_w)^1_1=1,\>\> (Q_w)^2_2=\mu^6 \] in the basis $(\nabla_0,\nabla_1,\nabla_2)$. \end{enumerate} \section{Nonhomogeneous quadratic algebras} \subsection{Poincaré-Birkhoff-Witt (PBW) property} A {\sl nonhomogeneous quadratic algebra} \cite{pos:1993}, \cite{bra-gai:1996}, \cite{flo:2006} is an algebra $\fraca$ of the form \[ \fraca=A(E,P)=T(E)/[P] \] where $E$ is a finite-dimensional vector space and where $P\subset F^2(T(E))$ is a linear subspace of $F^2(T(E))=\oplus^2_{m=0} E^{\otimes^m}$. Here and in the following the tensor algebra $T(E)$ is endowed with its natural filtration $F^n(T(E))=\oplus_{m\leq n} E^{\otimes^m}$ associated to its graduation. The filtration of $T(E)$ induces a filtration $F^n(\fraca)$ of $\fraca$ and the graded algebra \[ \gr (\fraca)=\oplus_n F^n(\fraca)/F^{n-1}(\fraca) \] is the {\sl associated graded algebra} to the filtered algebra $\fraca$. Let $R\subset E \otimes E$ be the image of $P$ by the canonical projection of $F^2(T(E))$ onto $E\otimes E$.
Then $\cala=A(E,R)=T(E)/[R]$ is a (homogeneous) quadratic algebra which will be called the {\sl quadratic part} of $\fraca$. There is a canonical homomorphism \[ can:\cala\rightarrow \gr(\fraca) \] of graded algebras which is surjective. The algebra $\fraca$ is said to have the {\sl Poincaré-Birkhoff-Witt (PBW) property} if this canonical homomorphism is injective, i.e. whenever $can$ is an isomorphism. One has the following theorem \cite{bra-gai:1996}. \begin{theorem}\label{PBW} Let $\fraca$ and $\cala$ be as above. If $\fraca$ has the PBW property then the following conditions $\mathrm{(i)}$ and $\mathrm{(ii)}$ are satisfied:\\ $\mathrm{(i)}$ \hspace{0,1cm} $P\cap F^1(T(E))=0$\\ $\mathrm{(ii)}$ \hspace{0,1cm} $(PE+EP)\cap F^2(T(E))\subset P$.\\ If $\cala$ is Koszul then, conversely, conditions $\mathrm{(i)}$ and $\mathrm{(ii)}$ imply that $\fraca$ has the PBW property. \end{theorem} Condition (i) means that one has linear mappings $\varphi:R\rightarrow E$ and\linebreak[4] $\varphi_0 : R\rightarrow \mathbb K$ such that \[ P=\{r-\varphi(r)-\varphi_0(r)\bbbone\>\vert\> r\in R\} \] i.e. $P$ is obtained by adding terms of lower degree to the quadratic relations $R$. Concerning condition (ii) one has the following. \begin{proposition}\label{II} Assume that Condition $\mathrm{(i)}$ of the last theorem is satisfied. Then, Condition $\mathrm{(ii)}$ above is equivalent to the following conditions $\mathrm{(a)}$, $\mathrm{(b)}$ and $\mathrm{(c)}$:\\ $\mathrm{(a)}$\hspace{0,1cm} $(\varphi\otimes I-I\otimes \varphi)(R\otimes E\cap E\otimes R)\subset R$\\ $\mathrm{(b)}$ \hspace{0,1cm} $(\varphi\circ (\varphi\otimes I-I\otimes \varphi)+ (\varphi_0\otimes I-I\otimes \varphi_0))(R\otimes E\cap E\otimes R)=0$\\ $\mathrm{(c)}$ \hspace{0,1cm} $\varphi_0\circ (\varphi\otimes I-I\otimes \varphi) (R\otimes E\cap E\otimes R)=0$ where $I$ denotes the identity mapping of $E$ onto itself.
\end{proposition} A nonhomogeneous quadratic algebra $\fraca$ with quadratic part $\cala$ is said to be {\sl Koszul} if $\fraca$ has the PBW property and $\cala$ is Koszul. \subsection{Nonhomogeneous Koszul duality} Let $\fraca=A(E,P)$ be a nonhomogeneous quadratic algebra with quadratic part $\cala=A(E,R)$, assume that Condition (i) of Theorem \ref{PBW} is satisfied and let $\varphi:R\rightarrow E$ and $\varphi_0:R\rightarrow \mathbb K$ be as in 3.1. Consider the transposed $\varphi^t:E^\ast\rightarrow R^\ast$ and $\varphi^t_0:\mathbb K\rightarrow R^\ast$ of $\varphi$ and $\varphi_0$ and notice that one has by definition of $\cala^!$ that $\cala^!_1=E^\ast$, $\cala^!_2=R^\ast$ and $\cala^!_3=(R\otimes E\cap E\otimes R)^\ast$, (see formula (\ref{ddKn}) in 2.1), so one can write (the minus sign is put here to match the usual conventions) \begin{equation} -\varphi^t:\cala^!_1\rightarrow \cala^!_2,\> -\varphi^t_0(1)=F\in \cala^!_2 \label{Dual} \end{equation} and one has the following result \cite{pos:1993}. \begin{theorem}\label{KD} Conditions $\mathrm{(a)}$, $\mathrm{(b)}$ and $\mathrm{(c)}$ of Proposition \ref{II} are equivalent to the following conditions $\mathrm{(a')}$, $\mathrm{(b')}$ and $\mathrm{(c')}$:\\ $\mathrm{(a')}$\hspace{0,1cm} $-\varphi^t$ extends as an antiderivation $\delta$ of $\cala^!$\\ $\mathrm{(b')}$\hspace{0,1cm} $\delta^2(x)=[F,x],\>\> \forall x\in \cala^!$\\ $\mathrm{(c')}$\hspace{0,1cm} $\delta(F)=0$.
\end{theorem} A graded algebra equipped with an antiderivation $\delta$ of degree 1 and an element $F$ of degree 2 satisfying the conditions $\mathrm{(b')}$ and $\mathrm{(c')}$ above is referred to as a {\sl curved graded differential algebra} \cite{pos:1993}.\\ Thus Theorem \ref{KD} combined with Theorem \ref{PBW} and Proposition \ref{II} means that the correspondence $\fraca\mapsto (\cala^!,\delta,F)$ defines a contravariant functor from the category of nonhomogeneous quadratic algebras satisfying the conditions (i) and (ii) of Theorem \ref{PBW} to the category of curved differential quadratic algebras (with the obvious appropriate notions of morphism). One can summarize the Koszul duality of \cite{pos:1993} for nonhomogeneous quadratic algebras by the following. \begin{theorem}\label{POS} The above correspondence defines an anti-isomorphism between the category of nonhomogeneous quadratic algebras satisfying Conditions $\mathrm{(i)}$ and $\mathrm{(ii)}$ of Theorem \ref{PBW} and the category of curved differential quadratic algebras which induces an anti-isomorphism between the category of nonhomogeneous quadratic Koszul algebras and the category of curved differential quadratic Koszul algebras. \end{theorem} There are two important classes of nonhomogeneous quadratic algebras $\fraca$ satisfying the conditions (i) and (ii) of Theorem \ref{PBW}. The first one corresponds to the case $\varphi_0=0$ which is equivalent to $F=0$ while the second one corresponds to $\varphi=0$ which is equivalent to $\delta=0$. An algebra $\fraca$ of the first class is called a {\sl quadratic-linear algebra} \cite{pol-pos:2005} and corresponds to a differential quadratic algebra $(\cala^!,\delta)$ while an algebra $\fraca$ of the second class corresponds to a quadratic algebra $\cala^!$ equipped with a central element $F$ of degree 2. \subsection{Examples} \begin{enumerate} \item {\sl Universal enveloping algebras of Lie algebras}.
Let $\fracg$ be a finite-dimen\-sional Lie algebra; then its universal enveloping algebra $\fraca=U(\fracg)$ is Koszul quadratic-linear. Indeed one has $\cala=S\fracg$ which is a Koszul quadratic algebra while the PBW property is here the classical PBW property of $U(\fracg)$. The corresponding differential quadratic algebra $(\cala^!,\delta)$ is $(\wedge\fracg^\ast,\delta)$, i.e. the exterior algebra of the dual vector space $\fracg^\ast$ of $\fracg$ endowed with the Koszul differential $\delta$. Notice that this latter differential algebra is the basic building block to construct the Chevalley-Eilenberg cochain complexes. Notice also that $\cala=S\fracg$ is not only Koszul but is also Gorenstein (Poincaré duality property). \item {\sl Adjoining a unit element to an associative algebra}. Let $A$ be a finite-dimensional associative algebra and let \[ \fraca=\tilde A=T(A) / \left[\{ x \otimes y-xy \>\vert\> x, y \in A \} \right] \] be the algebra obtained by adjoining a unit $\bbbone$ to $A$ ($\tilde A=\mathbb K \bbbone\oplus A$, etc.). This is again a Koszul quadratic-linear algebra. Indeed the PBW property is here equivalent to the associativity of $A$ while the quadratic part is $\cala=T(A^\ast)^!$ which is again $\mathbb K\bbbone\oplus A$ as a vector space but with a vanishing product between the elements of $A$, and is a Koszul quadratic algebra. The corresponding differential quadratic algebra $(\cala^!,\delta)$ is $(T(A^\ast),\delta)$ where $\delta$ is the antiderivation extension of minus the transposed $m^t:A^\ast\rightarrow A^\ast\otimes A^\ast$ of the product $m$ of $A$. Again $(T_+(A^\ast),\delta)$ is the basic building block to construct the Hochschild cochain complexes. Notice however that $\cala=T(A^\ast)^!$ is not Gorenstein (no Poincaré duality). \item {\sl A deformed universal enveloping algebra}.
Let $\fraca$ be the algebra generated by the 3 elements $\nabla_0,\nabla_1,\nabla_2$ with relations \begin{equation} \left \{ \begin{array}{l} \mu^2\nabla_2\nabla_0-\nabla_0\nabla_2=\mu\nabla_1\\ \mu^4\nabla_1\nabla_0-\nabla_0\nabla_1=\mu^2(1+\mu^2)\nabla_0\\ \mu^4\nabla_2\nabla_1-\nabla_1\nabla_2=\mu^2(1+\mu^2)\nabla_2 \end{array} \right. \label{irW} \end{equation} This is again a Koszul quadratic-linear algebra with homogeneous part $\cala$ given by Example 4 of 2.4, which is Koszul-Gorenstein. The corresponding differential quadratic algebra $(\cala^!,\delta)$ is the algebra $\cala^!$ of Example 4 of 2.4 with quadratic relations (\ref{dhrW}) endowed with the differential $\delta$ given by \begin{equation} \left \{ \begin{array}{l} \delta\omega_0+\mu^2(1+\mu^2)\omega_0\omega_1=0\\ \delta\omega_1+\mu \omega_0\omega_2=0\\ \delta\omega_2+\mu^2(1+\mu^2)\omega_1\omega_2=0 \end{array} \right. \label{diffW} \end{equation} which corresponds to the left covariant differential calculus on the twisted $SU(2)$ group of \cite{wor:1987b}. \item {\sl Canonical commutation relations algebra}. Let $E=\mathbb K^{2n}$ with basis $(q^\lambda, p_\mu)$, $\lambda, \mu\in \{1,\dots,n\}$ and let $i\hbar\in \mathbb K$ with $i\hbar\not=0$. Consider the nonhomogeneous quadratic algebra $\fraca$ generated by the $q^\lambda,p_\mu$ with relations \[ q^\lambda q^\mu-q^\mu q^\lambda=0,\>\>p_\lambda p_\mu-p_\mu p_\lambda=0,\>\>q^\lambda p_\mu-p_\mu q^\lambda=i\hbar \delta^\lambda_\mu\bbbone \] for $\lambda,\mu\in \{1,\dots, n\}$. The quadratic part of $\fraca$ is the symmetric algebra $\cala=SE$ which is Koszul, property (i) of Theorem \ref{PBW} is obvious, one has $\varphi=0$, and $\varphi_0$ is such that its transposed $\varphi^t_0$ is given by \[ -\varphi^t_0(1)=F=-(i\hbar)^{-1} q^\ast_\lambda\wedge p^{\lambda^\ast} \] which is central in $\cala^!=\wedge(E^\ast)$ where $(q^\ast_\lambda,p^{\mu\ast})$ is the dual basis of $(q^\lambda,p_\mu)$.
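Since $\wedge(E^\ast)$ is graded-commutative, every element of even degree is central; in particular $F$ above is central. The following minimal sketch (our own encoding of the exterior algebra; the scalar factor $-(i\hbar)^{-1}$ is dropped since centrality does not depend on it) verifies that $F$ commutes with each degree-one generator for $n=2$, which suffices since the generators generate $\wedge(E^\ast)$:

```python
def wedge_mono(a, b):
    """Wedge of two basis monomials, encoded as tuples of distinct indices.
    Returns (sign, sorted monomial), or None if the product vanishes."""
    m = list(a + b)
    if len(set(m)) < len(m):
        return None  # e_i ∧ e_i = 0
    sign = 1
    for i in range(len(m)):          # bubble sort, tracking the permutation sign
        for j in range(len(m) - 1):
            if m[j] > m[j + 1]:
                m[j], m[j + 1] = m[j + 1], m[j]
                sign = -sign
    return sign, tuple(m)

def wedge(x, y):
    """Wedge product of elements given as dicts {monomial: coefficient}."""
    out = {}
    for ma, ca in x.items():
        for mb, cb in y.items():
            r = wedge_mono(ma, mb)
            if r is not None:
                out[r[1]] = out.get(r[1], 0) + r[0] * ca * cb
    return {m: c for m, c in out.items() if c}

n = 2  # indices 0..n-1 encode the q*_λ, indices n..2n-1 the p^{λ*}
F = {(lam, n + lam): 1 for lam in range(n)}  # F ~ Σ_λ q*_λ ∧ p^{λ*}

for k in range(2 * n):
    gen = {(k,): 1}
    assert wedge(gen, F) == wedge(F, gen)  # F is central
print("F commutes with all generators of ΛE*")
```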
This implies that $\fraca$ has the PBW property and therefore is Koszul. \item {\sl Clifford algebra (C.A.R. algebra)}. Let $E=\mathbb K^n$ with canonical basis $(\gamma_\lambda)$, $\lambda\in \{1,\dots,n\}$ and consider the nonhomogeneous quadratic algebra $\fraca=C(n)$ generated by the elements $\gamma_\lambda$, $\lambda\in \{1,\dots, n\}$ with relations \[ \gamma_\mu \gamma_\nu+\gamma_\nu \gamma_\mu=2\delta_{\mu\nu}\bbbone \] for $\mu,\nu\in\{1,\dots,n\}$. The quadratic part of $\fraca$ is then the exterior algebra $\cala=\wedge E$ which is Koszul, property (i) of Theorem \ref{PBW} is obvious, one has again $\varphi=0$, and $\varphi^t_0$ is given by \[ -\varphi^t_0(1)=F=-\frac{1}{2}\sum \gamma^{\lambda\ast} \vee \gamma^{\lambda\ast} \] which is a central element of $\cala^!=SE^\ast$ (which is commutative). It again follows that $\fraca$ is Koszul (i.e. PBW + $\cala$ Koszul). \item {\sl Remarks on the generic case}. Let $\cala$ be a (homogeneous) quadratic algebra which is Koszul. In general (for generic $\cala$) any nonhomogeneous quadratic algebra $\fraca$ which has $\cala$ as quadratic part and has the PBW property is such that one has both $\varphi\not=0$ and $\varphi_0\not=0$ or is trivial in the sense that it coincides with $\cala$, i.e. $\varphi=0$ and $\varphi_0=0$. This is the case for instance when $\cala$ is the 4-dimensional Sklyanin algebra \cite{skl:1982}, \cite{smi-sta:1992} for generic values of its parameters \cite{bel-ac-mdv:2011}. Thus, Examples 1, 2, 3, 4, 5 above are rather particular from this point of view. However the next section will be devoted to a generalization of Lie algebras which has been introduced in \cite{mdv-lan:2011} and which involves quadratic-linear algebras, i.e. for which $\varphi_0=0$.
\end{enumerate} \section{Lie prealgebras} \subsection{Prealgebras} By a (finite-dimensional) {\sl prealgebra} we here mean a triple $(E,R,\varphi)$ where $E$ is a finite-dimensional vector space, $R\subset E\otimes E$ is a linear subspace of $E\otimes E$ and $\varphi:R\rightarrow E$ is a linear mapping of $R$ into $E$. Given a supplementary $R'$ to $R$ in $E\otimes E$, $R\oplus R'=E\otimes E$, the corresponding projector $P$ of $E\otimes E$ onto $R$ allows one to define a bilinear product $\varphi\circ P:E\otimes E\rightarrow E$, i.e. a structure of algebra on $E$. The point is that there is generally no natural supplementary of $R$. Exceptions are $R=E\otimes E$ of course and $R=\wedge^2 E\subset E\otimes E$ for which there is the canonical $GL(E)$-invariant supplementary $R'=S^2E\subset E\otimes E$ which leads to an antisymmetric product on $E$, (e.g. the case of Lie algebras). \\ Given a prealgebra $(E,R,\varphi)$, there are two natural associated algebras: \begin{enumerate} \item The nonhomogeneous quadratic algebra \[ \fraca_E=T(E)/[\{r-\varphi(r)\>\> \vert\>\> r\in R\}] \] which will be called its {\sl enveloping algebra}. \item The quadratic part $\cala_E$ of $\fraca_E$ \[ \cala_E=T(E)/[R], \] where the prealgebra $(E,R,\varphi)$ is also simply denoted by $E$ when no confusion arises. \end{enumerate} The enveloping algebra $\fraca_E$ is a filtered algebra as explained before but it is also an augmented algebra with augmentation \[ \varepsilon:\fraca_E\rightarrow \mathbb K \] induced by the canonical projection of $T(E)$ onto $T^0(E)=\mathbb K$. One has the surjective homomorphism \[ can:\cala_E\rightarrow \gr(\fraca_E) \] of graded algebras.\\ In the following we shall be mainly interested in prealgebras such that their enveloping algebras are quadratic-linear, i.e. satisfy Condition (ii) of Theorem \ref{PBW}, (Condition (i) being satisfied by construction).
If $(E,R,\varphi)$ is such a prealgebra, to $\fraca_E$ corresponds the differential quadratic algebra $(\cala^!_E,\delta)$ (as in Section 3) where $\delta$ is the antiderivation extension of minus the transposed $\varphi^t$ of $\varphi$.\\ Notice that if $\fraca_E$ has the PBW property one has \[ E=F^1(\fraca_E)\cap \ker(\varepsilon) \] so that the canonical mapping of the prealgebra $E$ into its enveloping algebra $\fraca_E$ is then an injection. \subsection{Lie prealgebras} A prealgebra $(E,R,\varphi)$ will be called a {\sl Lie prealgebra} \cite{mdv-lan:2011} if the following conditions (1) and (2) are satisfied:\\ (1) The quadratic algebra $\cala_E=A(E,R)$ is Koszul of finite global dimension and is Gorenstein (Poincaré duality). \\ (2) The enveloping algebra $\fraca_E$ has the PBW property.\\ If $E=(E,R,\varphi)$ is a Lie prealgebra then $\fraca_E$ is a Koszul quadratic-linear algebra, so to $(E,R,\varphi)$ one can associate the differential quadratic algebra $(\cala^!_E,\delta)$ and one has the following theorem \cite{mdv-lan:2011}: \begin{theorem}\label{LPD} The correspondence $(E,R,\varphi)\mapsto (\cala^!_E,\delta)$ defines an anti-isomorphism between the category of Lie prealgebras and the category of differential quadratic Koszul Frobenius algebras. \end{theorem} This is a direct consequence of Theorem \ref{POS} and of the Koszul-Gorenstein property of $\cala_E$ by using \cite{smi:1996}.\\ Let us recall that a {\sl Frobenius algebra} is a finite-dimensional algebra $\cala$ such that $\cala$ and its vector space dual $\cala^\ast$ are isomorphic as left $\cala$-modules (the left $\cala$-module structure of $\cala^\ast$ being induced by the right $\cala$-module structure of $\cala$). Concerning the graded connected case one has the following classical useful result. \begin{proposition}\label{GF} Let $\cala=\oplus_{m\geq 0} \cala_m$ be a finite-dimensional graded connected algebra with $\cala_D\not=0$ and $\cala_n=0$ for $n>D$.
Then the following conditions $\mathrm{(i)}$ and $\mathrm{(ii)}$ are equivalent:\\ $\mathrm{(i)}$ $\cala$ is Frobenius,\\ $\mathrm{(ii)}$ $\dim(\cala_D)=1$ and $(x,y)\mapsto (xy)_D$ is nondegenerate, where $(z)_D$ denotes the component on $\cala_D$ of $z\in \cala$. \end{proposition} \subsection{Examples and counterexamples} \begin{enumerate} \item {\sl Lie algebras.} It is clear that a Lie algebra $\fracg$ is canonically a Lie prealgebra $(\fracg, R,\varphi)$ with $R=\wedge^2\fracg\subset \fracg\otimes \fracg,\> \varphi=[\bullet, \bullet]$, $\fraca_\fracg=U(\fracg)$ and $\cala_\fracg=S\fracg$, (see Example 1 in 3.3). \item {\sl Associative algebras are not Lie prealgebras.} An associative algebra $A$ is clearly a prealgebra $(A,A\otimes A,m)$ with enveloping algebra $\fraca_A=\tilde A$ as in Example 2 of 3.3 but $\cala_A=T(A^\ast)^!=\mathbb K\bbbone\oplus A$ is not Gorenstein although it is Koszul as well as $\fraca_A=\tilde A$, (see the discussion of Example 2 in 3.3). The missing item is here the Poincaré duality. \item {\sl A deformed version of Lie algebras.} The algebra $\fraca$ of Example 3 of 3.3 is the enveloping algebra of a Lie prealgebra $(E,R,\varphi)$ with $E=\mathbb K^3$, $R\subset E\otimes E$ generated by\\ $r_1=\mu^2\nabla_2\otimes \nabla_0-\nabla_0\otimes \nabla_2$\\ $r_0=\mu^4\nabla_1\otimes \nabla_0-\nabla_0\otimes \nabla_1$\\ $r_2=\mu^4\nabla_2\otimes \nabla_1-\nabla_1\otimes \nabla_2$\\ and $\varphi$ given by \[ \varphi(r_1)=\mu\nabla_1,\>\>\>\> \varphi(r_0)=\mu^2(1+\mu^2)\nabla_0,\>\> \>\>\varphi(r_2)=\mu^2(1+\mu^2)\nabla_2 \] where $(\nabla_0,\nabla_1,\nabla_2)$ is the canonical basis of $E$. \item {\sl Differential calculi on quantum groups.} More generally most differential calculi on quantum groups can be obtained via the duality of Theorem \ref{LPD} from Lie prealgebras. In fact the Frobenius property is generally straightforward to verify; what is less obvious to prove is the Koszul property.
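For the Lie prealgebra of Example 3 above, the differential $\delta$ on the quadratic algebra $\cala^!$ with relations (\ref{dhrW}) is given by (\ref{diffW}), and the quadratic-linear setting requires $\delta^2=0$ (condition $\mathrm{(b')}$ with $F=0$). The following computational sketch (encoding and function names are ours; the identity holds for any $\mu\not=0$ and is checked here at the sample value $\mu=2$) verifies this on the generators:

```python
from fractions import Fraction

mu = Fraction(2)  # sample nonzero value of the deformation parameter
q = {(2, 0): mu**2, (1, 0): mu**4, (2, 1): mu**4}  # ω_jω_i = -q[(j,i)] ω_iω_j, j > i

def mul_mono(a, b):
    """Normal-order the product of two monomials in ω_0, ω_1, ω_2."""
    m = list(a + b)
    if len(set(m)) < len(m):
        return None  # some ω_i appears twice and ω_i² = 0
    coeff = Fraction(1)
    for i in range(len(m)):  # bubble sort, collecting commutation factors
        for j in range(len(m) - 1):
            if m[j] > m[j + 1]:
                coeff *= -q[(m[j], m[j + 1])]
                m[j], m[j + 1] = m[j + 1], m[j]
    return coeff, tuple(m)

def mul(x, y):
    """Product of elements given as dicts {monomial: coefficient}."""
    out = {}
    for ma, ca in x.items():
        for mb, cb in y.items():
            r = mul_mono(ma, mb)
            if r is not None:
                out[r[1]] = out.get(r[1], Fraction(0)) + r[0] * ca * cb
    return {m: c for m, c in out.items() if c}

# δ on the generators, read off from (diffW)
d_gen = {0: {(0, 1): -mu**2 * (1 + mu**2)},
         1: {(0, 2): -mu},
         2: {(1, 2): -mu**2 * (1 + mu**2)}}

def delta(x):
    """Antiderivation extension: δ(ab) = δ(a)b + (-1)^{|a|} a δ(b)."""
    out = {}
    for mono, coeff in x.items():
        for t in range(len(mono)):
            term = mul(mul({mono[:t]: (-1) ** t * coeff}, d_gen[mono[t]]),
                       {mono[t + 1:]: Fraction(1)})
            for m, c in term.items():
                out[m] = out.get(m, Fraction(0)) + c
    return {m: c for m, c in out.items() if c}

for i in range(3):
    assert delta(delta({(i,): Fraction(1)})) == {}
print("δ² = 0 on the generators ω_0, ω_1, ω_2")
```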
\end{enumerate} \subsection{Generalization of Chevalley-Eilenberg complexes} Throughout this subsection, $E=(E,R,\varphi)$ is a fixed Lie prealgebra, its enveloping algebra is simply denoted by $\fraca$ with quadratic part denoted by $\cala$ and the associated differential quadratic Koszul Frobenius algebra is $(\cala^!,\delta)$.\\ A {\sl left representation} of the Lie prealgebra $E=(E,R,\varphi)$ is a left $\fraca$-module. Let $V$ be a left representation of $E=(E,R,\varphi)$ and let $(x^\lambda)$ be a basis of $E$ with dual basis $(\omega_\lambda)$ of $E^\ast=\cala^!_1$. One has \[ x^\mu x^\nu \Phi \otimes \omega_\mu\omega_\nu+ x^\lambda \Phi\otimes \delta \omega_\lambda=0 \] for any $\Phi\in V$. This implies that one defines a differential of degree 1 on $V\otimes \cala^!$ by setting \[ \delta_V(\Phi\otimes \alpha)=x^\lambda \Phi \otimes \omega_\lambda\alpha + \Phi \otimes \delta \alpha \] so $(V\otimes \cala^!,\delta_V)$ is a cochain complex. These cochain complexes generalize the Chevalley-Eilenberg cochain complexes. Given a right representation of $E$, that is a right $\fraca$-module $W$, one defines similarly the chain complex $(W\otimes \cala^{!\ast},\delta_W)$, remembering that $\cala^{!\ast}$ is a graded coalgebra.\\ One has the isomorphisms \[ \left\{ \begin{array}{l} H^\bullet (V\otimes \cala^!) \simeq \Ext^\bullet_\fraca(\mathbb K, V)\\ H_\bullet(W\otimes \cala^{!\ast})\simeq \Tor^\fraca_\bullet (W,\mathbb K) \end{array} \right. \] which implies that these (co)homologies bear the same relation to the Hochschild cohomology and the Hochschild homology of $\fraca$ as the (co-)homology of a Lie algebra bears to the Hochschild (co-)homology of its universal enveloping algebra. \section{Conclusion} As pointed out before, many results of this article, in particular all the results of Section 2, generalize to the case of relations of degree $N\geq 2$ (instead of 2) \cite{mdv:2007}.
Furthermore the results of \cite{mdv:2007} have been generalized in \cite{boc-sch-wem:2010} to the quiver case. In fact the analysis of \cite{ber-gin:2006} suggests that one can generalize most points by replacing the ground field $\mathbb K$ by a von Neumann regular ring $\mathbf R$ and replacing the tensor algebras of vector spaces by tensor algebras over $\mathbf R$ of $\mathbf R$-bimodules; the quiver case corresponds to $\mathbf R=\mathbb K^{\mathrm{vertices}}$.\\ A ring $\mathbf R$ is said to be {\sl von Neumann regular} whenever for any $a\in \mathbf R$ there is an $x\in \mathbf R$ such that $axa=a$. Semisimple rings are von Neumann regular. An infinite product of fields and the endomorphism ring $\End(E)$ of an infinite-dimensional vector space $E$ are examples of von Neumann regular rings which are not semisimple \cite{wei:1994}, \cite{ber-gin:2006}.\\ Concerning the nonhomogeneous case, it is worth noticing here that for nonhomogeneous relations of degree $N\geq 3$ one has no satisfactory generalization of the Koszul duality of Positselski at the moment. \newpage
\begin{document} \begin{frontmatter} \title{Discrete-Time Fractional Variational Problems} \author[labelNuno]{Nuno R. O. Bastos} \ead{nbastos@estv.ipv.pt} \author[labelRui]{Rui A. C. Ferreira} \ead{ruiacferreira@ua.pt} \author[labelDelfim]{Delfim F. M. Torres} \ead{delfim@ua.pt} \address[labelNuno]{Department of Mathematics, ESTGV\\ Polytechnic Institute of Viseu\\ 3504-510 Viseu, Portugal} \address[labelRui]{Faculty of Engineering and Natural Sciences\\ Lusophone University of Humanities and Technologies\\ 1749-024 Lisbon, Portugal} \address[labelDelfim]{Department of Mathematics\\ University of Aveiro\\ 3810-193 Aveiro, Portugal} \begin{abstract} We introduce a discrete-time fractional calculus of variations on the time scale $h\mathbb{Z}$, $h > 0$. First and second order necessary optimality conditions are established. Examples illustrating the use of the new Euler-Lagrange and Legendre type conditions are given. They show that solutions to the considered fractional problems become the classical discrete-time solutions when the fractional orders of the discrete derivatives are integer values, and that they converge to the fractional continuous-time solutions when $h$ tends to zero. Our Legendre type condition is useful to eliminate false candidates identified via the Euler-Lagrange fractional equation. \end{abstract} \begin{keyword} Fractional difference calculus \sep calculus of variations \sep fractional summation by parts \sep Euler-Lagrange equation \sep natural boundary conditions \sep Legendre necessary condition \sep time scale $h\mathbb{Z}$. \MSC[2010] 26A33 \sep 39A12 \sep 49K05.
\end{keyword} \end{frontmatter} \section{Introduction} \label{int} The \emph{Fractional Calculus} (calculus with derivatives of arbitrary order) is an important research field in several different areas such as physics (including classical and quantum mechanics as well as thermodynamics), chemistry, biology, economics, and control theory \cite{agr3,B:08,Miller1,Ortigueira,TenreiroMachado}. It has its origin more than 300 years ago, when L'Hopital asked Leibniz what should be the meaning of a derivative of non-integer order. After that episode several other famous mathematicians contributed to the development of Fractional Calculus: Abel, Fourier, Liouville, Riemann, Riesz, just to mention a few names \cite{book:Kilbas,Samko}. In the last decades, considerable research has been done in fractional calculus. This is particularly true in the area of the calculus of variations, which has been the subject of intense investigation during the last few years \cite{B:08,Ozlem,R:N:H:M:B:07,R:T:M:B:07}. Applications include fractional variational principles in mechanics and physics, quantization, control theory, and description of conservative, nonconservative, and constrained systems \cite{B:08,B:M:08,B:M:R:08,R:T:M:B:07}. Roughly speaking, the classical calculus of variations and optimal control is extended by substituting the usual derivatives of integer order by different kinds of fractional (non-integer) derivatives. It is important to note that the passage from the integer/classical differential calculus to the fractional one is not unique because we have at our disposal different notions of fractional derivatives. This is, as argued in \cite{B:08,R:N:H:M:B:07}, an interesting and advantageous feature of the area. Most investigations in the fractional variational calculus are based on the replacement of the classical derivatives by fractional derivatives in the sense of Riemann--Liouville, Caputo, Riesz, and Jumarie \cite{agr0,MyID:182,B:08,MyID:149}.
Independently of the chosen fractional derivatives, one obtains, when the fractional order of differentiation tends to an integer order, the usual problems and results of the calculus of variations. Although the fractional Euler--Lagrange equations are obtained in a similar manner as in the standard variational calculus \cite{R:N:H:M:B:07}, some classical results are extremely difficult to prove in a fractional context. This explains, for example, why a fractional Legendre type condition is absent from the literature of fractional variational calculus. In this work we give a first result in this direction (\textrm{cf.} Theorem~\ref{thm1}). Despite its importance in applications, less is known for discrete-time fractional systems \cite{R:N:H:M:B:07}. In \cite{Miller} Miller and Ross define a fractional sum of order $\nu>0$ \emph{via} the solution of a linear difference equation. They introduce it as (see \S\ref{sec0} for the notations used here) \begin{equation} \label{naosei8} \Delta^{-\nu}f(t)=\frac{1}{\Gamma(\nu)}\sum_{s=a}^{t-\nu}(t-\sigma(s))^{(\nu-1)}f(s). \end{equation} Definition \eqref{naosei8} is analogous to the Riemann-Liouville fractional integral $$ _a\mathbf{D}_x^{-\nu}f(x)=\frac{1}{\Gamma(\nu)}\int_{a}^{x}(x-s)^{\nu-1}f(s)ds $$ of order $\nu>0$, which can be obtained \emph{via} the solution of a linear differential equation \cite{Miller,Miller1}. Basic properties of the operator $\Delta^{-\nu}$ in (\ref{naosei8}) were obtained in \cite{Miller}. More recently, Atici and Eloe introduced the fractional difference of order $\alpha > 0$ by $\Delta^\alpha f(t)=\Delta^m(\Delta^{\alpha - m}f(t))$, where $m$ is the integer part of $\alpha$, and developed some of its properties that allow one to obtain solutions of certain fractional difference equations \cite{Atici0,Atici}.
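To get a feel for definition \eqref{naosei8}, one can evaluate it numerically. The sketch below (function names are ours) implements $\Delta^{-\nu}$ on $\mathbb{Z}$, where $\sigma(s)=s+1$ and the falling factorial is $t^{(\nu)}=\Gamma(t+1)/\Gamma(t-\nu+1)$, and checks that for $\nu=1$ it reduces to the ordinary sum and that iterating $\Delta^{-1}$ agrees with $\Delta^{-2}$:

```python
from math import gamma

def ff(t, nu):
    """Falling factorial t^(ν) = Γ(t+1)/Γ(t-ν+1)."""
    return gamma(t + 1) / gamma(t - nu + 1)

def frac_sum(f, a, nu, t):
    """Miller-Ross fractional sum (Δ^{-ν} f)(t) with σ(s) = s + 1,
    for t in {a+ν, a+ν+1, ...}, so s runs over a, a+1, ..., t-ν."""
    total, s = 0.0, a
    while s <= t - nu + 1e-9:
        total += ff(t - (s + 1), nu - 1) * f(s)
        s += 1
    return total / gamma(nu)

f = lambda s: s ** 2
a = 0

# ν = 1 reduces to the ordinary sum Σ_{s=a}^{t-1} f(s)
assert abs(frac_sum(f, a, 1, 5) - sum(f(s) for s in range(5))) < 1e-9

# Δ^{-1}(Δ^{-1} f) agrees with Δ^{-2} f (semigroup property, integer orders)
g = lambda t: frac_sum(f, a, 1, t)   # defined for t = a+1, a+2, ...
lhs = frac_sum(g, a + 1, 1, 6)       # iterate the sum, starting at a+1
rhs = frac_sum(f, a, 2, 6)
assert abs(lhs - rhs) < 1e-9
print(lhs, rhs)  # 50.0 50.0
```

For non-integer $\nu$ the same code produces the genuinely fractional sums studied in \cite{Miller}; the two assertions only check the integer-order sanity cases.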
The fractional differential calculus has been widely developed in the past few decades due mainly to its demonstrated applications in various fields of science and engineering \cite{book:Kilbas,Miller1,Podlubny}. The study of necessary optimality conditions for fractional problems of the calculus of variations and optimal control is a fairly recent issue attracting increasing attention -- see \cite{agr0,agr2,RicDel,El-Nabulsi1,El-Nabulsi2,gastao:delfim,gasta1,M:Baleanu} and references therein -- but available results address only the continuous-time case. It is well known that discrete analogues of differential equations can be very useful in applications \cite{B:J:06,J:B:07,book:DCV} and that fractional Euler-Lagrange differential equations are extremely difficult to solve, making it necessary to discretize them \cite{agr2,Ozlem}. Therefore, it is pertinent to develop a fractional discrete-time theory of the calculus of variations for the time scale $(h\mathbb{Z})_a$, $h > 0$ (\textrm{cf.} definitions in Section~\ref{sec0}). Computer simulations show that this time scale is particularly interesting because when $h$ tends to zero one recovers previous fractional continuous-time results. Our objective is two-fold. On the one hand we proceed to develop the theory of \emph{fractional difference calculus}, namely, we introduce the concept of left and right fractional sum/difference (\textrm{cf.} Definition~\ref{def0}). On the other hand, we believe that the present work will foster research not only in the fractional calculus of variations but also in solving fractional difference equations, specifically, fractional equations in which left and right fractional differences appear. Because the theory of fractional difference calculus is still in its infancy \cite{Atici0,Atici,Miller}, the paper is self-contained. In \S\ref{sec0} we introduce notations, we give necessary definitions, and prove some preliminary results needed in the sequel.
The main results of the paper appear in \S\ref{sec1}: we prove a fractional formula of $h$-summation by parts (Theorem~\ref{teor1}), and necessary optimality conditions of first and second order (Theorems~\ref{thm0} and \ref{thm1}, respectively) for the proposed $h$-fractional problem of the calculus of variations \eqref{naosei7}. Section~\ref{sec2} gives illustrative examples, and we end the paper with \S\ref{sec:conc}, which presents conclusions and future perspectives. The results of the paper are formulated using standard notations of the theory of time scales \cite{livro:2001,J:B:M:08,malina}. It remains an interesting open question how to generalize the present results to an arbitrary time scale $\mathbb{T}$. This is a difficult and challenging problem, since our proofs rely deeply on the fact that in $\mathbb{T} = (h\mathbb{Z})_a$ the graininess function is a constant.
\section{Preliminaries}
\label{sec0}
We begin by recalling the main definitions and properties of time scales (\textrm{cf.}~\cite{CD:Bohner:2004,livro:2001} and references therein). A nonempty closed subset of $\mathbb{R}$ is called a \emph{time scale} and is denoted by $\mathbb{T}$. The \emph{forward jump operator} $\sigma:\mathbb{T}\rightarrow\mathbb{T}$ is defined by $\sigma(t)=\inf\{s\in\mathbb{T}:s>t\}$ for all $t\in\mathbb{T}$, while the \emph{backward jump operator} $\rho:\mathbb{T}\rightarrow\mathbb{T}$ is defined by $\rho(t)=\sup\{s\in\mathbb{T}:s<t\}$ for all $t\in\mathbb{T}$, with $\inf\emptyset=\sup\mathbb{T}$ (\textrm{i.e.}, $\sigma(M)=M$ if $\mathbb{T}$ has a maximum $M$) and $\sup\emptyset=\inf\mathbb{T}$ (\textrm{i.e.}, $\rho(m)=m$ if $\mathbb{T}$ has a minimum $m$). A point $t\in\mathbb{T}$ is called \emph{right-dense}, \emph{right-scattered}, \emph{left-dense}, or \emph{left-scattered}, if $\sigma(t)=t$, $\sigma(t)>t$, $\rho(t)=t$, or $\rho(t)<t$, respectively. Throughout the text we let $\mathbb{T}=[a,b]\cap\tilde{\mathbb{T}}$ with $a<b$ and $\tilde{\mathbb{T}}$ a time scale.
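To make the jump operators concrete, here is a small computational sketch (our own illustration; the function names are not from the text) realizing $\sigma$ and $\rho$, with the stated conventions at the endpoints, on a finite time scale:

```python
def sigma(T, t):
    """Forward jump operator: smallest point of T strictly to the right of t;
    by the convention inf(emptyset) = sup(T), sigma(max T) = max T."""
    later = [s for s in T if s > t]
    return min(later) if later else max(T)

def rho(T, t):
    """Backward jump operator: largest point of T strictly to the left of t;
    by the convention sup(emptyset) = inf(T), rho(min T) = min T."""
    earlier = [s for s in T if s < t]
    return max(earlier) if earlier else min(T)

# In a finite time scale every interior point is right- and left-scattered.
T = [0, 0.5, 1.0, 1.5, 2.0]
print(sigma(T, 1.0), rho(T, 1.0))  # 1.5 0.5
print(sigma(T, 2.0))               # 2.0  (t is the maximum of T)
```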
We define $\mathbb{T}^\kappa=\mathbb{T}\backslash(\rho(b),b]$, $\mathbb{T}^{\kappa^2}=\left(\mathbb{T}^\kappa\right)^\kappa$ and, more generally, $\mathbb{T}^{\kappa^n} =\left(\mathbb{T}^{\kappa^{n-1}}\right)^\kappa$ for $n\in\mathbb{N}$. The following standard notation is used for $\sigma$ (and $\rho$): $\sigma^0(t) = t$, $\sigma^n(t) = (\sigma \circ \sigma^{n-1})(t)$, $n \in \mathbb{N}$. The \emph{graininess function} $\mu:\mathbb{T}\rightarrow[0,\infty)$ is defined by $\mu(t)=\sigma(t)-t$ for all $t\in\mathbb{T}$. A function $f:\mathbb{T}\rightarrow\mathbb{R}$ is said to be \emph{delta differentiable} at $t\in\mathbb{T}^\kappa$ if there is a number $f^{\Delta}(t)$ such that for all $\varepsilon>0$ there exists a neighborhood $U$ of $t$ (\textrm{i.e.}, $U=(t-\delta,t+\delta)\cap\mathbb{T}$ for some $\delta>0$) such that
$$|f(\sigma(t))-f(s)-f^{\Delta}(t)(\sigma(t)-s)| \leq\varepsilon|\sigma(t)-s|,\mbox{ for all $s\in U$}.$$
We call $f^{\Delta}(t)$ the \emph{delta derivative} of $f$ at $t$. The $r$th \emph{delta derivative} ($r\in\mathbb{N}$) of $f$ is defined to be the function $f^{\Delta^r}:\mathbb{T}^{\kappa^r}\rightarrow\mathbb{R}$, provided $f^{\Delta^{r-1}}$ is delta differentiable on $\mathbb{T}^{\kappa^{r-1}}$. For delta differentiable $f$ and $g$, and for an arbitrary time scale $\mathbb{T}$, the following formulas hold: $f^\sigma(t) = f(t)+\mu(t)f^\Delta(t)$ and
\begin{equation}
\label{deltaderpartes2}
(fg)^\Delta(t) = f^\Delta(t)g^\sigma(t)+f(t)g^\Delta(t) =f^\Delta(t)g(t)+f^\sigma(t)g^\Delta(t),
\end{equation}
where we abbreviate $f\circ\sigma$ by $f^\sigma$. A function $f:\mathbb{T}\rightarrow\mathbb{R}$ is called \emph{rd-continuous} if it is continuous at right-dense points and if its left-sided limit exists at left-dense points. The set of all rd-continuous functions is denoted by $C_{\textrm{rd}}$ and the set of all delta differentiable functions with rd-continuous derivative by $C_{\textrm{rd}}^1$.
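The product rule \eqref{deltaderpartes2} can be checked numerically on any discrete time scale. The following sketch (our illustration, with assumed helper names, valid only at right-scattered points, where the delta derivative is the divided difference $(f(\sigma(t))-f(t))/\mu(t)$) does so on a non-uniform grid:

```python
def delta(T, F, t):
    """Delta derivative at a right-scattered point t of a discrete time scale:
    F^Delta(t) = (F(sigma(t)) - F(t)) / mu(t). Assumes t != max T."""
    s = min(x for x in T if x > t)        # sigma(t)
    return (F(s) - F(t)) / (s - t)        # mu(t) = sigma(t) - t

# Check (fg)^Delta(t) = f^Delta(t) g(sigma(t)) + f(t) g^Delta(t)
# on an irregular (non-uniform) discrete time scale.
T = [0, 0.3, 1.0, 1.2, 2.5]
f = lambda t: t**2 + 1
g = lambda t: 3*t - 2
t, sigma_t = 0.3, 1.0
lhs = delta(T, lambda x: f(x) * g(x), t)
rhs = delta(T, f, t) * g(sigma_t) + f(t) * delta(T, g, t)
print(abs(lhs - rhs) < 1e-9)  # True
```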
It is known that every rd-continuous function $f$ possesses an \emph{antiderivative}, \textrm{i.e.}, a function $F \in C_{\textrm{rd}}^1$ with $F^\Delta=f$. The delta \emph{integral} is then defined by $\int_{a}^{b}f(t)\Delta t=F(b)-F(a)$. It satisfies the equality $\int_t^{\sigma(t)}f(\tau)\Delta\tau=\mu(t)f(t)$. We make use of the following properties of the delta integral:
\begin{lemma}(\textrm{cf.} \cite[Theorem~1.77]{livro:2001})
\label{integracao:partes}
If $a,b\in\mathbb{T}$ and $f,g\in C_{\textrm{rd}}$, then
\begin{enumerate}
\item$\int_{a}^{b}f(\sigma(t))g^{\Delta}(t)\Delta t =\left.(fg)(t)\right|_{t=a}^{t=b}-\int_{a}^{b}f^{\Delta}(t)g(t)\Delta t$;
\item $\int_{a}^{b}f(t)g^{\Delta}(t)\Delta t =\left.(fg)(t)\right|_{t=a}^{t=b}-\int_{a}^{b}f^{\Delta}(t)g(\sigma(t))\Delta t$.
\end{enumerate}
\end{lemma}
One way to approach the Riemann--Liouville fractional calculus is through the theory of linear differential equations \cite{Podlubny}. Miller and Ross \cite{Miller} use an analogous methodology to introduce fractional discrete operators for the case $\mathbb{T}=\mathbb{Z}_a=\{a,a+1,a+2,\ldots\}$, $a\in\mathbb{R}$. Here we go a step further: we use the theory of time scales in order to introduce fractional discrete operators for the more general case $\mathbb{T}=(h\mathbb{Z})_a=\{a,a+h,a+2h,\ldots\}$, $a\in\mathbb{R}$, $h>0$. For $n\in \mathbb{N}_0$ and rd-continuous functions $p_i:\mathbb{T}\rightarrow \mathbb{R}$, $1\leq i\leq n$, let us consider the $n$th order linear dynamic equation
\begin{equation}
\label{linearDiffequa}
Ly=0\, , \quad \text{ where } Ly=y^{\Delta^n}+\sum_{i=1}^n p_iy^{\Delta^{n-i}} \, .
\end{equation}
A function $y:\mathbb{T}\rightarrow \mathbb{R}$ is said to be a solution of equation (\ref{linearDiffequa}) on $\mathbb{T}$ provided $y$ is $n$ times delta differentiable on $\mathbb{T}^{\kappa^n}$ and satisfies $Ly(t)=0$ for all $t\in\mathbb{T}^{\kappa^n}$.
\begin{lemma}\cite[p.~239]{livro:2001}
\label{8:88 Bohner}
If $z=\left(z_1, \ldots,z_n\right) : \mathbb{T} \rightarrow \mathbb{R}^n$ satisfies, for all $t\in \mathbb{T}^\kappa$,
\begin{equation}\label{5.86}
z^\Delta(t)=A(t)z(t),\qquad\mbox{where}\qquad A=\left(
\begin{array}{ccccc}
0 & 1 & 0 & \ldots & 0 \\
\vdots & 0 & 1 & \ddots & \vdots \\
\vdots & & \ddots & \ddots & 0 \\
0 & \ldots & \ldots & 0 & 1 \\
-p_n & \ldots & \ldots & -p_2 & -p_1 \\
\end{array}
\right)
\end{equation}
then $y=z_1$ is a solution of equation \eqref{linearDiffequa}. Conversely, if $y$ solves \eqref{linearDiffequa} on $\mathbb{T}$, then $z=\left(y, y^\Delta,\ldots, y^{\Delta^{n-1}}\right) : \mathbb{T}\rightarrow \mathbb{R}^n$ satisfies \eqref{5.86} for all $t\in\mathbb{T}^{\kappa^n}$.
\end{lemma}
\begin{definition}\cite[p.~239]{livro:2001}
We say that equation (\ref{linearDiffequa}) is \emph{regressive} provided $I + \mu(t) A(t)$ is invertible for all $t \in \mathbb{T}^\kappa$, where $A$ is the matrix in \eqref{5.86}.
\end{definition}
\begin{definition}\cite[p.~250]{livro:2001}
We define the Cauchy function $y:\mathbb{T} \times \mathbb{T}^{\kappa^n}\rightarrow \mathbb{R}$ for the linear dynamic equation~(\ref{linearDiffequa}) to be, for each fixed $s\in\mathbb{T}^{\kappa^n}$, the solution of the initial value problem
\begin{equation}
\label{IVP}
Ly=0,\quad y^{\Delta^i}\left(\sigma(s),s\right)=0,\quad 0\leq i \leq n-2,\quad y^{\Delta^{n-1}}\left(\sigma(s),s\right)=1\, .
\end{equation}
\end{definition}
\begin{theorem}\cite[p.~251]{livro:2001}\label{eqsol}
Suppose $\{y_1,\ldots,y_n\}$ is a fundamental system of the regressive equation~(\ref{linearDiffequa}). Let $f\in C_{\textrm{rd}}$. Then the solution of the initial value problem
$$
Ly=f(t),\quad y^{\Delta^i}(t_0)=0,\quad 0\leq i\leq n-1 \, ,
$$
is given by $y(t)=\int_{t_0}^t y(t,s)f(s)\Delta s$, where $y(t,s)$ is the Cauchy function for~(\ref{linearDiffequa}).
\end{theorem}
It is known that $y(t,s):=H_{n-1}(t,\sigma(s))$ is the Cauchy function for $y^{\Delta^n}=0$, where $H_{n-1}$ is a time scale generalized polynomial \cite[Example~5.115]{livro:2001}. The generalized polynomials $H_{k}$ are the functions $H_k:\mathbb{T}^2\rightarrow \mathbb{R}$, $k\in \mathbb{N}_0$, defined recursively as follows:
\begin{equation*}
H_0(t,s)\equiv 1 \, , \quad H_{k+1}(t,s)=\int_s^t H_k(\tau,s)\Delta\tau \, , \quad k = 0, 1, 2, \ldots
\end{equation*}
for all $s,t\in \mathbb{T}$. If we let $H_k^\Delta(t,s)$ denote, for each fixed $s$, the delta derivative of $H_k(t,s)$ with respect to $t$, then (\textrm{cf.} \cite[p.~38]{livro:2001})
$$
H_k^\Delta(t,s)=H_{k-1}(t,s)\quad \text{for } k\in \mathbb{N}, \ t\in \mathbb{T}^\kappa \, .
$$
From now on we restrict ourselves to the time scale $\mathbb{T}=(h\mathbb{Z})_a$, $h > 0$, for which the graininess function is the constant $h$. Our main goal is to propose and develop a discrete-time fractional variational theory in $\mathbb{T}=(h\mathbb{Z})_a$. We borrow the notation from the recent calculus of variations on time scales \cite{CD:Bohner:2004,RD,J:B:M:08}. How to generalize our results to an arbitrary time scale $\mathbb{T}$, with the graininess function $\mu$ depending on time, is not clear and remains a challenging question. Let $a\in\mathbb{R}$ and $h>0$, $(h\mathbb{Z})_a=\{a,a+h,a+2h,\ldots\}$, and $b=a+kh$ for some $k\in\mathbb{N}$. We have $\sigma(t)=t+h$, $\rho(t)=t-h$, $\mu(t) \equiv h$, and we will frequently write $f^\sigma(t)=f(\sigma(t))$. We put $\mathbb{T}=[a,b]\cap (h\mathbb{Z})_a$, so that $\mathbb{T}^\kappa=[a,\rho(b)]\cap (h\mathbb{Z})_a$ and $\mathbb{T}^{\kappa^2}=[a,\rho^2(b)] \cap (h\mathbb{Z})_a$. The delta derivative coincides in this case with the forward $h$-difference: $\displaystyle{f^{\Delta}(t)=\frac{f^\sigma(t)-f(t)}{\mu(t)}}$. If $h=1$, then we have the usual discrete forward difference $\Delta f(t)$.
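For illustration (ours, not in the original text), on $(h\mathbb{Z})_a$ the delta derivative is the forward $h$-difference and the delta integral is a finite $h$-sum, so the fundamental identity $\int_a^b f^\Delta(t)\Delta t = f(b)-f(a)$ can be verified directly; the helper names below are our own:

```python
def hdiff(f, t, h):
    """Forward h-difference: f^Delta(t) = (f(t+h) - f(t)) / h on (h Z)_a."""
    return (f(t + h) - f(t)) / h

def hsum(g, a, b, h):
    """h-sum form of the delta integral on (h Z)_a:
    int_a^b g(t) Delta t = sum_{k=a/h}^{b/h - 1} g(kh) * h."""
    n = int(round((b - a) / h))
    return sum(g(a + j * h) * h for j in range(n))

# The h-sum of f^Delta telescopes to f(b) - f(a).
f, a, b, h = (lambda t: t**3 - 2*t), 0.0, 2.0, 0.25
print(abs(hsum(lambda t: hdiff(f, t, h), a, b, h) - (f(b) - f(a))) < 1e-9)  # True
```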
The delta integral gives the $h$-sum (or $h$-integral) of $f$: $\displaystyle{\int_a^b f(t)\Delta t=\sum_{k=\frac{a}{h}}^{\frac{b}{h}-1}f(kh)h}$. If we have a function $f$ of two variables, $f(t,s)$, its partial forward $h$-differences will be denoted by $\Delta_{t,h}$ and $\Delta_{s,h}$, respectively. We will make use of the standard conventions $\sum_{t=c}^{c-1}f(t)=0$, $c\in\mathbb{Z}$, and $\prod_{i=0}^{-1}f(i)=1$. Often, \emph{left fractional delta integration} (resp., \emph{right fractional delta integration}) of order $\nu>0$ is denoted by $_a\Delta_t^{-\nu}f(t)$ (resp. $_t\Delta_b^{-\nu}f(t)$). Here, similarly as in Ross \textit{et al.} \cite{Ross}, where the authors omit the subscript $t$ on the operator (the operator itself cannot depend on $t$), we write $_a\Delta_h^{-\nu}f(t)$ (resp. $_h\Delta_b^{-\nu}f(t)$). Before giving an explicit formula for the generalized polynomials $H_{k}$ on $h\mathbb{Z}$ we introduce the following definition:
\begin{definition}
For arbitrary $x,y\in\mathbb{R}$ the $h$-factorial function is defined by
\begin{equation*}
x_h^{(y)}:=h^y\frac{\Gamma(\frac{x}{h}+1)}{\Gamma(\frac{x}{h}+1-y)}\, ,
\end{equation*}
where $\Gamma$ is the well-known Euler gamma function, and we use the convention that division at a pole yields zero.
\end{definition}
\begin{remark}
For $h = 1$, and in accordance with the previous literature \eqref{naosei8}, we write $x^{(y)}$ to denote $x_h^{(y)}$.
\end{remark}
\begin{proposition}
\label{prop:d}
For the time scale $\mathbb{T}=(h\mathbb{Z})_a$ one has
\begin{equation}
\label{hn}
H_{k}(t,s)=\frac{(t-s)_h^{(k)}}{k!}\quad\mbox{for all}\quad s,t\in \mathbb{T} \text{ and } k\in \mathbb{N}_0 \, .
\end{equation}
\end{proposition}
To prove \eqref{hn} we use the following technical lemma. Throughout the text the basic property $\Gamma(x+1)=x\Gamma(x)$ of the gamma function will be frequently used.
\begin{lemma}
\label{lem:tl}
Let $s \in \mathbb{T}$.
Then, for all $t \in \mathbb{T}^\kappa$ one has \begin{equation*} \Delta_{t,h} \left\{\frac{(t-s)_h^{(k+1)}}{(k+1)!}\right\} = \frac{(t-s)_h^{(k)}}{k!} \, . \end{equation*} \end{lemma} \begin{proof} The equality follows by direct computations: \begin{equation*} \begin{split} \Delta_{t,h} &\left\{\frac{(t-s)_h^{(k+1)}}{(k+1)!}\right\} =\frac{1}{h}\left\{\frac{(\sigma(t)-s)_h^{(k+1)}}{(k+1)!}-\frac{(t-s)_h^{(k+1)}}{(k+1)!}\right\}\\ &=\frac{h^{k+1}}{h(k+1)!}\left\{\frac{\Gamma((t+h-s)/h+1)}{\Gamma((t+h-s)/h+1-(k+1))}-\frac{\Gamma((t-s)/h+1)}{\Gamma((t-s)/h+1-(k+1))}\right\}\\ &=\frac{h^k}{(k+1)!}\left\{\frac{((t-s)/h+1)\Gamma((t-s)/h+1)}{((t-s)/h-k)\Gamma((t-s)/h-k)}-\frac{\Gamma((t-s)/h+1)}{\Gamma((t-s)/h-k)}\right\}\\ &=\frac{h^k}{k!}\left\{\frac{\Gamma((t-s)/h+1)}{\Gamma((t-s)/h+1-k)}\right\} =\frac{(t-s)_h^{(k)}}{k!} \, . \end{split} \end{equation*} \end{proof} \begin{proof}(of Proposition~\ref{prop:d}) We proceed by mathematical induction. For $k=0$ $$ H_0(t,s)=\frac{1}{0!}h^0\frac{\Gamma(\frac{t-s}{h}+1)}{\Gamma(\frac{t-s}{h}+1-0)} =\frac{\Gamma(\frac{t-s}{h}+1)}{\Gamma(\frac{t-s}{h}+1)}=1 \, . $$ Assume that (\ref{hn}) holds for $k$ replaced by $m$. Then by Lemma~\ref{lem:tl} \begin{eqnarray*} H_{m+1}(t,s) &=& \int_s^t H_m(\tau,s)\Delta\tau = \int_s^t \frac{(\tau-s)_h^{(m)}}{m!} \Delta\tau = \frac{(t-s)_h^{(m+1)}}{(m+1)!}, \end{eqnarray*} which is (\ref{hn}) with $k$ replaced by $m+1$. \end{proof} Let $y_1(t),\ldots,y_n(t)$ be $n$ linearly independent solutions of the linear homogeneous dynamic equation $y^{\Delta^n}=0$. From Theorem~\ref{eqsol} we know that the solution of \eqref{IVP} (with $L=\Delta^n$ and $t_0=a$) is \begin{equation*} y(t) = \Delta^{-n} f(t)=\int_a^t \frac{(t-\sigma(s))_h^{(n-1)}}{\Gamma(n)}f(s)\Delta s\\ =\frac{1}{\Gamma(n)}\sum_{k=a/h}^{t/h-1} (t-\sigma(kh))_h^{(n-1)} f(kh) h \, . 
\end{equation*}
Since $y^{\Delta^i}(a)=0$, $i = 0,\ldots,n-1$, we can write
\begin{equation}
\label{eq:derDh:int}
\begin{split}
\Delta^{-n} f(t) &= \frac{1}{\Gamma(n)}\sum_{k=a/h}^{t/h-n} (t-\sigma(kh))_h^{(n-1)} f(kh) h \\
&= \frac{1}{\Gamma(n)}\int_a^{\sigma(t-nh)}(t-\sigma(s))_h^{(n-1)}f(s) \Delta s \, .
\end{split}
\end{equation}
Note that the function $t \rightarrow (\Delta^{-n} f)(t)$ is defined for $t=a+n h \mbox{ mod}(h)$ while the function $t \rightarrow f(t)$ is defined for $t=a \mbox{ mod}(h)$. Extending \eqref{eq:derDh:int} to any positive real value $\nu$, in analogy with the continuous left and right fractional derivatives \cite{Miller1}, we define the left fractional $h$-sum and the right fractional $h$-sum as follows. We denote by $\mathcal{F}_\mathbb{T}$ the set of all real-valued functions defined on a given time scale $\mathbb{T}$.
\begin{definition}
\label{def0}
Let $a\in\mathbb{R}$, $h>0$, $b=a+kh$ with $k\in\mathbb{N}$, and put $\mathbb{T}=[a,b]\cap(h\mathbb{Z})_a$. Consider $f\in\mathcal{F}_\mathbb{T}$. The left and right fractional $h$-sums of order $\nu>0$ are, respectively, the operators $_a\Delta_h^{-\nu} : \mathcal{F}_\mathbb{T} \rightarrow \mathcal{F}_{\tilde{\mathbb{T}}_\nu^+}$ and $_h\Delta_b^{-\nu} : \mathcal{F}_\mathbb{T} \rightarrow \mathcal{F}_{\tilde{\mathbb{T}}_\nu^-}$, $\tilde{\mathbb{T}}_\nu^\pm = \{t \pm \nu h : t \in \mathbb{T}\}$, defined by
\begin{equation*}
\begin{split}
_a\Delta_h^{-\nu}f(t) &= \frac{1}{\Gamma(\nu)}\int_{a}^{\sigma(t-\nu h)}(t-\sigma(s))_h^{(\nu-1)}f(s)\Delta s =\frac{1}{\Gamma(\nu)}\sum_{k=\frac{a}{h}}^{\frac{t}{h}-\nu}(t-\sigma(kh))_h^{(\nu-1)}f(kh)h\\
_h\Delta_b^{-\nu}f(t) &= \frac{1}{\Gamma(\nu)}\int_{t+\nu h}^{\sigma(b)}(s-\sigma(t))_h^{(\nu-1)}f(s)\Delta s =\frac{1}{\Gamma(\nu)}\sum_{k=\frac{t}{h}+\nu}^{\frac{b}{h}}(kh-\sigma(t))_h^{(\nu-1)}f(kh)h.
\end{split}
\end{equation*}
\end{definition}
\begin{remark}
In Definition~\ref{def0} we are using summations with limits that are reals.
For example, the summation that appears in the definition of operator $_a\Delta_h^{-\nu}$ has the following meaning:
$$
\sum_{k = \frac{a}{h}}^{\frac{t}{h} - \nu} G(k) = G(a/h) + G(a/h+1) + G(a/h+2) + \cdots + G(t/h - \nu),
$$
where $t \in \{ a + \nu h, a + h + \nu h , a + 2 h + \nu h , \ldots, \underbrace{a+kh}_b + \nu h\}$ with $k\in\mathbb{N}$.
\end{remark}
\begin{lemma}
Let $f\in\mathcal{F}_\mathbb{T}$. For any $t \in \mathbb{T}$ we have: (i) $\lim_{\nu\rightarrow 0}{_a}\Delta_h^{-\nu}f(t+\nu h)=f(t)$; (ii) $\lim_{\nu\rightarrow 0}{_h}\Delta_b^{-\nu}f(t-\nu h)=f(t)$.
\end{lemma}
\begin{proof}
Since
\begin{align*}
{_a}\Delta_h^{-\nu}f(t+\nu h)&=\frac{1}{\Gamma(\nu)}\int_{a}^{\sigma(t)}(t+\nu h-\sigma(s))_h^{(\nu-1)}f(s)\Delta s\\
&=\frac{1}{\Gamma(\nu)}\sum_{k=\frac{a}{h}}^{\frac{t}{h}}(t+\nu h-\sigma(kh))_h^{(\nu-1)}f(kh)h\\
&=h^{\nu}f(t)+\frac{\nu}{\Gamma(\nu+1)}\sum_{k=\frac{a}{h}}^{\frac{\rho(t)}{h}}(t+\nu h-\sigma(kh))_h^{(\nu-1)}f(kh)h\, ,
\end{align*}
it follows that $\lim_{\nu\rightarrow 0}{_a}\Delta_h^{-\nu}f(t+\nu h)=f(t)$. The proof of (ii) is similar.
\end{proof}
For any $t\in\mathbb{T}$ and for any $\nu\geq 0$ we define $_a\Delta_h^{0}f(t) := {_h}\Delta_b^{0}f(t) := f(t)$ and write
\begin{equation}
\label{seila1}
\begin{gathered}
{_a}\Delta_h^{-\nu}f(t+\nu h) = h^\nu f(t) +\frac{\nu}{\Gamma(\nu+1)}\int_{a}^{t}(t+\nu h-\sigma(s))_h^{(\nu-1)}f(s)\Delta s\, , \\
{_h}\Delta_b^{-\nu}f(t-\nu h)=h^\nu f(t) + \frac{\nu}{\Gamma(\nu+1)}\int_{\sigma(t)}^{\sigma(b)}(s+\nu h-\sigma(t))_h^{(\nu-1)}f(s)\Delta s \, .
\end{gathered}
\end{equation}
\begin{theorem}\label{thm2}
Let $f\in\mathcal{F}_\mathbb{T}$ and $\nu\geq0$. For all $t\in\mathbb{T}^\kappa$ we have
\begin{equation}
\label{naosei1}
{_a}\Delta_{h}^{-\nu} f^{\Delta}(t+\nu h)=(_a\Delta_h^{-\nu}f(t+\nu h))^{\Delta} -\frac{\nu}{\Gamma(\nu + 1)}(t+\nu h-a)_h^{(\nu-1)}f(a) \, .
\end{equation}
\end{theorem}
To prove Theorem~\ref{thm2} we make use of a technical lemma:
\begin{lemma}
\label{lemma:tl}
Let $t\in\mathbb{T}^\kappa$. The following equality holds for all $s\in\mathbb{T}^\kappa$:
\begin{multline}
\label{proddiff}
\Delta_{s,h}\left((t+\nu h-s)_h^{(\nu-1)}f(s)\right)\\
=(t+\nu h-\sigma(s))_h^{(\nu-1)}f^{\Delta}(s) -(\nu-1)(t+\nu h-\sigma(s))_h^{(\nu-2)}f(s) \, .
\end{multline}
\end{lemma}
\begin{proof}
Direct calculations give the intended result:
\begin{equation*}
\begin{split}
\Delta&_{s,h} \left((t+\nu h-s)_h^{(\nu-1)}f(s)\right)\\
&=\Delta_{s,h}\left((t+\nu h-s)_h^{(\nu-1)}\right)f(s)+\left(t+\nu h -\sigma(s)\right)_h^{(\nu-1)}f^{\Delta}(s)\\
&=\frac{f(s)}{h}\left[h^{\nu-1}\frac{\Gamma\left(\frac{t+\nu h-\sigma(s)}{h}+1\right)}{\Gamma\left(\frac{t+\nu h-\sigma(s)}{h}+1-(\nu-1)\right)}-h^{\nu-1}\frac{\Gamma\left(\frac{t+\nu h-s}{h}+1\right)}{\Gamma\left(\frac{t+\nu h-s}{h}+1-(\nu-1)\right)}\right]\\
&\qquad +\left(t+\nu h - \sigma(s)\right)_h^{(\nu-1)}f^{\Delta}(s)\\
&=f(s)\left[h^{\nu-2}\left[\frac{\Gamma(\frac{t+\nu h-s}{h})}{\Gamma(\frac{t-s}{h}+1)}-\frac{\Gamma(\frac{t+\nu h-s}{h}+1)}{\Gamma(\frac{t-s}{h}+2)}\right]\right]+\left(t+\nu h - \sigma(s)\right)_h^{(\nu-1)}f^{\Delta}(s)\\
&=f(s)h^{\nu-2}\frac{\Gamma(\frac{t+\nu h-s-h}{h}+1)}{\Gamma(\frac{t-s+\nu h-h}{h}+1-(\nu-2))}(-(\nu-1))+ \left(t+\nu h - \sigma(s)\right)_h^{(\nu-1)}f^{\Delta}(s)\\
&=-(\nu-1)(t+\nu h -\sigma(s))_h^{(\nu-2)}f(s)+\left(t+\nu h - \sigma(s)\right)_h^{(\nu-1)}f^{\Delta}(s) \, ,
\end{split}
\end{equation*}
where the first equality follows directly from \eqref{deltaderpartes2}.
\end{proof} \begin{remark} Given an arbitrary $t\in\mathbb{T}^\kappa$ it is easy to prove, in a similar way as in the proof of Lemma~\ref{lemma:tl}, the following equality analogous to \eqref{proddiff}: for all $s\in\mathbb{T}^\kappa$ \begin{multline} \label{eq:semlhante} \Delta_{s,h}\left((s+\nu h-\sigma(t))_h^{(\nu-1)}f(s))\right)\\ =(\nu-1)(s+\nu h-\sigma(t))_h^{(\nu-2)}f^\sigma(s) + (s+\nu h-\sigma(t))_h^{(\nu-1)}f^{\Delta}(s) \, . \end{multline} \end{remark} \begin{proof}(of Theorem~\ref{thm2}) From Lemma~\ref{lemma:tl} we obtain that \begin{equation} \label{naosei} \begin{split} {_a}\Delta_{h}^{-\nu} & f^{\Delta}(t+\nu h) = h^\nu f^\Delta(t)+\frac{\nu}{\Gamma(\nu+1)}\int_{a}^{t}(t+\nu h-\sigma(s))_h^{(\nu-1)}f^{\Delta}(s)\Delta s\\ &=h^\nu f^\Delta(t)+\frac{\nu}{\Gamma(\nu+1)}\left[(t+\nu h-s)_h^{(\nu-1)}f(s)\right]_{s=a}^{s=t}\\ &\qquad +\frac{\nu}{\Gamma(\nu+1)}\int_{a}^{\sigma(t)}(\nu-1)(t+\nu h-\sigma(s))_h^{(\nu-2)} f(s)\Delta s\\ &=-\frac{\nu(t+\nu h-a)_h^{(\nu-1)}}{\Gamma(\nu+1)}f(a) +h^{\nu}f^\Delta(t)+\nu h^{\nu-1}f(t)\\ &\qquad +\frac{\nu}{\Gamma(\nu+1)}\int_{a}^{t}(\nu-1)(t+\nu h-\sigma(s))_h^{(\nu-2)} f(s)\Delta s. 
\end{split}
\end{equation}
We now show that $(_a\Delta_h^{-\nu}f(t+\nu h))^{\Delta}$ equals \eqref{naosei}:
\begin{equation*}
\begin{split}
(_a\Delta_h^{-\nu} & f(t+\nu h))^\Delta = \frac{1}{h}\left[h^\nu f(\sigma(t))+\frac{\nu}{\Gamma(\nu+1)}\int_{a}^{\sigma(t)}(\sigma(t)+\nu h-\sigma(s))_h^{(\nu-1)} f(s)\Delta s\right.\\
&\qquad \left.-h^\nu f(t)-\frac{\nu}{\Gamma(\nu+1)}\int_{a}^{t}(t+\nu h-\sigma(s))_h^{(\nu-1)} f(s)\Delta s\right]\\
&=h^\nu f^\Delta(t)+\frac{\nu}{h\Gamma(\nu+1)}\left[\int_{a}^{t}(\sigma(t)+\nu h-\sigma(s))_h^{(\nu-1)} f(s)\Delta s\right.\\
&\qquad\left.-\int_{a}^{t}(t+\nu h-\sigma(s))_h^{(\nu-1)} f(s)\Delta s\right]+h^{\nu-1}\nu f(t)\\
&=h^\nu f^\Delta(t)+\frac{\nu}{\Gamma(\nu+1)}\int_{a}^{t}\Delta_{t,h}\left((t+\nu h -\sigma(s))_h^{(\nu-1)} \right)f(s)\Delta s+h^{\nu-1}\nu f(t)\\
&=h^\nu f^\Delta(t)+\frac{\nu}{\Gamma(\nu+1)}\int_{a}^{t}(\nu-1)(t+\nu h-\sigma(s))_h^{(\nu-2)} f(s)\Delta s+\nu h^{\nu-1}f(t) \, .
\end{split}
\end{equation*}
\end{proof}
We now state the counterpart of Theorem~\ref{thm2} for the right fractional $h$-sum:
\begin{theorem}
\label{thm3}
Let $f\in\mathcal{F}_\mathbb{T}$ and $\nu\geq 0$. For all $t\in\mathbb{T}^\kappa$ we have
\begin{equation}
\label{naosei12}
{_h}\Delta_{\rho(b)}^{-\nu} f^{\Delta}(t-\nu h)=\frac{\nu}{\Gamma(\nu+1)}(b+\nu h-\sigma(t))_h^{(\nu-1)}f(b)+(_h\Delta_b^{-\nu}f(t-\nu h))^{\Delta} \, .
\end{equation}
\end{theorem}
\begin{proof}
From \eqref{eq:semlhante} and integration by parts (item 2 of Lemma~\ref{integracao:partes}) we obtain
\begin{equation}
\label{naosei99}
\begin{split}
{_h}\Delta_{\rho(b)}^{-\nu} & f^{\Delta}(t-\nu h) =\frac{\nu(b+\nu h-\sigma(t))_h^{(\nu-1)}}{\Gamma(\nu+1)}f(b) + h^\nu f^\Delta(t) -\nu h^{\nu-1}f(\sigma(t))\\
&\qquad -\frac{\nu}{\Gamma(\nu+1)}\int_{\sigma(t)}^{b}(\nu-1)(s+\nu h-\sigma(t))_h^{(\nu-2)} f^\sigma(s)\Delta s.
\end{split}
\end{equation}
We show that $(_h\Delta_b^{-\nu}f(t-\nu h))^{\Delta}$ equals \eqref{naosei99}:
\begin{equation*}
\begin{split}
(_h&\Delta_b^{-\nu} f(t-\nu h))^{\Delta}\\
&=h^{\nu}f^\Delta(t)+\frac{\nu}{h\Gamma(\nu+1)}\left[\int_{\sigma^2(t)}^{\sigma(b)}(s+\nu h-\sigma^2(t))_h^{(\nu-1)} f(s)\Delta s\right.\\
&\qquad \left.-\int_{\sigma^2(t)}^{\sigma(b)}(s+\nu h-\sigma(t))_h^{(\nu-1)} f(s)\Delta s\right]-\nu h^{\nu-1} f(\sigma(t))\\
&=h^{\nu}f^\Delta(t)+\frac{\nu}{\Gamma(\nu+1)}\int_{\sigma^2(t)}^{\sigma(b)}\Delta_{t,h}\left((s+\nu h-\sigma(t))_h^{(\nu-1)}\right) f(s)\Delta s-\nu h^{\nu-1} f(\sigma(t))\\
&=h^{\nu}f^\Delta(t)-\frac{\nu}{\Gamma(\nu+1)}\int_{\sigma^2(t)}^{\sigma(b)}(\nu-1)(s+\nu h-\sigma^2(t))_h^{(\nu-2)} f(s)\Delta s-\nu h^{\nu-1} f(\sigma(t))\\
&=h^{\nu}f^\Delta(t)-\frac{\nu}{\Gamma(\nu+1)}\int_{\sigma(t)}^{b}(\nu-1)(s+\nu h-\sigma(t))_h^{(\nu-2)} f(s)\Delta s-\nu h^{\nu-1} f(\sigma(t)).
\end{split}
\end{equation*}
\end{proof}
\begin{definition}
\label{def1}
Let $0<\alpha\leq 1$ and set $\gamma := 1-\alpha$. The \emph{left fractional difference} $_a\Delta_h^\alpha f(t)$ and the \emph{right fractional difference} $_h\Delta_b^\alpha f(t)$ of order $\alpha$ of a function $f\in\mathcal{F}_\mathbb{T}$ are defined as
\begin{equation*}
_a\Delta_h^\alpha f(t) := (_a\Delta_h^{-\gamma}f(t+\gamma h))^{\Delta}\ \text{ and } \ _h\Delta_b^\alpha f(t):=-(_h\Delta_b^{-\gamma}f(t-\gamma h))^{\Delta}
\end{equation*}
for all $t\in\mathbb{T}^\kappa$.
\end{definition}
\section{Main Results}
\label{sec1}
Our aim is to introduce the $h$-fractional problem of the calculus of variations and to prove corresponding necessary optimality conditions. In order to obtain an Euler--Lagrange type equation (\textrm{cf.} Theorem~\ref{thm0}) we first prove a fractional formula of $h$-summation by parts.
\subsection{Fractional $h$-summation by parts}
A big challenge was to discover a fractional $h$-summation by parts formula within the time scale setting.
Indeed, it was not obvious what form such a formula should take; we eventually found it with the aid of the following lemma.
\begin{lemma}
\label{lem1}
Let $f$ and $k$ be two functions defined on $\mathbb{T}^\kappa$ and $\mathbb{T}^{\kappa^2}$, respectively, and $g$ a function defined on $\mathbb{T}^\kappa\times\mathbb{T}^{\kappa^2}$. The following equality holds:
\begin{equation*}
\int_{a}^{b}f(t)\left[\int_{a}^{t}g(t,s)k(s)\Delta s\right]\Delta t=\int_{a}^{\rho(b)}k(t)\left[\int_{\sigma(t)}^{b}g(s,t)f(s)\Delta s\right]\Delta t \, .
\end{equation*}
\end{lemma}
\begin{proof}
Consider the matrices $R = \left[ f(a+h), f(a+2h), \cdots, f(b-h) \right]$,
\begin{equation*}
C_1 = \left[
\begin{array}{c}
g(a+h,a)k(a) \\
g(a+2h,a)k(a)+g(a+2h,a+h)k(a+h) \\
\vdots \\
g(b-h,a)k(a)+g(b-h,a+h)k(a+h)+\cdots+ g(b-h,b-2h)k(b-2h)
\end{array}
\right]
\end{equation*}
\begin{gather*}
C_2 = \left[
\begin{array}{c}
g(a+h,a) \\
g(a+2h,a) \\
\vdots \\
g(b-h,a)
\end{array}
\right], \ \
C_3 = \left[
\begin{array}{c}
0 \\
g(a+2h,a+h) \\
\vdots \\
g(b-h,a+h)
\end{array}
\right], \ \
C_4 = \left[
\begin{array}{c}
0 \\
0 \\
\vdots \\
g(b-h,b-2h)
\end{array}
\right] .
\end{gather*}
Direct calculations show that
\begin{equation*}
\begin{split}
\int_{a}^{b}&f(t)\left[\int_{a}^{t}g(t,s)k(s)\Delta s\right]\Delta t =h^2\sum_{i=a/h}^{b/h-1} f(ih)\sum_{j=a/h}^{i-1}g(ih,jh)k(jh) = h^2 R \cdot C_1\\
&=h^2 R \cdot \left[k(a) C_2 + k(a+h)C_3 +\cdots +k(b-2h) C_4 \right]\\
&=h^2\left[k(a)\sum_{j=a/h+1}^{b/h-1}g(jh,a)f(jh)+k(a+h)\sum_{j=a/h+2}^{b/h-1}g(jh,a+h)f(jh)\right.\\
&\left.\qquad \qquad +\cdots+k(b-2h)\sum_{j=b/h-1}^{b/h-1}g(jh,b-2h)f(jh)\right]\\
&=\sum_{i=a/h}^{b/h-2}k(ih)h\sum_{j=\sigma(ih)/h}^{b/h-1}g(jh,ih)f(jh) h =\int_a^{\rho(b)}k(t)\left[\int_{\sigma(t)}^b g(s,t)f(s)\Delta s\right]\Delta t.
\end{split}
\end{equation*}
\end{proof}
\begin{theorem}[fractional $h$-summation by parts]\label{teor1}
Let $f$ and $g$ be real-valued functions defined on $\mathbb{T}^\kappa$ and $\mathbb{T}$, respectively.
Fix $0<\alpha\leq 1$ and put $\gamma := 1-\alpha$. Then, \begin{multline} \label{delf:sumPart} \int_{a}^{b}f(t)_a\Delta_h^\alpha g(t)\Delta t=h^\gamma f(\rho(b))g(b)-h^\gamma f(a)g(a)+\int_{a}^{\rho(b)}{_h\Delta_{\rho(b)}^\alpha f(t)g^\sigma(t)}\Delta t\\ +\frac{\gamma}{\Gamma(\gamma+1)}g(a)\left(\int_{a}^{b}(t+\gamma h-a)_h^{(\gamma-1)}f(t)\Delta t -\int_{\sigma(a)}^{b}(t+\gamma h-\sigma(a))_h^{(\gamma-1)}f(t)\Delta t\right). \end{multline} \end{theorem} \begin{proof} By \eqref{naosei1} we can write \begin{equation} \label{rui0} \begin{split} \int_{a}^{b} &f(t)_a\Delta_h^\alpha g(t)\Delta t =\int_{a}^{b}f(t)(_a\Delta_h^{-\gamma} g(t+\gamma h))^{\Delta}\Delta t\\ &=\int_{a}^{b}f(t)\left[_a\Delta_h^{-\gamma} g^{\Delta}(t+\gamma h)+\frac{\gamma}{\Gamma(\gamma+1)}(t+\gamma h-a)_h^{(\gamma-1)}g(a)\right]\Delta t\\ &=\int_{a}^{b}f(t)_a\Delta_h^{-\gamma}g^{\Delta}(t+\gamma h)\Delta t +\int_{a}^{b}\frac{\gamma}{\Gamma(\gamma+1)}(t+\gamma h-a)_h^{(\gamma-1)}f(t)g(a)\Delta t. \end{split} \end{equation} Using \eqref{seila1} we get \begin{equation*} \begin{split} \int_{a}^{b} &f(t)_a\Delta_h^{-\gamma} g^{\Delta}(t+\gamma h) \Delta t\\ &=\int_{a}^{b}f(t)\left[h^\gamma g^{\Delta}(t) + \frac{\gamma}{\Gamma(\gamma+1)}\int_{a}^{t}(t+\gamma h -\sigma(s))_h^{(\gamma-1)} g^{\Delta}(s)\Delta s\right]\Delta t\\ &=h^\gamma\int_{a}^{b}f(t)g^{\Delta}(t)\Delta t+\frac{\gamma}{\Gamma(\gamma+1)}\int_{a}^{\rho(b)} g^{\Delta}(t)\int_{\sigma(t)}^{b}(s+\gamma h-\sigma(t))_h^{(\gamma-1)}f(s)\Delta s \Delta t\\ &=h^\gamma f(\rho(b))[g(b)-g(\rho(b))]+\int_{a}^{\rho(b)} g^{\Delta}(t)_h\Delta_{\rho(b)}^{-\gamma} f(t-\gamma h)\Delta t, \end{split} \end{equation*} where the third equality follows by Lemma~\ref{lem1}. 
We now develop the right-hand side of the last equality as follows:
\begin{equation*}
\begin{split}
h^\gamma & f(\rho(b))[g(b)-g(\rho(b))]+\int_{a}^{\rho(b)} g^{\Delta}(t)_h\Delta_{\rho(b)}^{-\gamma} f(t-\gamma h)\Delta t\\
&=h^\gamma f(\rho(b))[g(b)-g(\rho(b))] +\left[g(t)_h\Delta_{\rho(b)}^{-\gamma} f(t-\gamma h)\right]_{t=a}^{t=\rho(b)}\\
&\quad -\int_{a}^{\rho(b)} g^\sigma(t)(_h\Delta_{\rho(b)}^{-\gamma} f(t-\gamma h))^{\Delta}\Delta t\\
&=h^\gamma f(\rho(b))g(b)-h^\gamma f(a)g(a)\\
&\quad -\frac{\gamma}{\Gamma(\gamma+1)}g(a)\int_{\sigma(a)}^{b}(s+\gamma h-\sigma(a))_h^{(\gamma-1)}f(s)\Delta s +\int_{a}^{\rho(b)}{\left(_h\Delta_{\rho(b)}^\alpha f(t)\right)g^\sigma(t)}\Delta t,
\end{split}
\end{equation*}
where the first equality follows from Lemma~\ref{integracao:partes}. Putting this into (\ref{rui0}) we get \eqref{delf:sumPart}.
\end{proof}
\subsection{Necessary optimality conditions}
We begin by fixing two arbitrary real numbers $\alpha$ and $\beta$ such that $\alpha,\beta\in(0,1]$. Further, we put $\gamma := 1-\alpha$ and $\nu :=1-\beta$. Let a function $L(t,u,v,w):\mathbb{T}^\kappa\times\mathbb{R}\times\mathbb{R}\times\mathbb{R}\rightarrow\mathbb{R}$ be given. We consider the problem of minimizing (or maximizing) a functional $\mathcal{L}:\mathcal{F}_\mathbb{T}\rightarrow\mathbb{R}$ subject to given boundary conditions:
\begin{equation}
\label{naosei7}
\mathcal{L}(y(\cdot))=\int_{a}^{b}L(t,y^{\sigma}(t),{_a}\Delta_h^\alpha y(t),{_h}\Delta_b^\beta y(t))\Delta t \longrightarrow \min, \ y(a)=A, \ y(b)=B \, .
\end{equation}
Our main aim is to derive necessary optimality conditions for problem \eqref{naosei7}.
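Before turning to optimality conditions, it may help to see the fractional operators in executable form. The sketch below (our own, with assumed helper names; it is not part of the paper's development) implements the $h$-factorial, the left fractional $h$-sum of Definition~\ref{def0}, and the left fractional difference of Definition~\ref{def1}, and checks that for $\alpha=1$ (\textrm{i.e.}, $\gamma=0$) the latter reduces to the forward $h$-difference:

```python
from math import gamma

def hfact(x, y, h):
    """h-factorial x_h^{(y)} = h^y * Gamma(x/h + 1) / Gamma(x/h + 1 - y);
    division at a pole of the denominator yields zero (paper's convention)."""
    top, bot = x / h + 1, x / h + 1 - y
    if bot <= 0 and abs(bot - round(bot)) < 1e-12:
        return 0.0
    return h**y * gamma(top) / gamma(bot)

def left_frac_sum(f, a, nu, h, t):
    """Left fractional h-sum of order nu >= 0 (Definition def0),
    evaluated at the shifted point t + nu*h with t in T."""
    if nu == 0:                        # _aDelta_h^0 f = f by convention
        return f(t)
    n = int(round((t - a) / h)) + 1    # k runs over a/h, ..., t/h
    return sum(hfact(t + nu * h - (a + j * h) - h, nu - 1, h) * f(a + j * h) * h
               for j in range(n)) / gamma(nu)

def left_frac_diff(f, a, alpha, h, t):
    """Left fractional difference of order alpha in (0,1] (Definition def1):
    the forward h-difference of tau -> _aDelta_h^{-gamma} f(tau + gamma*h)."""
    g = lambda tau: left_frac_sum(f, a, 1 - alpha, h, tau)
    return (g(t + h) - g(t)) / h

# Sanity check: alpha = 1 gives the forward h-difference of f.
f, a, h = (lambda t: t**2), 0.0, 0.5
print(left_frac_diff(f, a, 1, h, 2.0))   # (2.5^2 - 2^2)/0.5 = 4.5
```

Evaluating the functional $\mathcal{L}$ of \eqref{naosei7} then amounts to an $h$-sum of the Lagrangian over $[a,\rho(b)]$ with these operators supplying the fractional arguments.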
\begin{definition}
For $f\in\mathcal{F}_\mathbb{T}$ we define the norm
$$\|f\|=\max_{t\in\mathbb{T}^\kappa}|f^\sigma(t)|+\max_{t\in\mathbb{T}^\kappa}|_a\Delta_h^\alpha f(t)|+\max_{t\in\mathbb{T}^\kappa}|_h\Delta_b^\beta f(t)|.$$
A function $\hat{y}\in\mathcal{F}_\mathbb{T}$ with $\hat{y}(a)=A$ and $\hat{y}(b)=B$ is called a local minimum for problem \eqref{naosei7} provided there exists $\delta>0$ such that $\mathcal{L}(\hat{y})\leq\mathcal{L}(y)$ for all $y\in\mathcal{F}_\mathbb{T}$ with $y(a)=A$ and $y(b)=B$ and $\|y-\hat{y}\|<\delta$.
\end{definition}
\begin{definition}
A function $\eta\in\mathcal{F}_\mathbb{T}$ is called an admissible variation provided $\eta \neq 0$ and $\eta(a)=\eta(b)=0$.
\end{definition}
From now on we assume that the second-order partial derivatives $L_{uu}$, $L_{uv}$, $L_{uw}$, $L_{vw}$, $L_{vv}$, and $L_{ww}$ exist and are continuous.
\subsubsection{First order optimality condition}
The next theorem gives a first-order necessary condition for problem \eqref{naosei7}, \textrm{i.e.}, an Euler--Lagrange type equation for the fractional $h$-difference setting.
\begin{theorem}[The $h$-fractional Euler--Lagrange equation for problem \eqref{naosei7}]
\label{thm0}
If $\hat{y}\in\mathcal{F}_\mathbb{T}$ is a local minimum for problem \eqref{naosei7}, then the equality
\begin{equation}
\label{EL}
L_u[\hat{y}](t) +{_h}\Delta_{\rho(b)}^\alpha L_v[\hat{y}](t)+{_a}\Delta_h^\beta L_w[\hat{y}](t)=0
\end{equation}
holds for all $t\in\mathbb{T}^{\kappa^2}$ with operator $[\cdot]$ defined by $[y](s) =(s,y^{\sigma}(s),{_a}\Delta_h^\alpha y(s),{_h}\Delta_b^\beta y(s))$.
\end{theorem}
\begin{proof}
Suppose that $\hat{y}(\cdot)$ is a local minimum of $\mathcal{L}[\cdot]$. Let $\eta(\cdot)$ be an arbitrarily fixed admissible variation and define a function $\Phi:\left(-\frac{\delta}{\|\eta(\cdot)\|},\frac{\delta}{\|\eta(\cdot)\|}\right)\rightarrow\mathbb{R}$ by
\begin{equation}
\label{fi}
\Phi(\varepsilon)=\mathcal{L}[\hat{y}(\cdot)+\varepsilon\eta(\cdot)].
\end{equation} This function has a minimum at $\varepsilon=0$, so we must have $\Phi'(0)=0$, i.e., $$\int_{a}^{b}\left[L_u[\hat{y}](t)\eta^\sigma(t) +L_v[\hat{y}](t){_a}\Delta_h^\alpha\eta(t) +L_w[\hat{y}](t){_h}\Delta_b^\beta\eta(t)\right]\Delta t=0,$$ which we may write equivalently as \begin{multline} \label{rui3} h L_u[\hat{y}](t)\eta^\sigma(t)|_{t=\rho(b)}+\int_{a}^{\rho(b)}L_u[\hat{y}](t)\eta^\sigma(t)\Delta t +\int_{a}^{b}L_v[\hat{y}](t){_a}\Delta_h^\alpha\eta(t)\Delta t\\+\int_{a}^{b}L_w[\hat{y}](t){_h}\Delta_b^\beta\eta(t)\Delta t=0. \end{multline} Using Theorem~\ref{teor1} and the fact that $\eta(a)=\eta(b)=0$, we get \begin{equation} \label{naosei5} \int_{a}^{b}L_v[\hat{y}](t){_a}\Delta_h^\alpha\eta(t)\Delta t=\int_{a}^{\rho(b)}\left({_h}\Delta_{\rho(b)}^\alpha \left(L_v[\hat{y}]\right)(t)\right)\eta^\sigma(t)\Delta t \end{equation} for the third term in \eqref{rui3}. Using \eqref{naosei12} it follows that \begin{equation} \label{naosei4} \begin{split} \int_{a}^{b} & L_w[\hat{y}](t){_h}\Delta_b^\beta\eta(t)\Delta t\\=&-\int_{a}^{b}L_w[\hat{y}](t)({_h}\Delta_b^{-\nu}\eta(t-\nu h))^{\Delta}\Delta t\\ =&-\int_{a}^{b}L_w[\hat{y}](t)\left[{_h}\Delta_{\rho(b)}^{-\nu} \eta^{\Delta}(t-\nu h)-\frac{\nu}{\Gamma(\nu+1)}(b+\nu h-\sigma(t))_h^{(\nu-1)}\eta(b)\right]\Delta t\\ =&-\int_{a}^{b}L_w[\hat{y}](t){_h}\Delta_{\rho(b)}^{-\nu} \eta^{\Delta}(t-\nu h)\Delta t +\frac{\nu\eta(b)}{\Gamma(\nu+1)}\int_{a}^{b}(b+\nu h-\sigma(t))_h^{(\nu-1)}L_w[\hat{y}](t)\Delta t . 
\end{split} \end{equation} We now use Lemma~\ref{lem1} to get \begin{equation} \label{naosei2} \begin{split} \int_{a}^{b} &L_w[\hat{y}](t){_h}\Delta_{\rho(b)}^{-\nu} \eta^{\Delta}(t-\nu h)\Delta t\\ &=\int_{a}^{b}L_w[\hat{y}](t)\left[h^\nu\eta^{\Delta}(t)+\frac{\nu}{\Gamma(\nu+1)}\int_{\sigma(t)}^{b}(s+\nu h-\sigma(t))_h^{(\nu-1)} \eta^{\Delta}(s)\Delta s\right]\Delta t\\ &=\int_{a}^{b}h^\nu L_w[\hat{y}](t)\eta^{\Delta}(t)\Delta t\\ &\qquad +\frac{\nu}{\Gamma(\nu+1)}\int_{a}^{\rho(b)}\left[L_w[\hat{y}](t)\int_{\sigma(t)}^{b}(s+\nu h-\sigma(t))_h^{(\nu-1)} \eta^{\Delta}(s)\Delta s\right]\Delta t\\ &=\int_{a}^{b}h^\nu L_w[\hat{y}](t)\eta^{\Delta}(t)\Delta t\\ &\qquad +\frac{\nu}{\Gamma(\nu+1)}\int_{a}^{b}\left[\eta^{\Delta}(t)\int_{a}^{t}(t+\nu h -\sigma(s))_h^{(\nu-1)}L_w[\hat{y}](s)\Delta s\right]\Delta t\\ &=\int_{a}^{b}\eta^{\Delta}(t){_a}\Delta^{-\nu}_h \left(L_w[\hat{y}]\right)(t+\nu h)\Delta t. \end{split} \end{equation} We apply again the time scale integration by parts formula (Lemma~\ref{integracao:partes}), this time to \eqref{naosei2}, to obtain, \begin{equation} \label{naosei3} \begin{split} \int_{a}^{b} & \eta^{\Delta}(t){_a}\Delta^{-\nu}_h \left(L_w[\hat{y}]\right)(t+\nu h)\Delta t\\ &=\int_{a}^{\rho(b)}\eta^{\Delta}(t){_a}\Delta^{-\nu}_h \left(L_w[\hat{y}]\right)(t+\nu h)\Delta t\\ &\qquad +(\eta(b)-\eta(\rho(b))){_a}\Delta^{-\nu}_h \left(L_w[\hat{y}]\right)(t+\nu h)|_{t=\rho(b)}\\ &=\left[\eta(t){_a}\Delta^{-\nu}_h \left(L_w[\hat{y}]\right)(t+\nu h)\right]_{t=a}^{t=\rho(b)} -\int_{a}^{\rho(b)}\eta^\sigma(t)({_a}\Delta^{-\nu}_h \left(L_w[\hat{y}]\right)(t+\nu h))^\Delta \Delta t\\ &\qquad +\eta(b){_a}\Delta^{-\nu}_h \left(L_w[\hat{y}]\right)(t+\nu h)|_{t=\rho(b)}-\eta(\rho(b)){_a}\Delta^{-\nu}_h \left(L_w[\hat{y}]\right)(t+\nu h)|_{t=\rho(b)}\\ &=\eta(b){_a}\Delta^{-\nu}_h \left(L_w[\hat{y}]\right)(t+\nu h)|_{t=\rho(b)}-\eta(a){_a}\Delta^{-\nu}_h \left(L_w[\hat{y}]\right)(t+\nu h)|_{t=a}\\ &\qquad -\int_{a}^{\rho(b)}\eta^\sigma(t){_a}\Delta^{\beta}_h 
\left(L_w[\hat{y}]\right)(t)\Delta t. \end{split} \end{equation} Since $\eta(a)=\eta(b)=0$ we obtain, from \eqref{naosei2} and \eqref{naosei3}, that $$\int_{a}^{b}L_w[\hat{y}](t){_h}\Delta_{\rho(b)}^{-\nu} \eta^{\Delta}(t-\nu h)\Delta t =-\int_{a}^{\rho(b)}\eta^\sigma(t){_a}\Delta^{\beta}_h \left(L_w[\hat{y}]\right)(t)\Delta t\, ,$$ and after inserting in \eqref{naosei4}, that \begin{equation} \label{naosei6} \int_{a}^{b}L_w[\hat{y}](t){_h}\Delta_b^\beta\eta(t)\Delta t =\int_{a}^{\rho(b)}\eta^\sigma(t){_a}\Delta^{\beta}_h \left(L_w[\hat{y}]\right)(t) \Delta t. \end{equation} By \eqref{naosei5} and \eqref{naosei6} we may write \eqref{rui3} as $$\int_{a}^{\rho(b)}\left[L_u[\hat{y}](t) +{_h}\Delta_{\rho(b)}^\alpha \left(L_v[\hat{y}]\right)(t)+{_a}\Delta_h^\beta \left(L_w[\hat{y}]\right)(t)\right]\eta^\sigma(t) \Delta t =0\, .$$ Since the values of $\eta^\sigma(t)$ are arbitrary for $t\in\mathbb{T}^{\kappa^2}$, the Euler-Lagrange equation \eqref{EL} holds along $\hat{y}$. \end{proof} The next result is a direct corollary of Theorem~\ref{thm0}. \begin{corollary}[The $h$-Euler-Lagrange equation -- \textrm{cf.}, \textrm{e.g.}, \cite{CD:Bohner:2004,RD}] \label{ELCor} Let $\mathbb{T}$ be the time scale $h \mathbb{Z}$, $h > 0$, with the forward jump operator $\sigma$ and the delta derivative $\Delta$. Assume $a, b \in \mathbb{T}$, $a < b$. If $\hat{y}$ is a solution to the problem \begin{equation*} \mathcal{L}(y(\cdot))=\int_{a}^{b}L(t,y^{\sigma}(t),y^\Delta(t))\Delta t \longrightarrow \min, \ y(a)=A, \ y(b)=B\, , \end{equation*} then the equality $L_u(t,\hat{y}^{\sigma}(t),\hat{y}^\Delta(t))-\left(L_v(t,\hat{y}^{\sigma}(t),\hat{y}^\Delta(t))\right)^\Delta =0$ holds for all $t\in\mathbb{T}^{\kappa^2}$. \end{corollary} \begin{proof} Choose $\alpha=1$ and an $L$ that does not depend on $w$ in Theorem~\ref{thm0}.
\end{proof} \begin{remark} If we take $h=1$ in Corollary~\ref{ELCor} we have that $$L_u(t,\hat{y}^{\sigma}(t),\Delta\hat{y}(t))-\Delta L_v(t,\hat{y}^{\sigma}(t),\Delta\hat{y}(t)) =0$$ holds for all $t\in\mathbb{T}^{\kappa^2}$. This equation is usually called \emph{the discrete Euler-Lagrange equation}, and can be found, \textrm{e.g.}, in \cite[Chap.~8]{book:DCV}. \end{remark} \subsubsection{Natural boundary conditions} If the initial condition $y(a)=A$ is not present in problem \eqref{naosei7} (\textrm{i.e.}, $y(a)$ is free), besides the $h$-fractional Euler-Lagrange equation \eqref{EL} the following supplementary condition must be fulfilled: \begin{multline}\label{rui1} -h^\gamma L_v[\hat{y}](a)+\frac{\gamma}{\Gamma(\gamma+1)}\left( \int_{a}^{b}(t+\gamma h-a)_h^{(\gamma-1)}L_v[\hat{y}](t)\Delta t\right.\\ \left.-\int_{\sigma(a)}^{b}(t+\gamma h-\sigma(a))_h^{(\gamma-1)}L_v[\hat{y}](t)\Delta t\right)+ L_w[\hat{y}](a)=0. \end{multline} Similarly, if $y(b)=B$ is not present in \eqref{naosei7} ($y(b)$ is free), the extra condition \begin{multline}\label{rui2} h L_u[\hat{y}](\rho(b))+h^\gamma L_v[\hat{y}](\rho(b))-h^\nu L_w[\hat{y}](\rho(b))\\ +\frac{\nu}{\Gamma(\nu+1)}\left(\int_{a}^{b}(b+\nu h-\sigma(t))_h^{(\nu-1)}L_w[\hat{y}](t)\Delta t \right.\\ \left. -\int_{a}^{\rho(b)}(\rho(b)+\nu h-\sigma(t))_h^{(\nu-1)}L_w[\hat{y}](t)\Delta t\right)=0 \end{multline} is added to Theorem~\ref{thm0}. We leave the proof of the \emph{natural boundary conditions} \eqref{rui1} and \eqref{rui2} to the reader. We just note here that the first term in \eqref{rui2} arises from the first term of the left hand side of \eqref{rui3}. \subsubsection{Second order optimality condition} We now obtain a second order necessary condition for problem \eqref{naosei7}, \textrm{i.e.}, we prove a Legendre optimality type condition for the fractional $h$-difference setting. 
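The first order conditions above lend themselves to a quick computational check. The following sketch is our illustration (not the paper's \textsf{Maple}/\textsf{Maxima} implementation) of the discrete Euler--Lagrange equation from the remark following Corollary~\ref{ELCor}: for the classical Lagrangian $L(t,u,v)=\frac{1}{2}v^2-u$ one has $L_u=-1$ and $L_v=v$, so at $h=1$ the equation reduces to $\Delta^2\hat{y}(t)=-1$, which the quadratic $\hat{y}(t)=\frac{1}{2}t(N-t)$ satisfies exactly on $\{0,1,\ldots,N\}$.

```python
# Illustration (not from the paper): the discrete Euler-Lagrange equation
# L_u - Delta(L_v) = 0 at h = 1 for L(t, u, v) = v**2/2 - u reduces to
# Delta^2 y(t) = -1, since L_u = -1 and L_v = v.
N = 10
y = [t * (N - t) / 2 for t in range(N + 1)]      # candidate extremal

def delta(f, t):                                 # forward difference, h = 1
    return f[t + 1] - f[t]

# residual of the discrete Euler-Lagrange equation at interior points
residuals = [-1 - (delta(y, t + 1) - delta(y, t)) for t in range(N - 1)]
print(max(abs(r) for r in residuals))            # 0.0: the equation holds exactly
```

No discretization error is involved here: the Lagrangian is quadratic, so the second difference of the candidate is constant and the equation is satisfied exactly.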
\begin{theorem}[The $h$-fractional Legendre necessary condition] \label{thm1} If $\hat{y}\in\mathcal{F}_\mathbb{T}$ is a local minimum for problem \eqref{naosei7}, then the inequality \begin{equation} \label{eq:LC} \begin{split} h^2 &L_{uu}[\hat{y}](t)+2h^{\gamma+1}L_{uv}[\hat{y}](t)+2h^{\nu+1}(\nu-1)L_{uw}[\hat{y}](t) +h^{2\gamma}(\gamma -1)^2 L_{vv}[\hat{y}](\sigma(t))\\ &+2h^{\nu+\gamma}(\gamma-1)L_{vw}[\hat{y}](\sigma(t))+2h^{\nu+\gamma}(\nu-1)L_{vw}[\hat{y}](t)+h^{2\nu}(\nu-1)^2 L_{ww}[\hat{y}](t)\\ &+h^{2\nu}L_{ww}[\hat{y}](\sigma(t)) +\int_{a}^{t}h^3L_{ww}[\hat{y}](s)\left(\frac{\nu(1-\nu)}{\Gamma(\nu+1)}(t+\nu h - \sigma(s))_h^{(\nu-2)}\right)^2\Delta s\\ &+h^{\gamma}L_{vv}[\hat{y}](t) +\int_{\sigma(\sigma(t))}^{b}h^3L_{vv}[\hat{y}](s)\left(\frac{\gamma(\gamma-1)}{\Gamma(\gamma+1)}(s+\gamma h -\sigma(\sigma(t)))_h^{(\gamma-2)}\right)^2\Delta s \geq 0 \end{split} \end{equation} holds for all $t\in\mathbb{T}^{\kappa^2}$, where $[\hat{y}](t)=(t,\hat{y}^{\sigma}(t),{_a}\Delta_h^\alpha \hat{y}(t),{_h}\Delta_b^\beta\hat{y}(t))$. \end{theorem} \begin{proof} By the hypothesis of the theorem, and letting $\Phi$ be as in \eqref{fi}, we have as necessary optimality condition that $\Phi''(0)\geq 0$ for an arbitrary admissible variation $\eta(\cdot)$. Inequality $\Phi''(0)\geq 0$ is equivalent to \begin{multline} \label{des1} \int_{a}^{b}\left[L_{uu}[\hat{y}](t)(\eta^\sigma(t))^2 +2L_{uv}[\hat{y}](t)\eta^\sigma(t){_a}\Delta_h^\alpha\eta(t) +2L_{uw}[\hat{y}](t)\eta^\sigma(t){_h}\Delta_b^\beta\eta(t)\right.\\ \left. +L_{vv}[\hat{y}](t)({_a}\Delta_h^\alpha\eta(t))^2 +2L_{vw}[\hat{y}](t){_a}\Delta_h^\alpha\eta(t){_h}\Delta_b^\beta\eta(t) +L_{ww}[\hat{y}](t)({_h}\Delta_b^\beta\eta(t))^2\right]\Delta t\geq 0.
\end{multline} Let $\tau\in\mathbb{T}^{\kappa^2}$ be arbitrary, and choose $\eta:\mathbb{T}\rightarrow\mathbb{R}$ given by $\eta(t) = \left\{ \begin{array}{ll} h & \mbox{if $t=\sigma(\tau)$};\\ 0 & \mbox{otherwise}.\end{array} \right.$ It follows that $\eta(a)=\eta(b)=0$, \textrm{i.e.}, $\eta$ is an admissible variation. Using \eqref{naosei1} we get \begin{equation*} \begin{split} \int_{a}^{b}&\left[L_{uu}[\hat{y}](t)(\eta^\sigma(t))^2 +2L_{uv}[\hat{y}](t)\eta^\sigma(t){_a}\Delta_h^\alpha\eta(t) +L_{vv}[\hat{y}](t)({_a}\Delta_h^\alpha\eta(t))^2\right]\Delta t\\ &=\int_{a}^{b}\Biggl[L_{uu}[\hat{y}](t)(\eta^\sigma(t))^2\\ &\qquad\quad +2L_{uv}[\hat{y}](t)\eta^\sigma(t)\left(h^\gamma \eta^{\Delta}(t)+ \frac{\gamma}{\Gamma(\gamma+1)}\int_{a}^{t}(t+\gamma h -\sigma(s))_h^{(\gamma-1)}\eta^{\Delta}(s)\Delta s\right)\\ &\qquad\quad +L_{vv}[\hat{y}](t)\left(h^\gamma \eta^{\Delta}(t) +\frac{\gamma}{\Gamma(\gamma+1)}\int_{a}^{t}(t+\gamma h -\sigma(s))_h^{(\gamma-1)}\eta^{\Delta}(s)\Delta s\right)^2\Biggr]\Delta t\\ &=h^3L_{uu}[\hat{y}](\tau)+2h^{\gamma+2}L_{uv}[\hat{y}](\tau)+h^{\gamma+1}L_{vv}[\hat{y}](\tau)\\ &\quad +\int_{\sigma(\tau)}^{b}L_{vv}[\hat{y}](t)\left(h^\gamma\eta^{\Delta}(t) +\frac{\gamma}{\Gamma(\gamma+1)}\int_{a}^{t}(t+\gamma h-\sigma(s))_h^{(\gamma-1)}\eta^{\Delta}(s)\Delta s\right)^2\Delta t. \end{split} \end{equation*} Observe that \begin{multline*} h^{2\gamma+1}(\gamma -1)^2 L_{vv}[\hat{y}](\sigma(\tau))\\ +\int_{\sigma^2(\tau)}^{b}L_{vv}[\hat{y}](t)\left(\frac{\gamma}{\Gamma(\gamma+1)}\int_{a}^{t}(t+\gamma h-\sigma(s))_h^{(\gamma-1)}\eta^{\Delta}(s)\Delta s\right)^2\Delta t\\ =\int_{\sigma(\tau)}^{b}L_{vv}[\hat{y}](t)\left(h^\gamma \eta^\Delta(t)+\frac{\gamma}{\Gamma(\gamma+1)}\int_{a}^{t}(t+\gamma h-\sigma(s))_h^{(\gamma-1)}\eta^{\Delta}(s)\Delta s\right)^2\Delta t. \end{multline*} Let $t\in[\sigma^2(\tau),\rho(b)]\cap h\mathbb{Z}$. 
Since \begin{equation} \label{rui10} \begin{split} \frac{\gamma}{\Gamma(\gamma+1)}&\int_{a}^{t}(t+\gamma h -\sigma(s))_h^{(\gamma-1)}\eta^{\Delta}(s)\Delta s\\ &= \frac{\gamma}{\Gamma(\gamma+1)}\left[\int_{a}^{\sigma(\tau)}(t+\gamma h-\sigma(s))_h^{(\gamma-1)}\eta^{\Delta}(s)\Delta s\right.\\ &\qquad\qquad\qquad\qquad \left.+\int_{\sigma(\tau)}^{t}(t+\gamma h-\sigma(s))_h^{(\gamma-1)}\eta^{\Delta}(s)\Delta s\right]\\ &=h\frac{\gamma}{\Gamma(\gamma+1)}\left[(t+\gamma h-\sigma(\tau))_h^{(\gamma-1)}-(t+\gamma h-\sigma(\sigma(\tau)))_h^{(\gamma-1)}\right]\\ &=\frac{\gamma h^\gamma}{\Gamma(\gamma+1)}\left[ \frac{\left(\frac{t-\tau}{h}+\gamma-1\right)\Gamma\left(\frac{t-\tau}{h}+\gamma-1\right) -\left(\frac{t-\tau}{h}\right)\Gamma\left(\frac{t-\tau}{h}+\gamma-1\right)} {\left(\frac{t-\tau}{h}\right)\Gamma\left(\frac{t-\tau}{h}\right)}\right]\\ &=h^{2}\frac{\gamma(\gamma-1)}{\Gamma(\gamma+1)}(t+\gamma h -\sigma(\sigma(\tau)))_h^{(\gamma-2)}, \end{split} \end{equation} we conclude that \begin{multline*} \int_{\sigma^2(\tau)}^{b}L_{vv}[\hat{y}](t)\left(\frac{\gamma}{\Gamma(\gamma+1)}\int_{a}^{t}(t +\gamma h-\sigma(s))_h^{(\gamma-1)}\eta^{\Delta}(s)\Delta s\right)^2\Delta t\\ =\int_{\sigma^2(\tau)}^{b}L_{vv}[\hat{y}](t)\left(h^2\frac{\gamma(\gamma-1)}{\Gamma(\gamma+1)}(t +\gamma h-\sigma^2(\tau))_h^{(\gamma-2)}\right)^2\Delta t. \end{multline*} Note that we can write ${_t}\Delta_b^\beta\eta(t)=-{_h}\Delta_{\rho(b)}^{-\nu} \eta^\Delta(t-\nu h)$ because $\eta(b)=0$. It is not difficult to see that the following equality holds: \begin{equation*} \begin{split} \int_{a}^{b}2L_{uw}[\hat{y}](t)\eta^\sigma(t){_h}\Delta_b^\beta\eta(t)\Delta t &=-\int_{a}^{b}2L_{uw}[\hat{y}](t)\eta^\sigma(t){_h}\Delta_{\rho(b)}^{-\nu} \eta^\Delta(t-\nu h)\Delta t\\ &=2h^{2+\nu}L_{uw}[\hat{y}](\tau)(\nu-1) \, . 
\end{split} \end{equation*} Moreover, \begin{equation*} \begin{split} \int_{a}^{b} &2L_{vw}[\hat{y}](t){_a}\Delta_h^\alpha\eta(t){_h}\Delta_b^\beta\eta(t)\Delta t\\ &=-2\int_{a}^{b}L_{vw}[\hat{y}](t)\left\{\left(h^\gamma\eta^{\Delta}(t)+\frac{\gamma}{\Gamma(\gamma+1)} \cdot\int_{a}^{t}(t+\gamma h-\sigma(s))_h^{(\gamma-1)}\eta^{\Delta}(s)\Delta s\right)\right.\\ &\qquad\qquad \left.\cdot\left[h^\nu\eta^{\Delta}(t)+\frac{\nu}{\Gamma(\nu+1)}\int_{\sigma(t)}^{b}(s +\nu h-\sigma(t))_h^{(\nu-1)}\eta^{\Delta}(s)\Delta s\right]\right\}\Delta t\\ &=2h^{\gamma+\nu+1}(\nu-1)L_{vw}[\hat{y}](\tau)+2h^{\gamma+\nu+1}(\gamma-1)L_{vw}[\hat{y}](\sigma(\tau)). \end{split} \end{equation*} Finally, we have that \begin{equation*} \begin{split} &\int_{a}^{b} L_{ww}[\hat{y}](t)({_h}\Delta_b^\beta\eta(t))^2\Delta t\\ &=\int_{a}^{\sigma(\sigma(\tau))}L_{ww}[\hat{y}](t)\left[h^\nu\eta^{\Delta}(t)+\frac{\nu}{\Gamma(\nu+1)} \int_{\sigma(t)}^{b}(s+\nu h-\sigma(t))_h^{(\nu-1)}\eta^{\Delta}(s)\Delta s\right]^2\Delta t\\ &=\int_{a}^{\tau}L_{ww}[\hat{y}](t)\left[\frac{\nu}{\Gamma(\nu+1)}\int_{\sigma(t)}^{b}(s +\nu h-\sigma(t))_h^{(\nu-1)}\eta^{\Delta}(s)\Delta s\right]^2\Delta t\\ &\qquad +hL_{ww}[\hat{y}](\tau)(h^\nu-\nu h^\nu)^2+h^{2\nu+1}L_{ww}[\hat{y}](\sigma(\tau))\\ &=\int_{a}^{\tau}L_{ww}[\hat{y}](t)\left[h\frac{\nu}{\Gamma(\nu+1)}\left\{(\tau+\nu h -\sigma(t))_h^{(\nu-1)}-(\sigma(\tau)+\nu h-\sigma(t))_h^{(\nu-1)}\right\}\right]^2\\ &\qquad + hL_{ww}[\hat{y}](\tau)(h^\nu-\nu h^\nu)^2+h^{2\nu+1}L_{ww}[\hat{y}](\sigma(\tau)). \end{split} \end{equation*} Similarly as we did in \eqref{rui10}, we can prove that \begin{multline*} h\frac{\nu}{\Gamma(\nu+1)}\left\{(\tau+\nu h-\sigma(t))_h^{(\nu-1)}-(\sigma(\tau)+\nu h-\sigma(t))_h^{(\nu-1)}\right\}\\ =h^{2}\frac{\nu(1-\nu)}{\Gamma(\nu+1)}(\tau+\nu h-\sigma(t))_h^{(\nu-2)}. 
\end{multline*} Thus, we have that inequality \eqref{des1} is equivalent to \begin{multline} \label{des2} h\Biggl\{h^2L_{uu}[\hat{y}](t)+2h^{\gamma+1}L_{uv}[\hat{y}](t) +h^{\gamma}L_{vv}[\hat{y}](t)+L_{vv}[\hat{y}](\sigma(t))(\gamma h^\gamma-h^\gamma)^2\\ +\int_{\sigma(\sigma(t))}^{b}h^3L_{vv}[\hat{y}](s)\left(\frac{\gamma(\gamma-1)}{\Gamma(\gamma+1)}(s +\gamma h -\sigma(\sigma(t)))_h^{(\gamma-2)}\right)^2\Delta s\\ +2h^{\nu+1}L_{uw}[\hat{y}](t)(\nu-1)+2h^{\gamma+\nu}(\nu-1)L_{vw}[\hat{y}](t)\\ +2h^{\gamma+\nu}(\gamma-1)L_{vw}[\hat{y}](\sigma(t))+h^{2\nu}L_{ww}[\hat{y}](t)(1-\nu)^2+h^{2\nu}L_{ww}[\hat{y}](\sigma(t))\\ +\int_{a}^{t}h^3L_{ww}[\hat{y}](s)\left(\frac{\nu(1-\nu)}{\Gamma(\nu+1)}(t+\nu h - \sigma(s))_h^{(\nu-2)}\right)^2\Delta s\Biggr\}\geq 0. \end{multline} Because $h>0$, \eqref{des2} is equivalent to \eqref{eq:LC}. The theorem is proved. \end{proof} The next result is a simple corollary of Theorem~\ref{thm1}. \begin{corollary}[The $h$-Legendre necessary condition -- \textrm{cf.} Result~1.3 of \cite{CD:Bohner:2004}] \label{CorDis:Bohner} Let $\mathbb{T}$ be the time scale $h \mathbb{Z}$, $h > 0$, with the forward jump operator $\sigma$ and the delta derivative $\Delta$. Assume $a, b \in \mathbb{T}$, $a < b$. If $\hat{y}$ is a solution to the problem \begin{equation*} \mathcal{L}(y(\cdot))=\int_{a}^{b}L(t,y^{\sigma}(t),y^\Delta(t))\Delta t \longrightarrow \min, \ y(a)=A, \ y(b)=B \, , \end{equation*} then the inequality \begin{equation} \label{LNCBohner} h^2L_{uu}[\hat{y}](t)+2hL_{uv}[\hat{y}](t)+L_{vv}[\hat{y}](t)+L_{vv}[\hat{y}](\sigma(t)) \geq 0 \end{equation} holds for all $t\in\mathbb{T}^{\kappa^2}$, where $[\hat{y}](t)=(t,\hat{y}^{\sigma}(t),\hat{y}^\Delta(t))$. \end{corollary} \begin{proof} Choose $\alpha=1$ and a Lagrangian $L$ that does not depend on $w$. Then, $\gamma=0$ and the result follows immediately from Theorem~\ref{thm1}.
\end{proof} \begin{remark} When $h$ goes to zero we have $\sigma(t) = t$ and inequality \eqref{LNCBohner} coincides with Legendre's classical necessary optimality condition $L_{vv}[\hat{y}](t) \ge 0$ (\textrm{cf.}, \textrm{e.g.}, \cite{vanBrunt}). \end{remark} \section{Examples} \label{sec2} In this section we present some illustrative examples. \begin{example} \label{ex:2} Let us consider the following problem: \begin{equation} \label{eq:ex2} \mathcal{L}(y)=\frac{1}{2} \int_{0}^{1} \left({_0}\Delta_h^{\frac{3}{4}} y(t)\right)^2\Delta t \longrightarrow \min \, , \quad y(0)=0 \, , \quad y(1)=1 \, . \end{equation} We consider (\ref{eq:ex2}) with different values of $h$. Numerical results show that when $h$ tends to zero the $h$-fractional Euler-Lagrange extremal tends to the fractional continuous extremal: when $h \rightarrow 0$, (\ref{eq:ex2}) tends to the fractional continuous variational problem in the Riemann-Liouville sense studied in \cite[Example~1]{agr0}, with solution given by \begin{equation} \label{solEx2} y(t)=\frac{1}{2}\int_0^t\frac{dx}{\left[(1-x)(t-x)\right]^{\frac{1}{4}}} \, . \end{equation} This is illustrated in Figure~\ref{Fig:2}. \begin{figure}[ht] \begin{center} \includegraphics[scale=0.45]{Fig2.eps} \caption{Extremal $\tilde{y}(t)$ for the problem of Example~\ref{ex:2} with different values of $h$: $h=0.50$ ($\bullet$); $h=0.125$ ($+$); $h=0.0625$ ($\ast$); $h=1/30$ ($\times$). The continuous line represents function (\ref{solEx2}).}\label{Fig:2} \end{center} \end{figure} In this example, for each value of $h$, there is a unique $h$-fractional Euler-Lagrange extremal, the solution of \eqref{EL}, which always verifies the $h$-fractional Legendre necessary condition \eqref{eq:LC}.
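The reference curve \eqref{solEx2} can be evaluated numerically. Below is a minimal sketch of ours (not the paper's \textsf{Maple}/\textsf{Maxima} implementation): the substitution $x=t-u^4$ removes the integrable singularity of the integrand at $x=t$, giving $y(t)=2\int_0^{t^{1/4}}u^2\left(1-t+u^4\right)^{-1/4}du$, which composite Simpson's rule handles comfortably.

```python
def y_exact(t, n=2000):
    """Continuous extremal (solEx2) via the substitution x = t - u**4:
    y(t) = 2 * int_0^{t**0.25} u**2 * (1 - t + u**4)**(-1/4) du."""
    if t == 0.0:
        return 0.0

    def g(u):
        base = 1.0 - t + u**4
        # at t = 1 and u = 0 the integrand has limit 0
        return 0.0 if base == 0.0 else u * u / base**0.25

    a, b = 0.0, t**0.25
    h = (b - a) / n                              # n must be even
    s = g(a) + g(b) + sum((4 if k % 2 else 2) * g(a + k * h) for k in range(1, n))
    return 2.0 * (h / 3.0) * s

# sanity check against the boundary data of the problem: at t = 1 the
# integrand of (solEx2) is (1-x)**(-1/2), whose integral over [0,1] is 2,
# so y(1) = 1, matching the boundary condition y(1) = 1.
```

As $h\to 0$ the discrete extremals of Figure~\ref{Fig:2} approach these values.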
\end{example} \begin{example} \label{ex:1} Let us consider the following problem: \begin{equation} \label{eq:ex1} \mathcal{L}(y)=\int_{0}^{1} \left[\frac{1}{2}\left({_0}\Delta_h^\alpha y(t)\right)^2-y^{\sigma}(t)\right]\Delta t \longrightarrow \min \, , \quad y(0) = 0 \, , \quad y(1) = 0 \, . \end{equation} We begin by considering problem (\ref{eq:ex1}) with a fixed value for $\alpha$ and different values of $h$. The extremals $\tilde{y}$ are obtained using our Euler-Lagrange equation (\ref{EL}). As in Example~\ref{ex:2}, the numerical results show that when $h$ tends to zero the extremal of the problem tends to the extremal of the corresponding continuous fractional problem of the calculus of variations in the Riemann-Liouville sense. More precisely, when $h$ approaches zero problem (\ref{eq:ex1}) tends to the fractional continuous problem studied in \cite[Example~2]{agr2}. For $\alpha=1$ and $h \rightarrow 0$ the extremal of (\ref{eq:ex1}) is given by $y(t)=\frac{1}{2} t (1-t)$, which coincides with the extremal of the classical problem of the calculus of variations \begin{equation*} \mathcal{L}(y)=\int_{0}^{1} \left(\frac{1}{2} y'(t)^2-y(t)\right) dt \longrightarrow \min \, , \quad y(0) = 0 \, , \quad y(1) = 0 \, . \end{equation*} This is illustrated in Figure~\ref{Fig:0} for $h = \frac{1}{2^i}$, $i = 1, 2, 3, 4$. \begin{figure}[ht] \begin{minipage}[b]{0.45\linewidth} \begin{center} \includegraphics[scale=0.45]{Fig0.eps} \caption{Extremal $\tilde{y}(t)$ for problem \eqref{eq:ex1} with $\alpha=1$ and different values of $h$: $h=0.5$ ($\bullet$); $h=0.25$ ($\times$); $h=0.125$ ($+$); $h=0.0625$ ($\ast$).}\label{Fig:0} \end{center} \end{minipage} \hspace{0.05cm} \begin{minipage}[b]{0.45\linewidth} \begin{center} \includegraphics[scale=0.45]{Fig1.eps} \caption{Extremal $\tilde{y}(t)$ for \eqref{eq:ex1} with $h=0.05$ and different values of $\alpha$: $\alpha=0.70$ ($\bullet$); $\alpha=0.75$ ($\times$); $\alpha=0.95$ ($+$); $\alpha=0.99$ ($\ast$).
The continuous line is $y(t)=\frac{1}{2} t (1-t)$.}\label{Fig:1} \end{center} \end{minipage} \end{figure} In this example, for each value of $\alpha$ and $h$, we only have one extremal (we only have one solution to (\ref{EL}) for each $\alpha$ and $h$). Our Legendre condition \eqref{eq:LC} is always verified along the extremals. Figure~\ref{Fig:1} shows the extremals of problem \eqref{eq:ex1} for a fixed value of $h$ ($h=1/20$) and different values of $\alpha$. The numerical results show that when $\alpha$ tends to one the extremal tends to the solution of the classical (integer order) discrete-time problem. \end{example} Our last example shows that the $h$-fractional Legendre necessary optimality condition can be a very useful tool. In Example~\ref{ex:3} we consider a problem for which the $h$-fractional Euler-Lagrange equation gives several candidates but just a few of them verify the Legendre condition \eqref{eq:LC}. \begin{example} \label{ex:3} Let us consider the following problem: \begin{equation} \label{eq:ex3} \mathcal{L}(y)=\int_{a}^{b} \left[\left({_a}\Delta_h^\alpha y(t)\right)^3+\theta\left({_h}\Delta_b^\beta y(t)\right)^2\right]\Delta t \longrightarrow \min \, , \quad y(a)=0 \, , \quad y(b)=1 \, . \end{equation} For $\alpha=0.8$, $\beta=0.5$, $h=0.25$, $a=0$, $b=1$, and $\theta=1$, problem (\ref{eq:ex3}) has eight different Euler-Lagrange extremals. As we can see in Table~\ref{candidates:ex3}, only two of the candidates verify the Legendre condition. To determine the best candidate we compare the values of the functional $\mathcal{L}$ along the two good candidates. The extremal we are looking for is given by candidate number five in Table~\ref{candidates:ex3}.
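Setting up the Euler--Lagrange system for \eqref{eq:ex3} requires evaluating $h$-factorial terms such as those in \eqref{rui10}. The sketch below assumes the standard definition $(x)_h^{(\nu)}=h^{\nu}\,\Gamma\bigl(\frac{x}{h}+1\bigr)/\Gamma\bigl(\frac{x}{h}+1-\nu\bigr)$ (the paper's own definition lies outside this excerpt) and verifies the simplification \eqref{rui10} numerically at a few sample points with $h=0.25$ and $\gamma=0.8$:

```python
import math

def hfact(x, nu, h):
    # h-factorial (x)_h^(nu) = h**nu * Gamma(x/h + 1) / Gamma(x/h + 1 - nu)
    # (assumed standard definition; see the caveat in the text above)
    return h**nu * math.gamma(x / h + 1.0) / math.gamma(x / h + 1.0 - nu)

h, g, tau = 0.25, 0.8, 0.25
sigma = lambda t: t + h                  # forward jump operator on (h Z)_a
c = g / math.gamma(g + 1.0)

for t in [1.0, 1.25, 1.5]:               # sample points with t >= sigma^2(tau)
    lhs = c * h * (hfact(t + g * h - sigma(tau), g - 1.0, h)
                   - hfact(t + g * h - sigma(sigma(tau)), g - 1.0, h))
    rhs = h**2 * (g * (g - 1.0) / math.gamma(g + 1.0)) \
          * hfact(t + g * h - sigma(sigma(tau)), g - 2.0, h)
    assert abs(lhs - rhs) < 1e-12        # the simplification (rui10) holds
```

The same helper evaluates the kernels $(t+\nu h-\sigma(s))_h^{(\nu-1)}$ appearing in the natural boundary conditions \eqref{rui1} and \eqref{rui2}.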
\begin{table} \footnotesize \centering \begin{tabular}{|c|c|c|c|c|c|}\hline \# & $\tilde{y}\left(\frac{1}{4}\right)$ & $\tilde{y}\left(\frac{1}{2}\right)$ & $\tilde{y}\left(\frac{3}{4}\right)$ & $\mathcal{L}(\tilde{y})$ & Legendre condition \eqref{eq:LC}\\ \hline 1 & -0.5511786 & 0.0515282 & 0.5133134 & 9.3035911 & Not verified \\ \hline 2 & 0.2669091 & 0.4878808 & 0.7151924 & 2.0084203 & Verified \\ \hline 3 & -2.6745703 & 0.5599360 & -2.6730125 & 698.4443232 & Not verified \\ \hline 4 & 0.5789976 & 1.0701515 & 0.1840377 & 12.5174960 & Not verified \\ \hline 5 & 1.0306820 & 1.8920322 & 2.7429222 & -32.7189756 & Verified \\ \hline 6 & 0.5087946 & -0.1861431 & 0.4489196 & 10.6730959 & Not verified \\ \hline 7 & 4.0583690 & -1.0299054 & -5.0030989 & 2451.7637948 & Not verified \\ \hline 8 & -1.7436106 & -3.1898449 & -0.8850511 & 238.6120299 & Not verified \\ \hline \end{tabular} \smallskip \caption{There exist 8 Euler-Lagrange extremals for problem \eqref{eq:ex3} with $\alpha=0.8$, $\beta=0.5$, $h=0.25$, $a=0$, $b=1$, and $\theta=1$, but only 2 of them satisfy the fractional Legendre condition \eqref{eq:LC}.} \label{candidates:ex3} \end{table} \begin{table} \footnotesize \centering \begin{tabular}{|c|c|c|c|c|c|c|} \hline \# & $\tilde{y}(0.1)$ & $\tilde{y}(0.2)$ & $\tilde{y}(0.3)$ & $\tilde{y}(0.4)$ & $\mathcal{L}(\tilde{y})$ & \eqref{eq:LC}\\ \hline 1 & -0.305570704 & -0.428093486 & 0.223708338 & 0.480549114 & 12.25396166 & No \\\hline 2 & -0.427934654 & -0.599520948 & 0.313290997 & -0.661831134 & 156.2317667 & No \\\hline 3 & 0.284152257 & -0.227595659 & 0.318847274 & 0.531827387 & 8.669645848 & No \\\hline 4 & -0.277642565 & 0.222381632 & 0.386666793 & 0.555841555 & 6.993518478 & No \\\hline 5 & 0.387074742 & -0.310032839 & 0.434336603 & -0.482903047 & 110.7912605 & No \\\hline 6 & 0.259846344 & 0.364035314 & 0.463222456 & 0.597907505 & 5.104389191 & Yes \\\hline 7 & -0.375094681 & 0.300437245 & 0.522386246 & -0.419053781 & 93.95316858 & No \\\hline 8 & 0.343327771 
& 0.480989769 & 0.61204299 & -0.280908953 & 69.23497954 & No \\\hline 9 & 0.297792192 & 0.417196073 & -0.218013689 & 0.460556635 & 14.12227593 & No \\\hline 10 & 0.41283304 & 0.578364133 & -0.302235104 & -0.649232892 & 157.8272685 & No \\\hline 11 & -0.321401682 & 0.257431098 & -0.360644857 & 0.400971272 & 19.87468886 & No \\\hline 12 & 0.330157414 & -0.264444122 & -0.459803086 & 0.368850105 & 24.84475504 & No \\\hline 13 & -0.459640837 & 0.368155651 & -0.515763025 & -0.860276767 & 224.9964788 & No \\\hline 14 & -0.359429958 & -0.50354835 & -0.640748011 & 0.294083676 & 34.43515839 & No \\\hline 15 & 0.477760586 & -0.382668914 & -0.66536683 & -0.956478654 & 263.3075289 & No \\\hline 16 & -0.541587541 & -0.758744525 & -0.965476394 & -1.246195157 & 392.9592508 & No \\\hline \end{tabular} \smallskip \caption{There exist 16 Euler-Lagrange extremals for problem \eqref{eq:ex3} with $\alpha=0.3$, $h=0.1$, $a=0$, $b=0.5$, and $\theta=0$, but only 1 (candidate \#6) satisfies the fractional Legendre condition \eqref{eq:LC}.}\label{16dados} \end{table} For problem (\ref{eq:ex3}) with $\alpha=0.3$, $h=0.1$, $a=0$, $b=0.5$, and $\theta=0$, we obtain the results of Table~\ref{16dados}: there exist sixteen Euler-Lagrange extremals but only one satisfies the fractional Legendre condition. The extremal we are looking for is given by candidate number six in Table~\ref{16dados}. \end{example} The numerical results show that the solutions to our discrete-time fractional variational problems converge to the classical discrete-time solutions when the fractional order of the discrete derivatives tends to integer values, and to the fractional Riemann-Liouville continuous-time solutions when $h$ tends to zero. \section{Conclusion} \label{sec:conc} The discrete fractional calculus is a recent subject under strong current development due to its importance as a modeling tool of real phenomena.
In this work we introduce a new fractional difference variational calculus in the time scale $(h\mathbb{Z})_a$, $h > 0$ and $a$ a real number, for Lagrangians depending on left and right discrete-time fractional derivatives. Our objective was to introduce the concept of left and right fractional sum/difference (\textrm{cf.} Definition~\ref{def0}) and to develop the theory of fractional difference calculus. An Euler--Lagrange type equation \eqref{EL}, fractional natural boundary conditions \eqref{rui1} and \eqref{rui2}, and a second order Legendre type necessary optimality condition \eqref{eq:LC}, were obtained. The results are based on a new discrete fractional summation by parts formula \eqref{delf:sumPart} for $(h\mathbb{Z})_a$. The obtained first and second order necessary optimality conditions were implemented computationally in the computer algebra systems \textsf{Maple} and \textsf{Maxima}. Our numerical results show that: \begin{enumerate} \item the solutions of our fractional problems converge to the classical discrete-time solutions in $(h\mathbb{Z})_a$ when the fractional order of the discrete derivatives tends to integer values; \item the solutions of the considered fractional problems converge to the fractional Riemann--Liouville continuous solutions when $h \rightarrow 0$; \item there are cases for which the fractional Euler--Lagrange equation gives only one candidate that does not verify the obtained Legendre condition (so the problem at hand does not have a minimum); \item there are cases for which the Euler--Lagrange equation gives only one candidate that verifies the Legendre condition (so the extremal is a candidate for a minimizer, not for a maximizer); \item there are cases for which the Euler--Lagrange equation gives several candidates and just a few of them verify the Legendre condition.
\end{enumerate} We can say that the obtained Legendre condition can be a very practical tool to conclude when a candidate identified via the Euler--Lagrange equation is really a solution of the fractional variational problem. It is worth mentioning that a fractional Legendre condition for the continuous fractional variational calculus is still an open question. Undoubtedly, much remains to be done in the development of the theory of discrete fractional calculus of variations in $(h\mathbb{Z})_a$ initiated here. Moreover, we trust that the present work will initiate research not only in the area of the discrete-time fractional calculus of variations but also in solving fractional difference equations containing left and right fractional differences. One of the subjects that deserves special attention is the question of existence of solutions to the discrete fractional Euler--Lagrange equations. Note that the obtained fractional equation \eqref{EL} involves both the left and the right discrete fractional derivatives. Other interesting directions of research include the study of optimality conditions for more general variable endpoint variational problems \cite{Zeidan,AD:10b,MyID:169}; isoperimetric problems \cite{Almeida1,AlmeidaNabla}; higher-order problems of the calculus of variations \cite{B:J:05,RD,NataliaHigherOrderNabla}; fractional sufficient optimality conditions of Jacobi type and a version of Noether's theorem \cite{Bartos,Cresson:Frederico:Torres,gastao:delfim,MyID:149} for discrete-time fractional variational problems; direct methods of optimization for absolute extrema \cite{Bohner:F:T,mal:tor,T:L:08}; the generalization of our fractional first and second order optimality conditions to a fractional Lagrangian possessing delay terms \cite{B:M:J:08,M:J:B:09}; and the generalization of the results from $(h\mathbb{Z})_a$ to an arbitrary time scale $\mathbb{T}$.
\section*{Acknowledgments} This work is part of the first author's PhD project carried out at the University of Aveiro under the framework of the Doctoral Programme \emph{Mathematics and Applications} of Universities of Aveiro and Minho. The financial support of the Polytechnic Institute of Viseu and \emph{The Portuguese Foundation for Science and Technology} (FCT), through the ``Programa de apoio \`{a} forma\c{c}\~{a}o avan\c{c}ada de docentes do Ensino Superior Polit\'{e}cnico'', PhD fellowship SFRH/PROTEC/49730/2009, is here gratefully acknowledged. The second author was supported by FCT through the PhD fellowship SFRH/BD/39816/2007; the third author by FCT through the R\&D unit \emph{Centre for Research on Optimization and Control} (CEOC) and the project UTAustin/MAT/0057/2008.