TITLE: Prove that for all $x, y\in\Bbb{R}$: i) $E(x+y)=E(x)E(y)$ and ii) $E(-x)=\frac{1}{E(x)}$
QUESTION [2 upvotes]: The problem
Without using $L(x)=\ln(x)$ or $E(x)=e^x$, and given $$L(x)=\int_1^x\frac{dt}{t},\quad x>0$$
I have already proven that:
i) $L(xy)=L(x)+L(y),x,y>0$
ii) $L(1/x)=-L(x)$
that $L(2)<1$, $L(3)>1$, and that $L$ is increasing.
The number $e$ is defined by $L(e)=1$, and $2<e<3$.
Let $E$ be the inverse function to $L$, with $D_E=\Bbb{R}$ and $V_E=(0,\infty)$. Don't use $E(x)=e^x$.
1) Prove that $E$ is differentiable, and that $E'(x)=E(x)$
2) Prove that for all $x, y\in\Bbb{R} $
i) $E(x+y)=E(x)E(y)$
ii) $E(-x)=\frac{1}{E(x)}$
3) Let $n\in\Bbb{N}$. Prove that $E(x)^n=E(nx)$, and that $E(n)=e^n$
My Work
1) $L$ is continuous, strictly increasing, and differentiable for all $x \in (0, \infty)$. Hence $E$ is differentiable at every $y=L(x)$, where $y \in D_E=\Bbb{R}$.
$$L'(E(x))\cdot E'(x)=1$$
$$\left(\int_1^{E(x)} \frac{dt}{t}\right)' \cdot E'(x)=1$$
$$\frac{1}{E(x)}\cdot E'(x)=1$$
$$E'(x)=E(x)$$
2) I see that what I need to prove now is, in a sense, the inverse of what I have already proven:
i) $$L(xy)=\int_1^{xy}\frac{dt}{t}=\int_1^{x}\frac{dt}{t}+\int_x^{xy}\frac{dt}{t}=\int_1^{x}\frac{dt}{t}+\int_{\frac{1}{x}\cdot x}^{\frac{1}{x}\cdot xy}\frac{x\,du}{x\,u}=\int_1^{x}\frac{dt}{t}+\int_1^{y}\frac{du}{u}=L(x)+L(y)$$
Using $u=\frac{t}{x}$
ii) $$L\left(\frac{1}{x}\right)=\int_1^{{1}/{x}}\frac{dt}{t}=\int_{1}^{x}\frac{1}{1/u}\left(\frac{-1}{u^2}\right)du=\int_1^x\frac{-u}{u^2}\,du=-\int_1^x\frac{du}{u}=-L(x)$$
using $u=1/t$
However, I still don't know how to solve the problem; any and all help is welcome.
REPLY [2 votes]: For $(i)$: we know that for every $x>0$, $y>0$, $$L(xy) = L(x) +L(y)\quad (*)$$ For any $X,Y\in\mathbb R$, there are unique $x,y>0$ such that $X=L(x)$, $Y=L(y)$. Then applying $E$ to both sides of $(*)$ for these $x,y$,
$$ xy = E(X+Y).$$
But $x = L^{-1}(X) = E(X)$ and $y = E(Y)$. Thus
$$ E(X)E(Y) = E(X+Y).$$
There's a similar game to play with $(ii)$; see if you can do it.
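And if you get stuck, here is one way the game for $(ii)$ can go, using the same method together with the identity $L(1/x)=-L(x)$ you already proved:

```latex
% Given X in R, pick the unique x > 0 with X = L(x). By L(1/x) = -L(x),
\[
  -X = -L(x) = L\!\left(\tfrac{1}{x}\right)
  \quad\Longrightarrow\quad
  E(-X) = \frac{1}{x} = \frac{1}{E(X)}.
\]
```

Note that this uses only the bijectivity of $L\dvtx(0,\infty)\to\mathbb R$ and the two identities for $L$, never the formula $E(x)=e^x$.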
Interested in becoming an OECD Champion Mayor, or know a local leader who has made Inclusive Growth a central part of his or her policy agenda? The OECD Champion Mayors for Inclusive Growth initiative aims to elevate the voice of mayors as global leaders in the fight against inequality.
The OECD is interested in mayors who have made strong efforts to tackle inequalities and advocate for an inclusive growth agenda. By joining this global coalition, mayors have the opportunity to exchange directly with local leaders worldwide to share strategies for more inclusive cities, access an interactive web platform of good policies and practices, and ensure that local perspectives help shape national strategies and international agendas. If you feel that you or your local leader fits this profile, you can contact us at ChampionMayors@oecd.org for more information.
The Champion Mayors initiative is pleased to count on a range of supporting institutions to help shape this work, including the Ford Foundation, Brookings Institution, C40 Cities Climate Leadership Group, Cities Alliance, ICLEI Local Governments for Sustainability, Lincoln Institute of Land Policy, National League of Cities, United Cities and Local Governments (UCLG) and United Way Worldwide.
TITLE: Projective Linear Group and Projective Space
QUESTION [2 upvotes]: My aim: why is $GL(V)/Z(GL(V))$ called the projective general linear group?
The reason given in books is that this group acts faithfully on the projective space.
But many groups can act faithfully on projective space; can we also call them projective general linear groups?
I am confused and do not understand the reasoning. Several questions arise:
Let $V$ be an $n$-dimensional vector space over $F$ and let $V^*$ denote the collection of one-dimensional subspaces of $V$; it is called the projective space.
For $V^*$, we can first consider its symmetric group $Symm(V^*)$, the set of all bijections from $V^*$ to itself. If we put some structure on $V^*$ and look at the bijections preserving that structure, then we obtain a subgroup of $Symm(V^*)$.
Question 1. Which structural properties of $V^*$ does the group $GL(V)/Z(GL(V))$ preserve?
Question 2. Can we say that $GL(V)/Z(GL(V))$ is precisely the set of all bijections from $V^*$ to itself which preserve a certain structure on $V^*$?
Question 3. Is there some notion like a projective group associated to $V^*$, as opposed to the projective linear group? If yes, how does that group differ from $GL(V)/Z(GL(V))$?
Question 4. In passing from $V$ to $V^*$, we have lost linearity: $V^*$ is not a vector (linear) space. Then what does the term linear refer to in Projective Linear Group?
[Please point out, if questions are not clear.]
REPLY [1 votes]: I will refer to the projective space as $\mathrm{PG}(V)$ and not $V^{*}$, since $V^{*}$ usually has a different meaning.
The projective space $\mathrm{PG}(V)$ has an underlying vector space structure, so $\mathrm{GL}(V)$ acts naturally on it; however this action is not faithful, since scalar multiples of the identity matrix all induce the identity map on $\mathrm{PG}(V)$. So by modding out by the center we make a group that acts faithfully. It is called the projective general linear group because it (faithfully) represents the action of the general linear group on the projective space.
The group preserving the structure of $\mathrm{PG}(V)$ is called the collineation group. For a projective space of the form $\mathrm{PG}(V)$ (where $V$ is a vector space over a field $\mathbb{F}$), the full collineation group is $\mathrm{PGL}(V) \rtimes \mathrm{Gal}(\mathbb{F})$. Notice that $\mathrm{Gal}(\mathbb{F})$ cannot be represented as an $\mathbb{F}$-linear map on $V$. This is sometimes called the projective semilinear group.
All elements of the collineation group preserve all subspaces of $V$. There are some differences between linear collineations and semilinear collineations. The linear maps are what are called homographies. These are typically defined in terms of maps known as central collineations. In particular, every homography is a product of a finite number of central collineations.
Central collineation: A map $\mu : \mathrm{PG}(V) \to \mathrm{PG}(V)$ is called a central collineation if there is some hyperplane $H$ fixed by $\mu$ (that is, $\mu$ restricted to the hyperplane acts as the identity map) called the axis, and a point $O$ (a point in a projective geometry is a one-dimensional subspace of the vector space) which is fixed linewise by $\mu$ (that is, each line through $O$ is stabilized by $\mu$).
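To make the last definition concrete, here is a small example (the coordinates are my own choice, purely for illustration):

```latex
% In PG(V) with V = F^3, consider the transvection
\[
  A \;=\; \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.
\]
% A fixes the hyperplane H = \{x_2 = 0\} pointwise, so H is the axis.
% Since (A - I)v lies in the span of e_1 for every v, every line through
% the point O = \langle e_1 \rangle is stabilized by A, so O is the center.
```

Here $O \in H$, so this central collineation is an elation; products of finitely many such maps give the homographies mentioned above.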
Last week we painted flowers using everyday utensils. I showed our daffodil paintings last week so to follow are; orchids, roses and tulips.
First we painted purple flowers, with the orchid as the main flower, using drinking straws.
One end of the large straw was slit and folded back so when it was dipped in paint then stamped on paper it left a 4, 5, or 6 point mark depending how many slits I cut.
To add stems and leaves we used an eye dropper to drip green paint onto their pictures, then used a smaller straw to blow the paint around.
Next we explored roses and used scouring pads to paint them with pink paint.
And egg cartons cut into single cups dipped into white paint.
We used the eye droppers to draw the stems.
I found the little brown bottles and eye droppers at a recycling store in Vancouver. They are the perfect size for the little one’s hands.
Lastly we painted tulips. I found this idea at Blog Me Mom.
Forks are dipped into paint then pressed on to paper to make the flower.
I made stem shapes from foam meat trays and glued a clothespin to the back for easy dipping in the paint.
I guess we could have called all these paintings ‘kitchen’ art since all the implements could be found in that room of the house.
How does your art garden grow?
Hi guys, 1st timer so be gentle with me please :sillysmi: I've had my Nikon D5000 for around 4 months now (bought with the 18-55mm kit lens) and have to say I'm really pleased with it. We (my family and I) are off on holiday to Malaysia in 5 weeks and I'm obviously wanting the best photos possible on our trip. What I'm looking for is some advice on whether I should get another lens to go alongside the kit lens I already have. I would mostly be taking family portraits and landscapes (both day and night). The more I have been reading, the more complicated I have found it. Will I be better getting a prime lens like the 35mm 1.8, or something like a 55-200mm VR lens? Or would it be better just to keep and use my kit lens only? I'm not normally this picky but I really want some great photos of our trip to Malaysia. Many thanks in advance.
Q: why are there so many “B vitamins” (B6, B12, etc)?
A: Well Chris, this one turned out to be easier than I expected. It seems that when folks were naming vitamins, it was originally understood that Vitamin B was a single chemical compound. A gentleman by the name of Robert R. Williams (an arm-chair vitamin researcher, it turns out) first isolated the chemical compound and structure of what we now know as thiamine. This, as I understand it, was the original “Vitamin B” and is still known today as Vitamin B1.
Later, of course, scientists determined that what they called “Vitamin B” was actually a complex of several compounds, not all of which coexist in a food at the same time. Each of these sub-compounds of the complex is responsible for aiding different metabolic functions in our bodies, and they were likely named in order of discovery. Now we have such familiar (from cereal boxes, at least) compound names as Riboflavin (B2) and Folic Acid, or Folate (B9).
So there it is…the answer to Ask Dan’s glorious (maybe?) return.
Go ahead and post new questions in the comments section, hit me on Twitter, or email your quandaries to danielcwarshaw [at] gmail [dot] com.
Will soccer ever unseat one of the big 3 (NFL, MLB, NBA) in popularity among the American public? (Personally I think it has already overtaken the NHL.)
\begin{document}
\begin{frontmatter}
\title{The range of tree-indexed random walk in~low~dimensions}
\runtitle{The range of tree-indexed random walk}
\begin{aug}
\author[A]{\fnms{Jean-Fran\c cois}~\snm{Le Gall}\corref{}\ead[label=e1]{jean-francois.legall@math.u-psud.fr}}
\and
\author[A]{\fnms{Shen}~\snm{Lin}\ead[label=e2]{shen.lin.math@gmail.com}}
\runauthor{J.-F. Le Gall and S. Lin}
\affiliation{Universit\'e Paris-Sud}
\address[A]{D\'epartement de math\'ematiques\\
Universit\'e Paris-Sud\\
91405 Orsay\\
France\\
\printead{e1}\\
\phantom{E-mail:\ }\printead*{e2}}
\end{aug}
\received{\smonth{1} \syear{2014}}
\begin{abstract}
We study the range $R_n$ of a random walk on the $d$-dimensional
lattice $\Z^d$ indexed by a random tree with $n$ vertices. Under the
assumption that the random walk is centered and has finite fourth
moments, we prove in dimension $d\leq3$ that $n^{-d/4}R_n$ converges
in distribution to the Lebesgue measure of the support of the
integrated super-Brownian excursion (ISE). An auxiliary result shows
that the suitably rescaled local times of the tree-indexed random walk
converge in distribution to the density process of ISE. We obtain
similar results for the range of critical branching random walk in $\Z
^d$, $d\leq3$. As an intermediate estimate, we get exact asymptotics
for the probability that a critical branching random walk starting with
a single particle at the origin hits a distant point. The results of
the present article complement those derived in higher dimensions in
our earlier work.
\end{abstract}
\begin{keyword}[class=AMS]
\kwd[Primary ]{60G50}
\kwd{60J80}
\kwd[; secondary ]{60G57}
\end{keyword}
\begin{keyword}
\kwd{Tree-indexed random walk}
\kwd{range}
\kwd{ISE}
\kwd{branching random walk}
\kwd{super-Brownian motion}
\kwd{hitting probability}
\end{keyword}
\end{frontmatter}
\section{Introduction}\label{sec1}
In the present paper, we continue our study of asymptotics for the
number of distinct sites of the lattice visited by a tree-indexed
random walk.
We consider (discrete) plane trees, which are
rooted ordered trees that can be viewed as describing the genealogy of
a population starting with one ancestor or root, which is denoted by
the symbol $\varnothing$. Given such a tree $\mathcal{T}$ and a
probability measure
$\theta$ on $\Z^d$, we can consider the random
walk with jump distribution $\theta$ indexed by the tree $\mathcal
{T}$. This
means that we assign a
(random) spatial
location $Z_{\mathcal{T}}(u)\in\Z^d$ to every vertex $u$ of
$\mathcal{T}$, in the
following way. First, the
spatial location $Z_{\mathcal{T}}(\varnothing)$ of the root is the
origin of $\Z^d$.
Then we assign
independently to every
edge $e$ of the tree $\mathcal{T}$ a random variable $Y_e$ distributed
according
to $\theta$, and we let
the spatial location $Z_{\mathcal{T}}(u)$ of the vertex $u$ be the sum
of the
quantities $Y_e$ over all edges $e$ belonging to the simple path
from $\varnothing$ to $u$ in the tree. The number of distinct spatial locations
is called the range of the tree-indexed random walk $Z_{\mathcal{T}}$.
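Purely as an illustration (the function names, the offspring law and the jump distribution below are our own choices, not part of the paper's argument), the construction of a tree-indexed random walk and its range can be sketched as follows:

```python
# Sketch: a random walk indexed by a Galton-Watson tree, and its range.
# Offspring law: Geometric(1/2) on {0,1,2,...} (critical, variance 2);
# jump law theta: uniform on the 2d unit vectors of Z^d (centered, symmetric).
import random

def gw_tree(max_nodes=10_000, rng=random):
    """Sample a Galton-Watson tree; parent[u] is the parent index of vertex u,
    with parent[0] == -1 for the root. Truncated at max_nodes (sketch only)."""
    parent = [-1]
    queue = [0]
    while queue and len(parent) < max_nodes:
        u = queue.pop()
        k = 0                       # Geometric(1/2) children count
        while rng.random() < 0.5:
            k += 1
        for _ in range(k):
            if len(parent) >= max_nodes:
                break
            parent.append(u)
            queue.append(len(parent) - 1)
    return parent

def walk_range(parent, d=3, rng=random):
    """Assign an i.i.d. uniform step in {+-e_1,...,+-e_d} to each edge and
    count the distinct spatial locations Z_T(u), i.e. the range."""
    pos = [(0,) * d]                # the root sits at the origin of Z^d
    visited = {pos[0]}
    for u in range(1, len(parent)):
        axis = rng.randrange(d)
        step = rng.choice((-1, 1))
        p = list(pos[parent[u]])
        p[axis] += step
        pos.append(tuple(p))
        visited.add(tuple(p))
    return len(visited)

rng = random.Random(42)
parent = gw_tree(rng=rng)
print(len(parent), walk_range(parent, d=3, rng=rng))
```

Each vertex's location is the sum of the edge variables along the path from the root, exactly as in the definition above; the range is the size of the visited set.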
In our previous work \cite{LGL}, we stated the following result. Let
$\theta$ be a probability distribution on $\Z^d$, which is symmetric
with finite support and is not supported on a
strict subgroup of $\Z^d$, and for every
integer $n\geq1$, let $\mathcal{T}^\circ_n$ be a random tree
uniformly distributed
over all plane trees with $n$ vertices. Conditionally given
$\mathcal{T}^\circ_n$, let $Z_{\mathcal{T}^\circ_{n}}$ be a random
walk with jump distribution
$\theta$ indexed by $\mathcal{T}^\circ_n$,
and let $R_n$ stand for the range of $Z_{\mathcal{T}^\circ_{n}}$. Then:
\begin{itemize}
\item
if $d\geq5$,
\[
\frac{1}{n} R_n \build{\la}_{n\to\infty}^{\mathrm{(P)}}
c_\theta,
\]
where $c_\theta>0$ is a constant depending on $\theta$, and $\build
{\la
}_{}^{\mathrm{(P)}}$ indicates convergence in probability;
\item
if $d=4$,
\[
\frac{\log n}{n} R_n \build{\la}_{n\to\infty}^{L^2} 8
\pi^2 \sigma^4,
\]
where $\sigma^2=(\operatorname{ det} M_\theta)^{1/4}$, with $M_\theta$ denoting
the covariance matrix of $\theta$;
\item
if $d\leq3$,
\begin{equation}
\label{intro-conv} n^{-d/4} R_n \build{\la}_{n\to\infty}^{\mathrm{(d)}}
c_\theta\lambda_d\bigl(\operatorname{ supp}(\mathcal{I})\bigr),
\end{equation}
where $c_\theta=2^{d/4}(\operatorname{ det} M_{\theta})^{1/2}$ is a constant
depending on $\theta$, and $ \lambda_d(\operatorname{ supp}(\mathcal{I}))$ stands
for the
Lebesgue measure of the support of the random measure on $\R^{d}$ known
as Integrated Super-Brownian Excursion or
ISE (see Section~\ref{sec2.3} below for a definition of ISE in terms
of the
Brownian snake, and note that our
normalization is slightly different from the one in \cite{Al}).
\end{itemize}
Only the cases $d\geq5$ and $d=4$ were proved in \cite{LGL}, in fact
in a greater generality than
stated above, especially when $d\geq5$. In the present work, we
concentrate on the case $d\leq3$ and we
prove a general version of the convergence \eqref{intro-conv}, where
instead of considering a uniformly
distributed plane tree with $n$ vertices we deal with a Galton--Watson
tree with offspring distribution $\mu$
conditioned to have $n$ vertices.
Let us specify the assumptions that will be in force throughout this
article. We always assume that $d\leq3$ and:
\begin{itemize}
\item
$\mu$ is a nondegenerate critical offspring
distribution on $\Z_+$, such that, for some $\lambda>0$,
\[
\sum_{k=0}^\infty e^{\lambda k} \mu(k) <
\infty,
\]
and we set $\rho:=(\operatorname{ var} \mu)^{1/2}>0$;
\item
$\theta$ is a probability measure on $\Z^d$, which is
not supported on a strict subgroup of $\Z^d$; $\theta$ is such that
\begin{equation}
\label{hypoJM} \lim_{r\to+\infty} r^4 \theta\bigl(\bigl\{x
\in\Z^d\dvtx |x|>r\bigr\}\bigr)=0,
\end{equation}
and $\theta$ has zero mean; we set $\sigma:=(\operatorname{ det} M_\theta
)^{1/2d}>0$, where
$M_\theta$ denotes the covariance matrix of $\theta$.
\end{itemize}
Note that \eqref{hypoJM} holds if $\theta$ has finite fourth moments.
For every $n\geq1$ such that this makes sense, let $\mathcal{T}_n$ be a
Galton--Watson tree with offspring
distribution $\mu$ conditioned to have $n$ vertices. Note that the case
when $\mathcal{T}_n$ is uniformly distributed over
plane trees with $n$ vertices is recovered when $\mu$ is the geometric
distribution with
parameter $1/2$ (see, e.g., Section~2.2 in \cite{LGM}). Let
$Z_{\mathcal{T}_n}$
denote the random walk with jump distribution
$\theta$ indexed by $\mathcal{T}_n$, and let $R_n$
be the range of $Z_{\mathcal{T}_n}$. Theorem~\ref{conv-range} below
shows that
the convergence (\ref{intro-conv}) holds, provided that $c_\theta$
is replaced by the constant $2^{d/2}\sigma^d\rho^{-d/2}$.
An interesting auxiliary result is an invariance principle for ``local
times'' of our tree-indexed
random walk. For every $a\in\Z^d$, let
\[
L_n(a)= \sum_{u\in\mathcal{T}_n} \mathbf{1}_{\{Z_{\mathcal
{T}_n}(u)=a\}}
\]
be the number of visits of $a$ by the tree-indexed random walk
$Z_{\mathcal{T}
_n}$. For $x=(x_1,\ldots,x_d)\in\R^d$,
set $\lfloor x\rfloor:=(\lfloor x_1\rfloor,\ldots,\lfloor x_d\rfloor)$.
Then Theorem~\ref{convLT}
shows that the process
\[
\bigl(n^{{d}/{4} -1} L_n\bigl(\bigl\lfloor n^{1/4}x \bigr
\rfloor\bigr) \bigr)_{x\in\R
^d\setminus\{0\}}
\]
converges as $n\to\infty$, in the sense of weak convergence of
finite-dimensional marginals,
to the density process of ISE (up to scaling constants and a linear
transformation of the variable $x$). Notice that the latter density
process exists because
$d\leq3$, by results due to Sugitani \cite{Sug}. In dimension $d=1$,
this invariance principle
has been obtained earlier in a stronger (functional) form by
Bousquet-M\'elou and Janson \cite{BMJ}, Theorem~3.6, in a particular
case, and then
by Devroye and Janson~\cite{DJ}, Theorem~1.1, in a more general setting.
Such a strengthening might also be possible when $d=2$ or $3$, but we
have chosen not
to investigate this question here as it is not relevant to our main
applications. In dimensions $2$ and $3$,
Lalley and Zheng \cite{LZ2}, Theorem~1, also give a closely related
result for local times of critical branching random walk
in the case of a Poisson offspring distribution and for a particular
choice of~$\theta$.
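For concreteness (an illustration in our own notation, not part of the argument), the local times $L_n(a)$ can be computed directly from a spatial tree in a few lines:

```python
# Sketch: compute the local times L_n(a), i.e. the number of visits of each
# site a in Z^d by the tree-indexed random walk.
from collections import Counter

def local_times(parent, steps):
    """parent[u] is the index of the parent of vertex u (parent[0] == -1);
    steps[u] is the edge displacement from parent[u] to u (steps[0] unused)."""
    d = len(steps[0])
    pos = [None] * len(parent)
    pos[0] = (0,) * d                  # the root sits at the origin
    counts = Counter([pos[0]])
    for u in range(1, len(parent)):
        pos[u] = tuple(p + s for p, s in zip(pos[parent[u]], steps[u]))
        counts[pos[u]] += 1
    return counts                      # counts[a] == L_n(a)
```

With this normalization, `sum(counts.values())` equals the number of vertices $n$ (one visit per vertex) and `len(counts)` equals the range $R_n$.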
Our tree-indexed random walk can be viewed as a branching random walk
starting with
a single initial particle and conditioned to have a fixed total
progeny. Therefore, it is not
surprising that our main results have analogs for branching random
walks, as
was already the case in dimension $d\geq4$ (see Propositions~20 and~21
in \cite{LGL}). For every integer
$p\geq1$, consider a (discrete time) branching random walk starting
initially with
$p$ particles located at the origin of $\Z^d$, such that the offspring number
of each particle is distributed according to $\mu$, and each newly born
particle jumps
from the location of its parent according to the jump
distribution $\theta$. Let $\mathcal{V}^{[p]}$
stand for the set of all sites of $\Z^d$ visited by this branching
random walk. Then Theorem~\ref{rangeBRW}
shows that, similarly as in \eqref{intro-conv}, the asymptotic
distribution of $p^{-d/2}\#\mathcal{V}^{[p]}$ is the Lebesgue
measure of the range of a super-Brownian motion starting from $\delta
_0$ (note again
that this Lebesgue measure is positive because $d\leq3$, see \cite
{DIP} or \cite{Sug}). In a related direction, we mention the article of
Lalley and Zheng~\cite{LZ},
which gives estimates for the number of occupied sites \textit{at a given
time} by a critical nearest neighbor
branching random walk in $\Z^d$.
Our proof of Theorem~\ref{rangeBRW} depends on an asymptotic estimate
for the hitting probability
of a distant point by branching random walk, which seems to be new and
of independent interest. To be specific, consider
the set $\mathcal{V}^{[1]}$ of all sites visited by the branching
random walk starting with a single
particle at the origin. Consider for simplicity the isotropic case
where $M_\theta=\sigma^2 \operatorname{ Id}$, where $\operatorname{ Id}$ is the identity
matrix. Then Theorem~\ref{estim-visit}
shows that
\[
\lim_{|a|\to\infty} |a|^2 P \bigl(a\in\mathcal{V}^{[1]}
\bigr) = \frac
{2(4-d)\sigma^2}{\rho^2}.
\]
See Section~\ref{sec5.1} for a discussion of similar estimates in
higher dimensions.
Not surprisingly, our proofs depend on the known relations between
tree-indexed random walk
(or branching random walk) and the Brownian snake (or super-Brownian
motion). In particular,
we make extensive use of a result of Janson and Marckert \cite{JM}
showing that the ``discrete
snake'' coding our tree-indexed random walk $Z_{\mathcal{T}_n}$
converges in
distribution in a strong (functional) sense
to the Brownian snake driven by a normalized Brownian excursion. It
follows from this convergence
that the set of all sites visited by the tree-indexed random walk
converges in distribution
(modulo a suitable rescaling) to the support of ISE, in the sense of
the Hausdorff distance between
compact sets. But, of course, this is not sufficient to derive
asymptotics for the \textit{number}
of visited sites.
Our assumptions on $\mu$ and $\theta$ are similar to those in \cite
{JM}. We have not striven for
the greatest generality, and it is plausible that these assumptions can
be relaxed.
See, in particular, \cite{JM} for a discussion of the necessity of the
existence of exponential
moments for the offspring distribution $\mu$. It might also be possible
to replace our condition \eqref{hypoJM} on $\theta$
by a second moment assumption, but this would require different methods
as the results of \cite{JM} show that the strong convergence of
discrete snakes to the Brownian snake
no longer holds without \eqref{hypoJM}.
The paper is organized as follows. Section~\ref{sec2} presents our
main notation
and gives some preliminary
results about the Brownian snake. Section~\ref{sec3} is devoted to our main
result about the
range of tree-indexed random walk in dimension \mbox{$d\leq3$}. Section~\ref{sec4}
discusses similar
results for branching random walk, and Section~\ref{sec5} presents a few
complements and open questions.
\section{Preliminaries on trees and the Brownian snake}\label{sec2}
\label{preli}
\subsection{Finite trees}\label{sec2.1}
\label{fitree}
We use the standard formalism for plane trees. We set
\[
\mathcal{U}:=\bigcup_{n=0}^\infty
\N^n,
\]
where $\N=\{1,2,\ldots\}$ and $\N^0=\{\varnothing\}$. If
$u=(u_1,\ldots,u_n)\in\mathcal{U}$,
we set $|u|=n$ [in particular $|\varnothing|=0$].
We write $\prec$ for the lexicographical order on $\mathcal{U}$, so
that $\varnothing\prec1
\prec(1,1) \prec2$, for instance.
A plane tree (or rooted ordered tree) $ \mathcal{T}$ is a finite
subset of
$\mathcal{U}$
such that:
\begin{longlist}[(iii)]
\item[(i)] $\varnothing\in\mathcal{T}$;
\item[(ii)] If $u=(u_1,\ldots,u_n)\in\mathcal{T}\setminus\{
\varnothing\}$ then
$\check u:= (u_1,\ldots,u_{n-1})\in\mathcal{T}$;
\item[(iii)] For every $u=(u_1,\ldots,u_n)\in\mathcal{T}$, there
exists an
integer $k_u(\mathcal{T})\geq0$
such that, for every $j\in\N$, $(u_1,\ldots,u_n,j)\in\mathcal{T}$
if and only if
$1\leq j\leq
k_u(\mathcal{T})$.
\end{longlist}
The notions of a descendant or of an ancestor of a vertex of $\mathcal
{T}$ are
defined in an obvious way. If $u,v\in\mathcal{T}$, we
will write $u\wedge v\in\mathcal{T}$ for the most recent common
ancestor of $u$
and $v$. We denote the set of all plane trees by $\T_f$.
Let $\mathcal{T}$ be a tree with $p=\#\mathcal{T}$ vertices and let
$\varnothing
=v_0\prec v_1\prec\cdots\prec v_{p-1}$ be
the vertices of $\mathcal{T}$ listed in lexicographical order. We
define the height function $(H_i)_{0\leq i\leq p}$
of $\mathcal{T}$
by setting $H_i=|v_i|$ for every $0\leq i\leq p-1$, and $H_p=0$ by convention.
Recall that we have fixed a probability measure $\mu$ on $\Z_+$
satisfying the assumptions given in Section~\ref{sec1}, and that $\rho
^2=\operatorname{
var} \mu$. The law of the Galton--Watson tree with offspring
distribution $\mu$ is a probability measure on the space $\T_f$, which
is denoted by $\Pi_\mu$ (see, e.g., \cite{LG1}, Section~1).
We will need some information about the law of the total progeny
$\#\mathcal{T}$ under~$\Pi_\mu$.
It is well known (see, e.g., \cite{LG1}, Corollary~1.6) that this law
is the same as the law of the first hitting time of $-1$ by
a random walk on $\Z$ with jump distribution $\nu(k)=\mu(k+1),
k=-1,0,1,\ldots$
started from $0$. Combining this with Kemperman's formula (see,
e.g., \cite{pitman}, page 122)
and using a standard local limit theorem, one gets
\begin{equation}
\label{Kemp1} \lim_{k\to\infty} k^{1/2}
\Pi_\mu(\#\mathcal{T}\geq k)= \frac
{2}{\rho\sqrt{2\pi}}.
\end{equation}
Suppose that $\mu$ is not supported on a strict subgroup of $\Z$, so
that the
random walk with jump distribution $\nu$ is aperiodic. The preceding
asymptotics can then
be strengthened in the form
\begin{equation}
\label{Kemp2} \lim_{k\to\infty} k^{3/2}
\Pi_\mu(\#\mathcal{T}= k)= \frac
{1}{\rho\sqrt{2\pi}}.
\end{equation}
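Purely as a numerical illustration (not part of the argument; the offspring law and all parameters below are our own choices), the asymptotics \eqref{Kemp1} can be checked by simulating the random-walk representation of the total progeny:

```python
# Sanity check of Pi_mu(#T >= k) ~ (2 / (rho * sqrt(2*pi))) * k^{-1/2}.
# We take mu = Geometric(1/2) on {0,1,2,...}, so rho^2 = var(mu) = 2, and use
# the representation of #T as the first hitting time of -1 by a random walk
# with jump distribution nu(j) = mu(j+1), j = -1, 0, 1, ...
import math
import random

def progeny_at_least(k, rng):
    """True iff the total progeny #T of the Galton-Watson tree is >= k,
    i.e. the walk does not hit -1 within its first k-1 steps."""
    s = 0
    for _ in range(k - 1):
        g = 0                       # Geometric(1/2) children count
        while rng.random() < 0.5:
            g += 1
        s += g - 1                  # step distributed as nu
        if s == -1:
            return False
    return True

rng = random.Random(0)
k, trials = 100, 20_000
est = sum(progeny_at_least(k, rng) for _ in range(trials)) / trials
pred = 2.0 / (math.sqrt(2) * math.sqrt(2 * math.pi) * math.sqrt(k))
print(est, pred)  # the two values should agree to within Monte Carlo error
```

For this choice of $\mu$ (uniform plane trees), the predicted constant is $2/(\rho\sqrt{2\pi}) = 1/\sqrt{\pi}$.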
\subsection{Tree-indexed random walk}\label{sec2.2}
\label{TRW}
A ($d$-dimensional) spatial tree is a pair $(\mathcal{T},(z_u)_{u\in
\mathcal{T}})$
where $\mathcal{T}\in\T_f$ and $z_u\in\Z^d$ for every $u\in
\mathcal{T}$. We let $\T_f^*$
be the set of all spatial trees.
Recall that $\theta$ is a probability measure on $\Z^d$ satisfying the
assumptions listed in the \hyperref[sec1]{Introduction}.
We write
$\Pi^*_{\mu,\theta}$ for the probability distribution on $\T_f^*$ under
which $\mathcal{T}$ is distributed according to $\Pi_\mu$ and,
conditionally on $\mathcal{T}$, the ``spatial locations'' $(z_u)_{u\in
\mathcal{T}}$ are
distributed as random walk indexed by $\mathcal{T}$,
with jump distribution $\theta$, and started from $0$ at the root
$\varnothing$: This means that,
under the probability measure $\Pi^*_{\mu,\theta}$, we have
$z_\varnothing=0$ a.s. and,
conditionally on $\mathcal{T}$, the quantities $(z_u-z_{\check u},
u\in\mathcal{T}
\setminus\{\varnothing\})$ are independent
and distributed according to $\theta$.
\subsection{The Brownian snake}\label{sec2.3}
We refer to \cite{Zurich} for the basic facts about the Brownian snake
that we will use. The Brownian snake
$(W_s)_{s\geq0}$ is
a Markov process taking values in the space $\mathcal{W}$ of all
($d$-dimensional) stopped paths: Here,
a~stopped path $w$ is just a continuous mapping $w\dvtx [0,\zeta_{(w)}] \la
\R^d$, where the
number $\zeta_{(w)}\geq0$, which depends on $w$, is called the
lifetime of $w$. A stopped path
$w$ with zero lifetime will be identified with its starting point
$w(0)\in\R^d$. The endpoint $w(\zeta_{(w)})$
of a stopped path $w$ is denoted by $\wh w$.
It will be convenient to argue on the canonical
space $C(\R_+, \mathcal{W})$ of all continuous mappings from $\R_+$
into $\mathcal{W}$, and
to let $(W_s)_{s\geq0}$ be the canonical process on this space. We
write $\zeta_s:= \zeta_{(W_s)}$ for the
lifetime of $W_s$. If $x\in\R^d$, the law of the Brownian snake
starting from $x$ is the probability measure $\P_x$
on $C(\R_+, \mathcal{W})$ that is characterized as follows:
\begin{enumerate}[(ii)]
\item[(i)] The distribution of $(\zeta_s)_{s\geq0}$ under $\P_x$ is
the law of a reflected linear Brownian
motion on $\R_+$ started from $0$.
\item[(ii)] We have $W_0=x$, $\P_x$ a.s. Furthermore, under $\P_x$ and
conditionally on $(\zeta_s)_{s\geq0}$,
the process $(W_s)_{s\geq0}$ is (time-inhomogeneous) Markov with
transition kernels specified as follows. If
$0\leq s<s'$,
\begin{itemize}
\item
$W_{s'}(t)=W_s(t)$ for every $0\leq t\leq m_\zeta
(s,s'):= \min\{\zeta_r\dvtx s\leq r\leq s'\}$;
\item
$(W_{s'}(m_{\zeta}(s,s')+t)-W_{s'}(m_{\zeta
}(s,s')))_{0\leq t\leq\zeta_{s'}-m_\zeta(s,s')}$ is
a standard Brownian motion in $\R^d$ independent of $W_s$.
\end{itemize}
\end{enumerate}
We will refer to the process $(W_s)_{s\geq0}$ under $\P_0$ as the
standard Brownian snake.
We will also be interested in (infinite) excursion
measures of the Brownian snake, which we denote by $\N_x$, $x\in\R^d$.
For every
$x\in\R^d$, the distribution of the process $(W_s)_{s\geq0}$
under $\N_x$ is characterized by properties analogous to (i) and~(ii)
above, with the
only difference that in (i) the law of reflected linear Brownian
motion is replaced by the It\^o measure of positive excursions of linear
Brownian motion, normalized in such a way that $\N_x(\sup\{\zeta
_s\dvtx s\geq0\}>\ve)=(2\ve)^{-1}$, for every $\ve>0$.
We write $\gamma:= \sup\{s\geq0 \dvtx \zeta_s>0\}$, which
corresponds to the duration of the excursion
under $\N_x$. A special role will be played by the probability measures
$\N^{(r)}_x:= \N_x(\cdot\mid\gamma=r)$, which are defined
for every $x\in\R^d$ and every $r>0$.
Under $\N^{(r)}_x$, the ``lifetime process'' $(\zeta_s)_{0\leq s\leq
r}$ is a Brownian excursion with duration $r$. From the analogous
decomposition for the It\^o measure of Brownian excursions, we have
\begin{equation}
\label{decoIto} \N_0 = \int_0^\infty
\frac{\mathrm{d}r}{2\sqrt{2\pi r^3}} \N^{(r)}_0.
\end{equation}
The total occupation measure of the Brownian snake is the finite
measure $\mathcal{Z}$ on $\R^d$ defined under $\N_x$,
or under $\N^{(r)}_x$, by the formula
\[
\langle\mathcal{Z}, \varphi\rangle= \int_0^\gamma
\,\mathrm{d}s\, \varphi(\wh W_s),
\]
for any nonnegative measurable function $\varphi$ on $\R^d$.
Under
$\N^{(1)}_x$, $\mathcal{Z}$ is a random probability measure, which in
the case
$x=0$ is
called ISE for integrated super-Brownian excursion [the measure
$\mathcal{I}$ in \eqref{intro-conv} is
thus distributed as $\mathcal{Z}$ under $\N^{(1)}_0$]. Note that our
normalization of
ISE is slightly different from the one originally proposed by Aldous
\cite{Al}.
The following result will be derived from known properties of
super-Brownian motion via
the connection between the Brownian snake and superprocesses.
\begin{proposition}
\label{densityZ}
Both $\N_x$ a.e. and $\N^{(1)}_x$ a.s., the random measure
$\mathcal{Z}$ has a continuous density on $\R^d$, which will
be denoted by $(\ell^y,y\in\R^d)$.
\end{proposition}
\begin{remark*}
When $d=1$, this result, under the measure $\N^{(1)}_0$,
can be found in~\cite{BMJ}, Theorem~2.1.
\end{remark*}
\begin{pf*}{Proof of Proposition \ref{densityZ}} By translation invariance, it is enough to consider the case
$x=0$. We rely on the Brownian snake construction of
super-Brownian motion to deduce the statement of the proposition from Sugitani's
results \cite{Sug}. Let $(W^i)_{i\in I}$ be a Poisson point measure
on $C(\R_+,\mathcal{W})$ with intensity $\N_0$. With every $i\in I$,
we associate the occupation measure $\mathcal{Z}^i$ of $W^i$.
Then Theorem IV.4 in \cite{Zurich} shows that there exists
a super-Brownian motion $(X_t)_{t\geq0}$ with branching mechanism
$\psi(u)=2u^2$ and initial value $X_0=\delta_0$, such that
\[
\int_0^\infty\,\mathrm{d}t\, X_t = \sum
_{i\in I} \mathcal{Z}^i.
\]
As a consequence of \cite{Sug}, Theorems 2 and 3, the random measure
$\int_0^\infty\,\mathrm{d}t\, X_t $ has a.s. a continuous density on $\R
^d\setminus\{0\}$. On the other hand, let
$B(0,\ve)$ denote the closed ball of radius $\ve$ centered at $0$ in
$\R
^d$. Then,
for every $\ve>0$, the event
\[
\mathcal{A}_\ve:= \bigl\{\#\bigl\{i\in I\dvtx \mathcal{Z}^i
\bigl(B(0,\ve)^c\bigr) >0\bigr\} =1\bigr\}
\]
has positive probability (see, e.g., \cite{Zurich}, Proposition V.9).
On the event $\mathcal{A}_\ve$,
write $i_0$ for the unique index in $I$
such that $\mathcal{Z}^{i_0}(B(0,\ve)^c) >0$. Then, still on the
event $\mathcal
{A}_\ve$, the measures $\int_0^\infty\,\mathrm{d}t\, X_t $
and $\mathcal{Z}^{i_0}$ coincide on $B(0,\ve)^c$. The conditional distribution
of $W^{i_0}$ knowing $\mathcal{A}_\ve$ is $\N_0(\cdot\mid\mathcal
{Z}(B(0,\ve
)^c)>0)$, and we
conclude that $\mathcal{Z}$ has a continuous density on $B(0,\ve)^c$,
$\N_0(\cdot\mid \mathcal{Z}(B(0,\ve)^c)>0)$ a.s. As this holds for
any $\ve
>0$, we obtain
that, $\N_0$ a.e., the random measure
$\mathcal{Z}$ has a continuous density on $\R^d\setminus\{0\}$. Via
a scaling argument,
the same property holds $\N^{(1)}_0$ a.s. This argument does not
exclude the possibility that
$\mathcal{Z}$ might have a singularity at $0$, but we can use the rerooting
invariance property (see \cite{Al}, Section~3.2 or \cite{LGW},
Section~2.3) to complete the proof.
According to this property, if under the measure $\N^{(1)}_0$
we pick a random point distributed according to $\mathcal{Z}$ and then
shift $\mathcal{Z}
$ so that this random point becomes the
origin of $\R^d$, the resulting random measure has the same
distribution as $\mathcal{Z}$. Consequently, we obtain that $\N
^{(1)}_0$ a.s.,
$\mathcal{Z}(\mathrm{d}x)$ a.e., the measure $\mathcal{Z}$ has a
continuous density on $\R
^d\setminus\{x\}$. It easily follows that
$\mathcal{Z}$ has a continuous density on $\R^d$, $\N^{(1)}_0$ a.s.,
and by
scaling again the same property
holds under $\N_0$.
\end{pf*}
Let us introduce the random closed set
\[
\mathcal{R}:= \{\wh W_s \dvtx 0\leq s \leq\gamma \}.
\]
Note that, by construction, $\mathcal{Z}$ is supported on $\mathcal
{R}$, and it
follows that,
for every $y\in\R^d\setminus\{x\}$,
\begin{equation}
\label{inclusion-hitting} \bigl\{\ell^y >0\bigr\} \subset\{y\in\mathcal{R}\},\qquad
\N_x\mbox{ a.e. or }\N ^{(1)}_x\mbox{ a.s.}
\end{equation}
\begin{proposition}
\label{hitting-point}
For every $y\in\R^d\setminus\{x\}$,
\[
\bigl\{\ell^y >0\bigr\} = \{y\in\mathcal{R}\},\qquad \N_x
\mbox{ a.e. and }\N ^{(1)}_x\mbox{ a.s.}
\]
\end{proposition}
\begin{pf}
Fix $y\in\R^d$, and consider the function
$u(x):=\N_x(\ell^y>0)$, for every $x\in\R^d\setminus\{y\}$. By
simple scaling
and rotational invariance
arguments (see the proof of Proposition V.9(i) in \cite{Zurich} for a
similar argument), we have
\[
u(x)=C_d|x-y|^{-2}
\]
with a certain constant $C_d>0$ depending only on $d$.
On the other hand, an easy application of
the special Markov property \cite{LG0} shows that, for
every $r>0$, and every $x\in B(y,r)^c$, we have
\[
u(x)=\N_x \biggl[1-\exp \biggl(-\int X^{B(y,r)^c}(\mathrm{d}z)
u(z) \biggr) \biggr],
\]
where $X^{B(y,r)^c}$ stands for the exit measure of the Brownian snake
from the
open set $B(y,r)^c$. Theorem V.4 in \cite{Zurich} now shows that
the function $u$ must solve the partial differential
equation $\Delta u= 4 u^2$ in $\R^d\setminus\{y\}$. It
easily follows that $C_d=2 - d/2$.
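To make the determination of the constant explicit, one may check directly that $u(x)=C|x-y|^{-2}$ solves $\Delta u=4u^2$ on $\R^d\setminus\{y\}$ exactly when $C=2-d/2$ (a routine computation, included here for the reader's convenience). Writing $r=|x-y|$ and using the radial Laplacian,

```latex
\Delta\bigl(Cr^{-2}\bigr)
 = C\Bigl(\partial_r^2 + \frac{d-1}{r}\,\partial_r\Bigr) r^{-2}
 = C\bigl(6 - 2(d-1)\bigr) r^{-4}
 = C(8-2d)\,r^{-4},
\qquad
4\bigl(Cr^{-2}\bigr)^2 = 4C^2 r^{-4},
```

so that $C(8-2d)=4C^2$, whence $C=2-d/2$; note that $C>0$ precisely when $d\leq3$.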
The preceding line of reasoning
also applies to the function $v(x):= \N_x(y\in\mathcal{R})$
(see \cite{Zurich}, page 91), and shows
that we have $v(x)=(2-d/2)|x-y|^{-2}= u(x)$
for every $x\in\R^d\setminus\{y\}$---note that this formula for $v$
can also be
derived from \cite{DIP}, Theorem~1.3 and the connection between the
Brownian snake and
super-Brownian motion. Recalling \eqref{inclusion-hitting}, this is
enough to conclude
that
\begin{equation}
\label{hitting-tech0} \bigl\{\ell^y >0\bigr\} = \{y\in\mathcal{R}\},\qquad
\N_x\mbox{ a.e.}
\end{equation}
for every $x\in\R^d\setminus\{y\}$.
We now want to obtain that the equality in \eqref{hitting-tech0} also
holds $\N^{(1)}_x$ a.s.
Note that, for every fixed $x$, we could use a scaling argument to get that
$\{\ell^y >0\} = \{y\in\mathcal{R}\}$, $\N^{(1)}_x$ a.s., for
$\lambda
_d$ a.e. $y\in\R^d$, where we recall that
$\lambda_d$ stands for Lebesgue measure on~$\R^d$. In order
to get the more precise assertion of the proposition, we use a
different method.
By translation invariance, we may assume that $x=0$ and we fix $y\in\R
^d\setminus\{0\}$.
We set $T_y:=\inf\{s\geq0 \dvtx \wh W_s=y\}$. Also, for every $s>0$,
we set
\[
\wt\ell^y_s:= \liminf_{\ve\to0} \bigl(
\lambda_d\bigl(B(y,\ve)\bigr) \bigr)^{-1}\int
_0^s \,\mathrm{d}r\, \mathbf{1}_{\{|\wh W_r-y|\leq\ve\}}.
\]
Note that $\wt\ell^y_\gamma= \ell^y$, $\N_0$ a.e. and $\N^{(1)}_0$ a.s.
We then claim that, for every $s>0$,
\begin{equation}
\label{hitting-tech1} \{T_y\leq s \} = \bigl\{\wt\ell^y_s
>0 \bigr\},\qquad \N_0\mbox{ a.e.}
\end{equation}
The inclusion $\{\wt\ell^y_s >0\}\subset\{T_y\leq s\} $ is obvious.
In order to prove the
reverse inclusion, we argue by contradiction and assume that
\[
\N_0\bigl(T_y\leq s, \wt\ell^y_s
=0\bigr) >0.
\]
Note that $\N_0(T_y=s)=0$ [because $\N_0(\wh W_s=y)=0$], and so we
have also
$\N_0(T_y< s,\wt\ell^y_s =0) >0$. For every $\eta>0$, let
\[
T^{(\eta)}_y:= \inf \bigl\{r\geq T_y \dvtx
\zeta_r\leq(\zeta _{T_y} - \eta)^+ \bigr\}.
\]
Notice that, by the properties of the Brownian snake, the path
$W_{T^{(\eta)}_y}$ is just
$W_{T_y}$ stopped at time $(\zeta_{T_y}-\eta)^+$.
From the strong Markov property at time $T_y$, we easily get that
$T^{(\eta)}_y\da T_y$
as $\eta\da0$, $\N_0$ a.e. on $\{T_y<\infty\}$. Hence, on the event
$\{
T_y<s\}$, we have also $T^{(\eta)}_y<s$
for $\eta$ small enough, $\N_0$ a.e. Therefore, we can find $\eta>0$
such that
\[
\N_0 \bigl(T_y< s,\wt\ell^y_{T^{(\eta)}_y}
=0 \bigr) >0.
\]
However, using the strong
Markov property at time $T^{(\eta)}_y$, and Lemma V.5 and Proposition
V.9(i) in \cite{Zurich}, we immediately
see that, conditionally on the past up to time $T^{(\eta)}_y$, the
event $\{\wh W_r\neq y, \forall r\geq T^{(\eta)}_y\}$
occurs with positive probability. Hence, we get
\[
\N_0\bigl(T_y<s, \wt\ell^y_\gamma=0
\bigr) >0.
\]
Since $\wt\ell^y_\gamma= \ell^y$, this contradicts \eqref
{hitting-tech0}, and this contradiction completes
the proof of our claim \eqref{hitting-tech1}.
Finally, we observe that, for every $s\in(0,1)$, the law of
$(W_r)_{0\leq r\leq s}$ under $\N^{(1)}_0$
is absolutely continuous with respect to the law of the same process
under $\N_0$ (this is
a straightforward consequence of the similar property for the It\^o
excursion measure and the
law of the normalized Brownian excursion; see, e.g., \cite{RY}, Chapter~XII). Hence, \eqref{hitting-tech1} also gives, for every
$s\in(0,1)$,
\[
\{T_y\leq s \} = \bigl\{\wt\ell^y_s >0
\bigr\},\qquad \N ^{(1)}_0\mbox{ a.s.},
\]
and the fact that the equality in \eqref{hitting-tech0} also holds $\N
^{(1)}_0$ a.s. readily follows.
\end{pf}
\section{Asymptotics for the range of tree-indexed random walk}\label{sec3}
Throughout this section, we consider only integers $n\geq1$ such that
$\Pi_\mu( \#\mathcal{T}=n)>0$
(and when we let $n\to\infty$, we mean along such values).
For every such integer $n$,
let $(\mathcal{T}_n,(Z^n(u))_{u\in\mathcal{T}_n})$ be distributed
according to
$\Pi^*_{\mu,\theta}(\cdot\mid \#\mathcal{T}=n)$. Then $\mathcal
{T}_n$ is a
Galton--Watson tree with offspring
distribution $\mu$ conditioned to have $n$ vertices, and conditionally
on $\mathcal{T}_n$,
$(Z^n(u))_{u\in\mathcal{T}_n}$ is a random walk with jump distribution
$\theta$
indexed by $\mathcal{T}_n$.
We set, for every $t> 0$ and $x\in\mathbb{R}^d$,
\[
p_t(x):= \frac{1}{(2\pi t)^{d/2} \sqrt{\operatorname{det} M_\theta}} \exp \biggl( -\frac{x\cdot M_\theta^{-1}x}{2 t} \biggr),
\]
where $x\cdot y$ stands for the usual scalar product in $\R^d$.
For every $a\in\mathbb{Z}^d$, we also set
\[
L_n(a):= \sum_{u\in\mathcal{T}_n}
\mathbf{1}_{\{Z^n(u)=a\}}.
\]
\begin{lemma}
\label{contiTlocal}
For every $\ve>0$, there exists a constant $C_\ve$ such that, for
every $n$
and every $b\in\mathbb{Z}^d$ with $|b|\geq\ve n^{1/4}$,
\[
E \bigl[\bigl(L_n(b)\bigr)^2 \bigr] \leq
C_\ve n^{2-{d}/{2}}.
\]
Furthermore, for every $x,y\in\mathbb{R}^d\setminus\{0\}$, and for
every choice of the sequences $(x_n)$ and $(y_n)$ in $\mathbb{Z}^d$
such that $n^{-1/4}x_n\la x$ and $n^{-1/4}y_n\la y$ as $n\to\infty$,
we have
\[
\lim_{n\to\infty} n^{{d}/{2}-2} E \bigl[ L_n(x_n)
L_n(y_n) \bigr]=\varphi(x,y),
\]
where
\begin{eqnarray*}
\varphi(x,y)&:=& {\rho^4} \int_{(\mathbb{R}_+)^3}
\,\mathrm{d}r_1 \,\mathrm {d}r_2 \,\mathrm{d}r_3
(r_1+r_2+r_3)e^{-\rho^2(r_1+r_2+r_3)^2/2}\\
&&{}\times \int
_{\mathbb{R}^d} \,\mathrm{d}z\, p_{r_1}(z)p_{r_2}(x-z)p_{r_3}(y-z).
\end{eqnarray*}
The function $\varphi$ is continuous on $(\mathbb{R}^d\setminus\{0\})^2$.
\end{lemma}
\begin{remark*} The function $\varphi$ is in fact continuous on $(\R
^d)^2$. Since we will not need this result, we leave the
proof to the reader.
\end{remark*}
\begin{pf*}{Proof of Lemma \ref{contiTlocal}} We first establish the second assertion of the lemma. We let
$u^n_0,u^n_1,\ldots,u^n_{n-1}$ be the vertices of $\mathcal{T}_n$
listed in lexicographical order. By definition,
\[
L_n(x_n)=\sum_{i=0}^{n-1}
\mathbf{1}_{\{Z^n(u^n_i)=x_n\}},
\]
so that
\[
E \bigl[L_n(x_n)L_n(y_n) \bigr]
=E \Biggl[ \sum_{i=0}^{n-1}\sum
_{j=0}^{n-1} \mathbf{1}_{\{Z^n(u^n_i)=x_n, Z^n(u^n_j)=y_n\}} \Biggr].
\]
Let $H^n$ be the height function of the tree $\mathcal{T}_n$, so that
$H^n_i=|u^n_i|$ for every
$i\in\{0,1,\ldots,n-1\}$. If $i,j\in\{0,1,\ldots,n-1\}$, we also use
the notation $\check H^n_{i,j}=|u^n_i\wedge u^n_j|$ for the generation
of the most
recent common ancestor to $u^n_i$ and $u^n_j$, and note that
\begin{equation}
\label{MRCA} \Bigl|\check H^n_{i,j} - \min
_{i\wedge j\leq k\leq i\vee j} H^n_k \Bigr| \leq1.
\end{equation}
Write $\pi_k=\theta^{*k}$ for the transition kernels of
the random walk with jump distribution $\theta$.
By conditioning with respect to the tree $\mathcal{T}_n$, we get
\begin{eqnarray}
\label{Tlocal1}&& E \bigl[L_n(x_n)L_n(y_n)
\bigr]\nonumber\\
&&\qquad=E \Biggl[ \sum_{i=0}^{n-1}\sum
_{j=0}^{n-1} \sum
_{a\in\mathbb{Z}^d} \pi_{\check H^n_{i,j}}(a) \pi_{H^n_i-\check H^n_{i,j}}(x_n-a)
\pi _{H^n_j-\check H^n_{i,j}}(y_n-a) \Biggr]
\\
&&\qquad= n^2 E \biggl[\int_0^1\int
_0^1 \,\mathrm{d}s \,\mathrm{d}t\, \Phi
^n_{x_n,y_n} \bigl(H^n_{\lfloor ns\rfloor},H^n_{\lfloor nt\rfloor},
\check H^n_{\lfloor ns\rfloor,\lfloor nt\rfloor} \bigr) \biggr],\nonumber
\end{eqnarray}
where we have set, for all integers $k,\ell,m\geq0$ such that
$k\wedge\ell\geq m$,
\[
\Phi^n_{x_n,y_n}(k,\ell,m):= \sum
_{a\in\mathbb{Z}^d}\pi_m(a) \pi _{k-m}(x_n-a)
\pi_{\ell-m}(y_n-a).
\]
In the remaining part of the proof, we assume that $\theta$ is
aperiodic [meaning that the
subgroup generated by $\{k\geq0\dvtx \pi_k(0)>0\}$ is $\Z$]. Only
minor modifications are
needed to treat the general case. We can then use
the local limit theorem, in a form that can be obtained by combining
Theorems 2.3.9 and 2.3.10
in \cite{LL}: there exists a sequence $(\delta_n)$ converging to $0$ such
that, for every $n\geq1$,
\begin{equation}
\label{LLT} \sup_{a\in\mathbb{Z}^d} \biggl( \biggl(1 +
\frac{|a|^2}{n} \biggr) n^{d/2} \bigl|\pi_n(a)-
p_n(a) \bigr| \biggr) \leq\delta_n.
\end{equation}
Let $(k_n),(\ell_n),(m_n)$ be three sequences of positive integers such
that $n^{-1/2}k_n\rightarrow u$,
$n^{-1/2}\ell_n\rightarrow v$ and $n^{-1/2}m_n\rightarrow w$, where
$0<w<u\wedge v$. Write
\begin{eqnarray*}
&&n^{d/2} \Phi^n_{x_n,y_n}(k_n,
\ell_n,m_n)\\
&&\qquad = n^{3d/4} \int_{\mathbb{R}^d}
\,\mathrm{d}z\, \pi_{m_n}\bigl(\bigl\lfloor zn^{1/4}\bigr\rfloor
\bigr) \pi_{k_n-m_n}\bigl(x_n-\bigl\lfloor zn^{1/4}
\bigr\rfloor\bigr)\pi_{\ell
_n-m_n}\bigl(y_n-\bigl\lfloor
zn^{1/4}\bigr\rfloor\bigr),
\end{eqnarray*}
and note that, for every fixed $z\in\mathbb{R}^d$,
\begin{eqnarray*}
\lim_{n\to\infty} n^{d/4} \pi_{m_n}\bigl(\bigl
\lfloor zn^{1/4}\bigr\rfloor\bigr)&=& p_w(z),
\\
\lim_{n\to\infty} n^{d/4} \pi_{k_n-m_n}
\bigl(x_n-\bigl\lfloor zn^{1/4}\bigr\rfloor\bigr)&=&
p_{u-w}(x-z),
\\
\lim_{n\to\infty} n^{d/4} \pi_{\ell_n-m_n}
\bigl(y_n-\bigl\lfloor zn^{1/4}\bigr\rfloor \bigr)&=&
p_{v-w}(y-z),
\end{eqnarray*}
by \eqref{LLT}. These convergences even hold uniformly in $z$. It then
follows that
\begin{eqnarray}
\label{Tlocal2} \lim_{n\to\infty} n^{d/2}
\Phi^n_{x_n,y_n}(k_n,\ell_n,m_n)
&=& \int_{\mathbb{R}^d}\, \mathrm{d}z\, p_w(z)p_{u-w}(x-z)p_{v-w}(y-z)
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
&=:& \Psi_{x,y}(u,v,w).
\end{eqnarray}
Indeed, using \eqref{LLT} again, we have, for every
$K>2(|x|\vee|y|)+2$ and every sufficiently large $n$,
\begin{eqnarray*}
&&n^{3d/4} \int_{\{|z|\geq K+1\}} \,\mathrm{d}z\, \pi_{m_n}
\bigl(\bigl\lfloor zn^{1/4}\bigr\rfloor\bigr) \pi_{k_n-m_n}
\bigl(x_n-\bigl\lfloor zn^{1/4}\bigr\rfloor\bigr)
\pi_{\ell
_n-m_n}\bigl(y_n-\bigl\lfloor zn^{1/4}\bigr
\rfloor\bigr)
\\
&&\qquad \leq C \int_{\{|z|\geq K+1\}} \,\mathrm{d}z \biggl(\frac
{1}{(|z|-1)^2}
\biggr)^3,
\end{eqnarray*}
with a constant $C$ independent of $n$ and $K$. The right-hand side of
the last display
tends to $0$ as $K$ tends to infinity. Together with the previously
mentioned uniform
convergence, this suffices to justify \eqref{Tlocal2}.
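To spell out one instance of the pointwise convergences preceding \eqref{Tlocal2}, set $a_n=\lfloor zn^{1/4}\rfloor$ and recall that $n^{-1/2}m_n\to w>0$. By \eqref{LLT}, $\pi_{m_n}(a_n)=p_{m_n}(a_n)+O(\delta_{m_n}m_n^{-d/2})$, where the error term is $o(n^{-d/4})$, while

```latex
n^{d/4}\,p_{m_n}(a_n)
 = \bigl(2\pi\,n^{-1/2}m_n\bigr)^{-d/2}
   \bigl(\operatorname{det} M_\theta\bigr)^{-1/2}
   \exp\biggl(-\frac{a_n\cdot M_\theta^{-1}a_n}{2m_n}\biggr)
 \build{\la}_{n\to\infty}^{} p_w(z),
```

since $n^{-1/2}m_n\to w$ and $n^{-1/2}\,a_n\cdot M_\theta^{-1}a_n\to z\cdot M_\theta^{-1}z$, so that $a_n\cdot M_\theta^{-1}a_n/(2m_n)\to z\cdot M_\theta^{-1}z/(2w)$.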
By \cite{LG1}, Theorem~1.15, we have
\[
\biggl(\frac{\rho}{2} n^{-1/2} H^n_{\lfloor nt\rfloor}
\biggr)_{0\leq
t\leq
1} \build{\la}_{n\to\infty}^{\mathrm{(d)}} (
\mathbf{e}_t )_{0\leq t\leq1},
\]
where $(\mathbf{e}_t)_{0\leq t\leq1}$
is a normalized Brownian excursion, and we recall that $\rho^2$ is the
variance of $\mu$. The latter convergence holds in the sense of the
weak convergence of laws on the Skorokhod space $\mathbb{D}([0,1],\R
_{+})$ of c\`adl\`ag functions from $[0,1]$
into~$\R_+$. Using the Skorokhod representation theorem,
we may and will assume that this convergence holds almost surely, uniformly
in $t\in[0,1]$. Recalling~\eqref{MRCA}, it follows that we have also
\[
\frac{\rho}{2} n^{-1/2} \check H^n_{\lfloor ns\rfloor,\lfloor
nt\rfloor}
\build{\la}_{n\to\infty}^{}\min_{s\wedge t\leq r\leq s\vee t} \mathbf
{e}_r =: m_\mathbf{e}(s,t),
\]
uniformly in $s,t\in[0,1]$, a.s.
As a consequence of \eqref{Tlocal2} and the preceding observations, we have,
for every $s,t\in(0,1)$ with $s\neq t$,
\begin{eqnarray}
\label{Tlocal3} &&\lim_{n\to\infty} n^{d/2}
\Phi^n_{x_n,y_n} \bigl(H^n_{\lfloor
ns\rfloor
},H^n_{\lfloor nt\rfloor},
\check H^n_{\lfloor ns\rfloor,\lfloor nt\rfloor} \bigr)
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
&&\qquad = \Psi_{x,y} \biggl(
\frac{2}{\rho}\mathbf{e}_s,\frac{2}{\rho
}\mathbf
{e}_t,\frac{2}{\rho}m_\mathbf{e}(s,t) \biggr),\qquad
\mbox{a.s.}
\end{eqnarray}
We claim that we can deduce from \eqref{Tlocal1} and \eqref{Tlocal3} that
\begin{eqnarray}
\label{Tlocal4} &&\lim_{n\to\infty} n^{{d}/{2}-2} E \bigl[
L_n(x_n) L_n(y_n) \bigr]
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
&&\qquad= E
\biggl[ \int_0^1\int_0^1
\,\mathrm{d}s \,\mathrm{d}t\, \Psi_{x,y} \biggl(\frac{2}{\rho}
\mathbf{e}_s,\frac{2}{\rho}\mathbf{e}_t,
\frac
{2}{\rho
}m_\mathbf{e}(s,t) \biggr) \biggr].
\end{eqnarray}
Note that the right-hand side of \eqref{Tlocal4} coincides with the
function $\varphi(x,y)$
in the lemma. To see this, we can use Theorem III.6 of \cite{Zurich} to
verify that the joint density of the triple
\[
\bigl(m_\mathbf{e}(s,t), \mathbf{e}_s-m_\mathbf{e}(s,t),
\mathbf {e}_t-m_\mathbf{e}(s,t) \bigr)
\]
when $s$ and $t$ are chosen uniformly over $[0,1]$, independently and
independently of $\mathbf{e}$,
is
\[
16(r_1+r_2+r_3)\exp\bigl(-2(r_1+r_2+r_3)^2
\bigr).
\]
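Indeed, the variables $\frac{2}{\rho}\mathbf{e}_s$, $\frac{2}{\rho}\mathbf{e}_t$ and $\frac{2}{\rho}m_\mathbf{e}(s,t)$ appearing in $\Psi_{x,y}$ are determined by this triple, and the change of variables $r_i=\frac{2}{\rho}a_i$ gives (a routine verification, included here for the reader's convenience)

```latex
\int_{(\R_+)^3}\mathrm{d}a_1\,\mathrm{d}a_2\,\mathrm{d}a_3\,
 16(a_1+a_2+a_3)e^{-2(a_1+a_2+a_3)^2}
 \int_{\R^d}\mathrm{d}z\,
 p_{2a_1/\rho}(z)\,p_{2a_2/\rho}(x-z)\,p_{2a_3/\rho}(y-z)
 = \varphi(x,y),
```

since $a_1+a_2+a_3=\frac{\rho}{2}(r_1+r_2+r_3)$, $\mathrm{d}a_1\,\mathrm{d}a_2\,\mathrm{d}a_3=(\rho/2)^3\,\mathrm{d}r_1\,\mathrm{d}r_2\,\mathrm{d}r_3$ and $16\,(\rho/2)^4=\rho^4$.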
So the proof of the second assertion will be complete if we can justify
\eqref{Tlocal4}. By Fatou's lemma,
\eqref{Tlocal1} and \eqref{Tlocal3},
we have first
\[
\liminf_{n\to\infty} n^{{d}/{2}-2} E \bigl[ L_n(x_n)
L_n(y_n) \bigr] \geq E \biggl[ \int_0^1
\int_0^1 \,\mathrm{d}s \,\mathrm{d}t\, \Psi
_{x,y} \biggl(\frac{2}{\rho}\mathbf{e}_s,
\frac{2}{\rho}\mathbf{e}_t,\frac
{2}{\rho
}m_\mathbf{e}(s,t)
\biggr) \biggr].
\]
Furthermore, dominated convergence shows that, for every $K>0$,
\begin{eqnarray*}
&&\lim_{n\to\infty} E \biggl[\int_0^1
\int_0^1 \,\mathrm{d}s \,\mathrm{d}t
\bigl(n^{d/2}\Phi ^n_{x_n,y_n}\bigl(H^n_{\lfloor ns\rfloor},H^n_{\lfloor nt\rfloor},
\check H^n_{\lfloor ns\rfloor,\lfloor nt\rfloor}\bigr)\wedge K \bigr) \biggr]
\\
&&\qquad = E \biggl[\int_0^1\int
_0^1 \,\mathrm{d}s \,\mathrm{d}t \biggl(\Psi
_{x,y} \biggl(\frac{2}{\rho}\mathbf{e}_s,
\frac{2}{\rho}\mathbf {e}_t,\frac
{2}{\rho}m_\mathbf{e}(s,t)
\biggr) \wedge K \biggr) \biggr].
\end{eqnarray*}
Write $\Gamma_n(s,t)= n^{d/2}\Phi^n_{x_n,y_n}(H^n_{\lfloor ns\rfloor
},H^n_{\lfloor nt\rfloor}, \check
H^n_{\lfloor ns\rfloor,\lfloor nt\rfloor})$ to simplify notation. In
view of the preceding comments, it will be enough to verify that
\begin{equation}
\label{Tlocal-key} \lim_{K\to\infty} \biggl( \limsup
_{n\to\infty} E \biggl[\int_0^1\int
_0^1 \,\mathrm{d}s \,\mathrm{d}t\,
\Gamma_n(s,t) \mathbf{1}_{\{\Gamma_n(s,t)>K\}} \biggr] \biggr) =0.
\end{equation}
To this end, we will make use of the bound
\begin{equation}
\label{boundpi} \sup_{k\geq0} \pi_k(x)\leq M
\bigl(|x|^{-d}\wedge1\bigr),
\end{equation}
which holds for every $x\in\mathbb{Z}^d$ with a constant $M$
independent of $x$. This bound can be
obtained easily by combining \eqref{LLT} and Proposition~2.4.6 in
\cite{LL}. Then let $k,\ell,m\geq0$
be integers such that $k\wedge\ell\geq m$, and recall that
\[
\Phi^n_{x_n,y_n}(k,\ell,m)= \sum_{a\in\mathbb{Z}^d}
\pi_m(a) \pi _{k-m}(x_n-a)
\pi_{\ell-m}(y_n-a).
\]
Fix $\ve>0$ such that $|x|\wedge|y|>2\ve$.
Consider first the contribution to the sum in the right-hand side
coming from values of
$a$ such that $|a|\leq\ve n^{1/4}$. For such values of~$a$ (and
assuming that $n$ is large enough), the estimate
\eqref{boundpi} allows us to bound both $\pi_{k-m}(x_n-a)$ and $\pi
_{\ell-m}(y_n-a)$ by $M\ve^{-d}n^{-d/4}$.
On the other hand, if $|a|\geq\ve n^{1/4}$, we can bound $\pi_m(a)$ by
$M\ve^{-d}n^{-d/4}$, whereas
\eqref{LLT} shows that
the sum
\[
\sum_{|a|\geq\ve n^{1/4}} \pi_{k-m}(x_n-a)
\pi_{\ell-m}(y_n-a)
\]
is bounded above by $c_1((k-m)^{-d/2}\wedge(\ell-m)^{-d/2}\wedge1)$
for some constant $c_1$. Summarizing, we get the bound
\begin{eqnarray*}
&&\Phi^n_{x_n,y_n}(k,\ell,m)\\
&&\qquad\leq M^2
\ve^{-2d}n^{-d/2} + c_1M\ve ^{-d}n^{-d/4}
\bigl((k-m)^{-d/2}\wedge(\ell-m)^{-d/2}\wedge1\bigr)
\\
&&\qquad\leq c_{1,\ve}n^{-d/2} + c_{2,\ve}n^{-d/4}
\bigl((k+\ell-2m)^{-d/2} \wedge1\bigr),
\end{eqnarray*}
where $c_{1,\ve}$ and $c_{2,\ve}$ are constants that do not depend on
$n,k,\ell,m$. Then observe that, for every
$s,t\in(0,1)$,
\[
H^n_{\lfloor ns\rfloor}+H^n_{\lfloor nt\rfloor}-2 \check
H^n_{\lfloor ns\rfloor,\lfloor nt\rfloor} = d_n \bigl(u^n_{\lfloor
ns\rfloor},u^n_{\lfloor nt\rfloor}
\bigr),
\]
where $d_n$ denotes the usual graph distance on $\mathcal{T}_n$. From
the preceding bound, we thus get
\[
\Gamma_n(s,t) \leq c_{1,\ve} + c_{2,\ve}n^{d/4}
\bigl(d_n\bigl(u^n_{\lfloor
ns\rfloor},u^n_{\lfloor nt\rfloor}
\bigr)^{-d/2}\wedge1 \bigr).
\]
It follows that, for every $K>0$,
\begin{eqnarray*}
&&\int_0^1\int_0^1
\,\mathrm{d}s \,\mathrm{d}t\, \Gamma_n(s,t) \mathbf {1}_{\{\Gamma_n(s,t)>c_{1,\ve}+ c_{2,\ve}K\}}
\\
&&\qquad \leq\int_0^1\int_0^1
\,\mathrm{d}s \,\mathrm{d}t \bigl(c_{1,\ve} + c_{2,\ve}n^{d/4}
\bigl(d_n\bigl(u^n_{\lfloor ns\rfloor},u^n_{\lfloor nt\rfloor
}
\bigr)^{-d/2}\wedge1\bigr) \bigr) \\
&&\qquad\quad{}\times\mathbf{1}_{ \{ n^{d/4}d_n(u^n_{\lfloor ns\rfloor},u^n_{\lfloor
nt\rfloor})^{-d/2} >K \}}
\\
&&\qquad = n^{-2} \sum_{u,v\in\mathcal{T}_n}
\bigl(c_{1,\ve} + c_{2,\ve
}n^{d/4} \bigl(d_n(u,v)^{-d/2}
\wedge1\bigr) \bigr) \mathbf{1}_{ \{ d_n(u,v)< K^{-2/d} n^{1/2} \}}.
\end{eqnarray*}
By an estimate found in Theorem~1.3 of \cite{DJ}, there exists a
constant $c_0$ that only depends
on $\mu$, such that, for every integer $k\geq1$,
\begin{equation}
\label{estidistance} E \bigl[\#\bigl\{(u,v)\in\mathcal{T}_n\times
\mathcal{T}_n\dvtx d_n(u,v)=k\bigr\} \bigr] \leq
c_0kn.
\end{equation}
It then follows that
\begin{eqnarray*}
&&E \biggl[\int_0^1\int_0^1
\,\mathrm{d}s \,\mathrm{d}t\, \Gamma_n(s,t) \mathbf{1}_{\{\Gamma_n(s,t)>c_{1,\ve}+ c_{2,\ve}K\}}
\biggr]
\\
&&\qquad \leq n^{-1}\bigl(c_{1,\ve} + c_{2,\ve}n^{d/4}
\bigr) + c_0 n^{-1} \sum_{k=1}^{\lfloor K^{-2/d} n^{1/2}\rfloor}
k\bigl(c_{1,\ve} + c_{2,\ve} n^{d/4} k^{-d/2}
\bigr).
\end{eqnarray*}
It is now elementary to verify that
the right-hand side of the preceding display has a limit $g(K)$ when
$n\to\infty$, and that $g(K)$ tends to $0$ as $K\to\infty$ (note that
we use the fact
that $d\leq3$). This completes the proof
of \eqref{Tlocal-key} and of the second assertion of the lemma.
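For completeness, the elementary verification mentioned above can be carried out as follows. The term $n^{-1}(c_{1,\ve}+c_{2,\ve}n^{d/4})$ vanishes as $n\to\infty$ because $d\leq3$, and

```latex
c_0 n^{-1}\sum_{k=1}^{\lfloor K^{-2/d}\sqrt{n}\rfloor} c_{1,\ve}\,k
 \build{\la}_{n\to\infty}^{} \frac{c_0\,c_{1,\ve}}{2}\,K^{-4/d},
\qquad
c_0 n^{-1}\sum_{k=1}^{\lfloor K^{-2/d}\sqrt{n}\rfloor}
 c_{2,\ve}\,n^{d/4}\,k^{1-d/2}
 \build{\la}_{n\to\infty}^{} \frac{c_0\,c_{2,\ve}}{2-d/2}\,K^{1-4/d},
```

so that $g(K)$ is a linear combination of $K^{-4/d}$ and $K^{1-4/d}$, and both exponents are negative exactly because $d\leq3$ (i.e., $4/d>1$).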
The proof of the first assertion is similar and easier. We first note that
\[
E \bigl[L_n(b)^2 \bigr]=E \biggl[ \sum
_{u,v\in\mathcal{T}_n} \Phi ^n_{b,b}\bigl(|u|,|v|,|u\wedge
v|\bigr) \biggr],
\]
where the function $ \Phi^n_{b,b}$ is defined as above. Then, assuming that
$|b|\geq2\ve n^{1/4}$, the same arguments as in the first part of the
proof give the
bound
\[
\Phi^n_{b,b}\bigl(|u|,|v|,|u\wedge v|\bigr)\leq c_{1,\ve}
n^{-d/2} + c_{2,\ve} n^{-d/4} \bigl(d_n(u,v)^{-d/2}
\wedge1\bigr).
\]
By summing over all choices of $u$ and $v$, it follows that
\begin{eqnarray*}
&&E \bigl[L_n(b)^2 \bigr]\\
&&\qquad\leq c_{1,\ve}n^{2-d/2}
\\
&&\qquad\quad{}+ c_{2,\ve
}n^{-d/4} \biggl( n + E \biggl[\sum
_{u,v\in\mathcal{T}_n, 1\leq d_n(u,v)\leq\sqrt{n}} d_n(u,v)^{-d/2} \biggr] +
n^2\times n^{-d/4} \biggr)
\\
&&\qquad\leq(c_{1,\ve}+2c_{2,\ve})n^{2-d/2}\\
&&\qquad\quad{} +
c_{2,\ve}n^{-d/4} \sum_{k=1}^{\lfloor\sqrt{n}\rfloor}
k^{-d/2} E \bigl[\#\bigl\{(u,v)\in\mathcal{T}_n\times\mathcal{T}_n\dvtx
d_n(u,v)=k\bigr\} \bigr],
\end{eqnarray*}
and the bound stated in the first assertion easily follows from \eqref
{estidistance}.
Let us finally establish the continuity of $\varphi$. We fix $\ve>0$ and
verify that $\varphi$ is continuous on the set $\{|x|\geq2 \ve,|y|\geq
2 \ve\}$. We split
the integral in $\mathrm{d}z$ in two parts:
\begin{longlist}[--]
\item[--] The integral over $|z|\leq\ve$. Write $\varphi_{1,\ve}(x,y)$
for the contribution of this integral. We observe that,
if $|z|\leq\ve$, the function $x\mapsto p_{r_2}(x-z)$ is Lipschitz
uniformly in $z$ and in $r_2$ on the set
$\{|x|\geq2 \ve\}$, and a similar property holds for the function
$y\mapsto p_{r_3}(y-z)$. It follows that
$\varphi_{1,\ve}$ is a Lipschitz function of $(x,y)$ on the set $\{
|x|\geq2 \ve,|y|\geq2 \ve\}$.
\item[--] The integral over $|z|>\ve$. Write $\varphi_{2,\ve}(x,y)$
for the contribution of this integral. Note that
if $(u_n,v_n)_{n\geq1}$ is a sequence in $\mathbb{R}^d\times\mathbb
{R}^d$ such that $|u_n|\wedge|v_n|\geq2\ve$ for every $n$, and
$(u_n,v_n)$ converges to $(x,y)$
as $n\to\infty$, we have, for every fixed $r_1,r_2,r_3>0$,
\begin{eqnarray*}
&&\int_{\{|z|>\ve\}} \,\mathrm{d}z\, p_{r_1}(z)p_{r_2}(u_n-z)p_{r_3}(v_n-z)
\\
&&\qquad\build\la_{n\to
\infty}^{} \int_{\{|z|>\ve\}}
\,\mathrm{d}z\, p_{r_1}(z)p_{r_2}(x-z)p_{r_3}(y-z).
\end{eqnarray*}
We can then use dominated convergence, since there exist constants
$c_\ve$ and $\widetilde c_\ve$
that depend only on $\ve$, such that
\[
\int_{\{|z|>\ve\}} \,\mathrm{d}z\, p_{r_1}(z)p_{r_2}(u_n-z)p_{r_3}(v_n-z)
\leq c_\ve p_{r_2+r_3}(u_n-v_n) \leq
\widetilde c_\ve(r_2+r_3)^{-d/2},
\]
and the right-hand side is integrable for the measure
$(r_1+r_2+r_3)\times\break e^{-\rho^2(r_1+r_2+r_3)^2/2}\,\mathrm{d}r_1\,\mathrm
{d}r_2\,\mathrm{d}r_3$.
It follows that $\varphi_{2,\ve}$ is also continuous on the set $\{
|x|\geq2 \ve,|y|\geq2 \ve\}$.
\end{longlist}
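The two bounds used in this dominated-convergence argument follow from standard Gaussian kernel estimates (we sketch them here; the constants are not optimal). Since $t\mapsto t^{-d/2}e^{-\lambda/t}$ is bounded on $(0,\infty)$ for every $\lambda>0$, the quantity $c_\ve:=\sup_{r_1>0}\sup_{|z|>\ve}p_{r_1}(z)$ is finite, and the Chapman--Kolmogorov identity for the kernels $(p_t)$ yields

```latex
\int_{\{|z|>\ve\}}\mathrm{d}z\, p_{r_1}(z)p_{r_2}(u_n-z)p_{r_3}(v_n-z)
 \leq c_\ve\int_{\R^d}\mathrm{d}z\, p_{r_2}(u_n-z)\,p_{r_3}(v_n-z)
 = c_\ve\, p_{r_2+r_3}(u_n-v_n),
```

and $p_{r_2+r_3}(u_n-v_n)\leq(2\pi(r_2+r_3))^{-d/2}(\operatorname{det} M_\theta)^{-1/2}$, which gives the second bound.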
The preceding considerations complete the proof.
\end{pf*}
In what follows, we use the notation $W^{(1)}=(W^{(1)}_s)_{0\leq s\leq1}$
for a process distributed according to $\N^{(1)}_0$. We recall a result
of Janson and Marckert \cite{JM} that will play an important role below.
As in the proof of Lemma~\ref{contiTlocal}, we let $u^n_0,u^n_1,\ldots,u^n_{n-1}$ be the vertices of $\mathcal{T}_n$
listed in lexicographical order. For every $j\in\{0,1,\ldots,n-1\}$
write $Z^n_j=Z^n(u^n_j)$ for the spatial location of $u^n_j$, and $Z^n_n=0$
by convention. Recalling our assumption \eqref{hypoJM}, we get from
\cite{JM}, Theorem~2, that
\begin{equation}
\label{JaMa} \biggl(\sqrt{\frac{\rho}{2}} n^{-1/4}
Z^n_{\lfloor nt\rfloor} \biggr)_{0\leq t\leq1} \build{
\la}_{n\to\infty}^{\mathrm{(d)}} \bigl(M_\theta ^{1/2}
\widehat W^{(1)}_t \bigr)_{0\leq t\leq1},
\end{equation}
where as usual $M_\theta^{1/2}$ is the unique positive definite
symmetric matrix such that $M_\theta=(M_\theta^{1/2})^2$,
and the convergence holds in distribution in the Skorokhod space
$\mathbb{D}([0,1],\mathbb{R}^{d})$. Note that there are
two minor differences between \cite{JM} and the present setting. First,
\cite{JM} considers one-dimensional labels, whereas our
spatial locations take values in $\Z^d$. However, we can simply project
$Z^n(u)$ on the coordinate axes to get
tightness in the convergence \eqref{JaMa} from the results of \cite
{JM}, and convergence of finite-dimensional marginals
is easy, just as in the proof of Theorem~1 in \cite{JM}. Second, the
``discrete snake'' of \cite{JM} lists the labels encountered
when exploring the tree $\mathcal{T}_n$ in depth-first traversal (or contour
order), whereas we are here enumerating
the vertices in lexicographical order. Nevertheless, the very same
arguments that are used to relate the contour process and the height
function of a random tree (see~\cite{MaMo} or~\cite{LG1}, Section~1.6)
show that asymptotics for the discrete snakes of \cite{JM}
imply similar asymptotics for the labels listed in lexicographical
order of vertices.
In the next theorem, the notation $(l^x,x\in\R^d)$ stands for the
collection of local times of
$W^{(1)}$, which are defined as the continuous density of the
occupation measure of $W^{(1)}$ as in Proposition~\ref{densityZ}.
We define a constant $c>0$ by setting
\begin{equation}
\label{constant-c} c:= \frac{1}{\sigma} \sqrt{\frac{\rho}{2}},
\end{equation}
where $\sigma^2=(\operatorname{det} M_\theta)^{1/d}$ as previously. We also use
the notation $M_\theta^{-1/2}=(M_\theta^{1/2})^{-1}$.
\begin{theorem}
\label{convLT}
Let $x^1,\ldots,x^p \in\mathbb{R}^d\setminus\{0\}$, and let
$(x^1_n),\ldots, (x^p_n)$
be sequences in $\mathbb{Z}^d$ such that $\sqrt{\frac{\rho}{2}}
n^{-1/4} M_\theta^{-1/2}x^j_n\la x^j$ as $n\to\infty$, for every
$1\leq j\leq p$. Then
\[
\bigl(n^{{d}/{4}-1}L_n\bigl(x^1_n\bigr),
\ldots, n^{
{d}/{4}-1}L_n\bigl(x^p_n\bigr)
\bigr)\build{\la}_{n\to\infty}^{\mathrm{(d)}} \bigl(c^dl^{x^1},
\ldots,c^dl^{x^p} \bigr),
\]
where the
constant $c$ is given by \eqref{constant-c}.
\end{theorem}
\begin{remarks*}
(i) As mentioned in the \hyperref[sec1]{Introduction}, this result should
be compared with Theorem~1 in \cite{LZ2}, which deals with local
times of branching random walk in $\Z^d$ for $d=2$ or $3$. See also
\cite{BMJ}, Theorem~3.6 and \cite{DJ}, Theorem~1.1, for stronger versions
of the convergence in Theorem~\ref{convLT} when $d=1$.

(ii) It is likely that the result of Lemma~\ref{contiTlocal}
still holds when $x=0$ or $y=0$, and then the condition $x^j\neq0$
in the preceding theorem could
be removed, using also the remark after Lemma~\ref{contiTlocal}.
Proving this reinforcement of Lemma~\ref{contiTlocal} would however
require additional technicalities. Since this extension is not needed
in the proof of our main results, we will not
address this problem here.
\end{remarks*}
\begin{pf*}{Proof of Theorem \ref{convLT}}
To simplify the presentation, we give the details of the proof only in
the isotropic case where
$M_\theta=\sigma^2 \operatorname{Id}$ (the nonisotropic case is treated in
exactly the same manner at the
cost of a somewhat heavier notation). Our condition on the sequences
$(x^j_n)$ then just says that $c n^{-1/4} x^j_n\la x^j$ as $n\to\infty$.
By the Skorokhod
representation theorem, we may and will assume that the convergence
\eqref{JaMa} holds a.s. To obtain the result of the
theorem, it is then enough to verify that, if $x \in\mathbb
{R}^d\setminus\{0\}$ and $(x_n)$ is a sequence
in $\mathbb{Z}^d$ such that $c n^{-1/4} x_n\la x$ as $n\to\infty$,
we have
\begin{equation}
\label{convLTtech1} n^{{d}/{4}-1}L_n(x_n) \build{
\la}_{n\to\infty}^{\mathrm{(P)}} c^d l^x.
\end{equation}
To this end, fix $x$ and the sequence $(x_n)$, and for every $\ve\in
(0,|x|)$, let $g_\ve$
be a nonnegative continuous function on $\mathbb{R}^d$, with compact
support contained in the open ball of radius $\ve$
centered at $x$, and such that
\[
\int_{\R^{d}} g_\ve(y) \,\mathrm{d}y = 1.
\]
It follows from \eqref{JaMa} (which we assume to hold a.s.) that, for
every fixed $\ve\in(0,|x|)$,
\[
\int_0^1 g_\ve\bigl(c
n^{-1/4} Z^n_{\lfloor nt\rfloor}\bigr) \,\mathrm{d}t \build {
\la}_{n\to\infty}^{\mathrm{ a.s.}} \int_0^1
g_\ve\bigl(\widehat W^{(1)}_t\bigr)\,
\mathrm{d}t.
\]
Furthermore,
\[
\int_0^1 g_\ve\bigl(\widehat
W^{(1)}_t\bigr) \,\mathrm{d}t =\int_{\R^{d}}
g_\ve (y) l^y \,\mathrm{d}y \build{\la}_{\ve\to0}^{\mathrm{ a.s.}}
l^x,
\]
by the continuity of local times.
Let $\delta>0$. By combining the last two convergences, we can find
$\ve
_1\in(0,|x|)$
such that, for every $\ve\in(0,\ve_1)$, there exists an integer
$n_1(\ve
)$ so that for every $n\geq n_1(\ve)$,
\begin{equation}
\label{convLTtech2} P \biggl( \biggl|\int_0^1
g_\ve\bigl(c n^{-1/4} Z^n_{\lfloor nt\rfloor}\bigr)\,
\mathrm{d}t - l^x \biggr| > \delta \biggr) <\delta.
\end{equation}
However, we have
\begin{eqnarray*}
\int_0^1 g_\ve
\bigl(cn^{-1/4}Z^n_{\lfloor nt\rfloor}\bigr)\, \mathrm{d}t &=&
\frac
{1}{n} \sum_{a\in\mathbb{Z}^d} g_\ve
\bigl(cn^{-1/4}a\bigr) L_n(a)
\\
&=& n^{{d}/{4} -1} \int_{\mathbb{R}^d} g_\ve
\bigl(cn^{-1/4}\bigl\lfloor n^{1/4}y\bigr\rfloor\bigr)
L_n\bigl(\bigl\lfloor n^{1/4}y\bigr\rfloor\bigr)\,
\mathrm{d}y.
\end{eqnarray*}
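The second equality above simply rewrites the sum over $\Z^d$ as an integral, using that $y\mapsto\lfloor n^{1/4}y\rfloor$ is constant on cells of Lebesgue measure $n^{-d/4}$: for any $h\dvtx\Z^d\to\R_+$ with finite support,

```latex
\int_{\R^d} h\bigl(\bigl\lfloor n^{1/4}y\bigr\rfloor\bigr)\,\mathrm{d}y
 = \sum_{a\in\Z^d} h(a)\,\lambda_d\bigl(\bigl\{y\dvtx\bigl\lfloor
 n^{1/4}y\bigr\rfloor =a\bigr\}\bigr)
 = n^{-d/4}\sum_{a\in\Z^d} h(a),
```

applied with $h(a)=g_\ve(cn^{-1/4}a)L_n(a)$, together with $n^{-1}=n^{{d}/{4}-1}\cdot n^{-d/4}$.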
Set
\[
\eta_n(\ve):= \int_{\mathbb{R}^d} g_\ve
\bigl(cn^{-1/4}\bigl\lfloor n^{1/4}y\bigr\rfloor\bigr)
\,\mathrm{d}y
\]
and note that
\[
\eta_n(\ve)\build{\la}_{n\to\infty}^{} \int
_{\mathbb{R}^d} g_\ve(cy) \,\mathrm{d}y = c^{-d}.
\]
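The convergence of $\eta_n(\ve)$ is obtained by dominated convergence [the integrands are bounded and supported in a fixed compact set, and $g_\ve(cn^{-1/4}\lfloor n^{1/4}y\rfloor)\to g_\ve(cy)$ for every $y$], and the value of the limit follows from the change of variables $z=cy$ together with the normalization of $g_\ve$:

```latex
\int_{\R^d} g_\ve(cy)\,\mathrm{d}y
 = c^{-d}\int_{\R^d} g_\ve(z)\,\mathrm{d}z
 = c^{-d}.
```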
By the Cauchy--Schwarz inequality,
\begin{eqnarray*}
&&E \biggl[ \biggl(\int_0^1 g_\ve
\bigl(cn^{-1/4}Z^n_{\lfloor nt\rfloor}\bigr) \,\mathrm {d}t -
\eta_n(\ve) n^{{d}/{4}-1} L_n(x_n)
\biggr)^2 \biggr]
\\
&&\qquad =E \biggl[ \biggl(n^{{d}/{4}-1} \int_{\mathbb{R}^d}
g_\ve \bigl(cn^{-1/4}\bigl\lfloor n^{1/4}y\bigr
\rfloor\bigr) \bigl(L_n\bigl(\bigl\lfloor n^{1/4}y\bigr
\rfloor\bigr) -L_n(x_n)\bigr)\, \mathrm{d}y
\biggr)^2 \biggr]
\\
& &\qquad\leq\eta_n(\ve)\times n^{{d}/{2}-2} \int_{\mathbb{R}^d}
\,\mathrm{d}y\, g_\ve\bigl(cn^{-1/4}\bigl\lfloor
n^{1/4}y\bigr\rfloor\bigr) E \bigl[\bigl(L_n\bigl(\bigl
\lfloor n^{1/4}y\bigr\rfloor\bigr) -L_n(x_n)
\bigr)^2 \bigr].
\end{eqnarray*}
Using the first assertion of Lemma~\ref{contiTlocal}, one easily gets
that, for every fixed $\ve\in(0,|x|)$,
\[
n^{{d}/{2}-2} \int_{\mathbb{R}^d} \,\mathrm{d}y \bigl|g_\ve
\bigl(cn^{-1/4}\bigl\lfloor n^{1/4}y\bigr\rfloor
\bigr)-g_\ve(cy)\bigr | E \bigl[\bigl(L_n\bigl(\bigl\lfloor
n^{1/4}y\bigr\rfloor\bigr) -L_n(x_n)
\bigr)^2 \bigr] \build{\la}_{n\to\infty}^{} 0.
\]
On the other hand, by the second assertion of the lemma,
\begin{eqnarray*}
&&n^{{d}/{2}-2} \int_{\mathbb{R}^d} \,\mathrm{d}y\, g_\ve(cy)
E \bigl[\bigl(L_n\bigl(\bigl\lfloor n^{1/4}y\bigr\rfloor
\bigr) -L_n(x_n)\bigr)^2 \bigr]\\
&&\qquad \build{
\la}_{n\to\infty}^{} \int_{\mathbb{R}^d}\, \mathrm{d}y\,
g_\ve(cy) \biggl(\varphi(y,y)-2\varphi\biggl(\frac
{x}{c},y
\biggr)+\varphi\biggl(\frac{x}{c},\frac{x}{c}\biggr) \biggr).
\end{eqnarray*}
If $\gamma_\ve$ stands for the limit in the last display, the
continuity of $\varphi$
ensures that $\gamma_\ve$ tends to $0$ as $\ve\to0$.
From the preceding considerations, we have
\[
\limsup_{n\to\infty} E \biggl[ \biggl( \int_0^1
g_\ve \bigl(cn^{-1/4}Z^n_{\lfloor
nt\rfloor}\bigr)\,
\mathrm{d}t - \eta_n(\ve) n^{{d}/{4}-1} L_n(x_n)
\biggr)^2 \biggr] \leq c^{-d} \gamma_\ve.
\]
Hence, we can find $\ve_2\in(0,|x|)$ small enough so that, for every
$\ve\in(0,\ve_2)$, there exists an integer
$n_2(\ve)$ such that, for every $n\geq n_2(\ve)$,
\begin{equation}
\label{convLTtech3} P \biggl(\biggl | \int_0^1
g_\ve\bigl(cn^{-1/4}Z^n_{\lfloor nt\rfloor}\bigr)\,
\mathrm {d}t - \eta_n(\ve) n^{{d}/{4}-1} L_n(x_n)
\biggr| > \delta \biggr) <\delta.
\end{equation}
By combining \eqref{convLTtech2} and \eqref{convLTtech3}, we see that,
for every $\ve\in(0,\ve_1\wedge\ve_2)$
and $n\geq n_1(\ve)\vee n_2(\ve)$,
\[
P \bigl( \bigl|\eta_n(\ve) n^{{d}/{4}-1} L_n(x_n)
- l^x \bigr| > 2\delta \bigr) < 2\delta.
\]
Our claim \eqref{convLTtech1} easily follows, since $\eta_n(\ve)$ tends
to $c^{-d}$ as $n\to\infty$.
\end{pf*}
Set $R_n=\#\{ Z^n(u) \dvtx u\in\mathcal{T}_n\}$. Recall the constant
$c$ from \eqref{constant-c}, and also recall that $\lambda_d$
denotes Lebesgue measure on $\R^d$.
\begin{theorem}
\label{conv-range}
We have
\[
n^{-d/4} R_n \build{\la}_{n\to\infty}^{\mathrm{(d)}}
c^{-d}\lambda _d(\mathcal{S}),
\]
where $\mathcal{S}$ stands for the support of ISE.
\end{theorem}
\begin{pf}
Again, for the sake of simplicity, we give details only in the
isotropic case $M_\theta=\sigma^2 {\mathrm {Id}}$.
From the definition of ISE, we may take $\mathcal{S}= \{\widehat
W^{(1)}_t \dvtx 0\leq t\leq1 \}$. We then set, for every $\ve>0$,
\[
\mathcal{S}_\ve:= \bigl\{x\in\R^d \dvtx \operatorname{dist}(x,
\mathcal{S})\leq \ve \bigr\}.
\]
As in the preceding proof, we may and will assume that the convergence
\eqref{JaMa} holds
almost surely. It then follows that, for every $\ve>0$,
\[
P \bigl(\bigl\{ cn^{-1/4} Z^n(u) \dvtx u\in
\mathcal{T}_n\bigr\}\subset\mathcal {S}_\ve \bigr) \build{
\la}_{n\to\infty}^{} 1.
\]
Fix $K>0$, and let $B(0,K)$ stand for the closed ball of radius $K$
centered at $0$ in~$\R^d$. Also set
$\mathcal{S}_\ve^{(K)}:= \mathcal{S}_\ve\cap B(0,K+\ve)$. It follows
that we have also
\[
P \bigl(\bigl(\bigl\{ Z^n(u) \dvtx u\in\mathcal{T}_n
\bigr\} \cap B\bigl(0, c^{-1}n^{1/4} K\bigr)\bigr)\subset
c^{-1}n^{1/4} \mathcal{S}_\ve ^{(K)}
\bigr) \build{\la}_{n\to\infty}^{} 1.
\]
Applying the latter convergence with $\ve$ replaced by $\ve/2$, we get
\[
P \bigl(\#\bigl(\bigl\{ Z^n(u) \dvtx u\in\mathcal{T}_n
\bigr\} \cap B\bigl(0, c^{-1}n^{1/4} K\bigr)\bigr) \leq
c^{-d}n^{d/4} \lambda_d\bigl(\mathcal
{S}_\ve^{(K)}\bigr) \bigr)\build{\la}_{n\to\infty}^{}
1.
\]
Write $R_n^{(K)}:= \#(\{ Z^n(u) \dvtx u\in\mathcal{T}_n\}
\cap B(0, c^{-1}n^{1/4} K))$. Since $\lambda_d(\mathcal{S}_\ve^{(K)})
\downarrow\lambda_d(\mathcal{S}\cap B(0,K))$ as
$\ve\downarrow0$, we obtain that, for every $\delta>0$,
\[
P \bigl(n^{-d/4} R_n^{(K)}\leq c^{-d}
\lambda_d\bigl(\mathcal{S}\cap B(0,K)\bigr) +\delta \bigr)\build{
\la}_{n\to\infty}^{} 1,
\]
and, therefore, since the variables $n^{-d/4}R_n^{(K)}$ are uniformly bounded,
\begin{equation}
\label{conv-rangetech1} \lim_{n\to\infty} E \bigl[ \bigl(n^{-d/4}
R_n^{(K)}- c^{-d}\lambda _d\bigl(
\mathcal{S}\cap B(0,K)\bigr) \bigr)^+ \bigr]= 0.
\end{equation}
On the other hand, we claim that we have also
\begin{equation}
\label{conv-rangetech2} \liminf_{n\to\infty} E \bigl[n^{-d/4}
R_n^{(K)} \bigr] \geq c^{-d}E \bigl[
\lambda_d\bigl(\mathcal{S}\cap B(0,K)\bigr) \bigr].
\end{equation}
To see this, observe that
\begin{eqnarray*}
E \bigl[R_n^{(K)} \bigr]& =& \sum
_{a\in\Z^d \cap B(0, c^{-1}n^{1/4} K)} P\bigl(L_n(a)>0\bigr)
\\
& =& \int_{B(0, c^{-1}n^{1/4} K)} \,\mathrm{d}x\, P\bigl(L_n\bigl(
\lfloor x\rfloor \bigr)>0\bigr) + O\bigl(n^{(d-1)/4}\bigr)
\\
& =& n^{d/4} \int_{B(0, c^{-1} K)} \,\mathrm{d}y\, P
\bigl(L_n\bigl(\bigl\lfloor n^{1/4}y\bigr\rfloor\bigr)>0
\bigr) + O\bigl(n^{(d-1)/4}\bigr)
\end{eqnarray*}
as $n\to\infty$. By Theorem~\ref{convLT}, for every $y\neq0$,
\[
\liminf_{n\to\infty} P\bigl(L_n\bigl(\bigl\lfloor
n^{1/4}y\bigr\rfloor\bigr)>0\bigr) \geq P\bigl(l^{cy}>0\bigr)=
P(cy\in\mathcal{S}),
\]
where the equality is derived from Proposition~\ref{hitting-point}.
Fatou's lemma then gives
\[
\liminf_{n\to\infty} n^{-d/4}E \bigl[R_n^{(K)}
\bigr] \geq\int_{B(0,
c^{-1} K)} \,\mathrm{d}y\, P(cy\in\mathcal{S}) =
c^{-d} E \bigl[\lambda_d\bigl(\mathcal{S}\cap B(0,K)\bigr)
\bigr],
\]
which completes the proof of \eqref{conv-rangetech2}.
Using the trivial identity $|x|=2x^+-x$ for every real $x$,
we deduce from \eqref{conv-rangetech1} and \eqref{conv-rangetech2} that
\[
\lim_{n\to\infty} E \bigl[\bigl |n^{-d/4} R_n^{(K)}-
c^{-d}\lambda _d\bigl(\mathcal{S}\cap B(0,K)\bigr) \bigr|
\bigr]=0.
\]
However, we see from \eqref{JaMa} that, for every $\delta>0$, we can
choose $K$
sufficiently large so that we have both $P(\mathcal{S}\subset
B(0,K))\geq1-\delta$
and $P(R_n^{(K)}=R_n)\geq1-\delta$ for every integer $n$. It then follows
from the previous convergence that $n^{-d/4} R_n$
converges in probability to $c^{-d}\lambda_d(\mathcal{S})$ as $n\to
\infty$, and this completes the proof
of Theorem~\ref{conv-range}.
\end{pf}
\section{Branching random walk}\label{sec4}
We will now discuss similar results for branching random walk in
$\Z^d$. We consider a system of particles in $\Z^d$ that evolves
in discrete time
in the following way. At time $n=0$, there are $p$ particles all located
at the origin of $\Z^d$ (we will comment on more general
initial configurations in Section~\ref{bps}). A particle located at
the site
$a\in\Z^d$ at time $n$ gives rise at time $n+1$ to a random number of
offspring
distributed according to $\mu$, and their locations are obtained
by adding to $a$ (independently for each offspring) a spatial displacement
distributed according to $\theta$.
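The evolution just described is straightforward to simulate. The sketch below is our own illustration (not part of the original text): it picks one concrete admissible pair $(\mu,\theta)$ — a critical geometric$(1/2)$ offspring law and uniform nearest-neighbour displacements — and records the set of visited sites, i.e. $\mathcal{V}^{[p]}$.

```python
import random

def visited_sites(p, d=2, max_gen=200):
    """Branching random walk in Z^d started from p particles at the origin.
    Offspring numbers are geometric(1/2) (critical, mean 1); displacements
    are uniform nearest-neighbour steps.  Returns the set of visited sites."""
    visited = set()
    particles = [(0,) * d] * p
    for _ in range(max_gen):
        if not particles:
            break
        nxt = []
        for pos in particles:
            visited.add(pos)
            kids = 0
            while random.random() < 0.5:  # P(k offspring) = 2^{-(k+1)}, mean 1
                kids += 1
            for _ in range(kids):
                child = list(pos)
                axis = random.randrange(d)
                child[axis] += random.choice((-1, 1))
                nxt.append(tuple(child))
        particles = nxt
    return visited

random.seed(0)
print(len(visited_sites(p=200)))  # of order p^{d/2} for large p, by the theorem below
```

The cap `max_gen` is only there to keep the toy run finite; the critical system dies out almost surely, so for moderate `p` the cap is rarely binding.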
In a more formal way, we consider $p$ independent random spatial trees
\[
\bigl(\mathcal{T}^{(1)},\bigl(Z^{(1)}(u)\bigr)_{u\in\mathcal{T}^{(1)}}
\bigr),\ldots, \bigl(\mathcal{T} ^{(p)},\bigl(Z^{(p)}(u)
\bigr)_{u\in\mathcal{T}^{(p)}} \bigr)
\]
distributed according to $\Pi^*_{\mu,\theta}$, and, for every integer
$n\geq0$,
we consider the random point measure
\[
X^{[p]}_n:= \sum_{j=1}^p
\biggl(\sum_{u\in\mathcal{T}^{(j)},|u|=n} \delta _{Z^{(j)}(u)} \biggr),
\]
which corresponds to the sum of the Dirac point masses at the positions of
all particles alive at time $n$.
The set $\mathcal{V}^{[p]}$ of all sites visited by the particles is
the union over all $n\geq0$ of
the supports of $X^{[p]}_n$, or equivalently
\[
\mathcal{V}^{[p]}= \bigl\{a\in\Z^d \dvtx
a=Z^{(j)}(u)\mbox{ for some }j\in\{1,\ldots,p\}\mbox{ and }u\in
\mathcal{T}^{(j)} \bigr\}.
\]
In a way similar to Theorem~\ref{conv-range}, we are interested in limit
theorems for $\#\mathcal{V}^{[p]}$ when $p\to\infty$. To this end, we
will first state
an analog of the convergence \eqref{JaMa}. For every $j\in\{1,\ldots,p\}
$, let
\[
\varnothing=u^{(j)}_0\prec u^{(j)}_1
\prec\cdots\prec u^{(j)}_{\#
\mathcal{T}^{(j)}-1}
\]
be the vertices
of $\mathcal{T}^{(j)}$ listed in lexicographical order, and set
$H^{(j)}_i=|u^{(j)}_i|$
and $Z^{(j)}_i=Z^{(j)}(u^{(j)}_i)$, for $0\leq i\leq\#\mathcal{T}^{(j)}-1$.
Define the height function
$(H^{[p]}_k,k\geq0)$
of $\mathcal{T}^{(1)},\ldots,\mathcal{T}^{(p)}$ by concatenating the
discrete functions
$(H^{(j)}_i,0\leq i\leq\#\mathcal{T}^{(j)}-1$),
and setting $H^{[p]}_k=0$ for $k\geq\#\mathcal{T}^{(1)}+\cdots+\#
\mathcal{T}^{(p)}$.
Similarly, define the function
$(Z^{[p]}_k,k\geq0)$
by concatenating the discrete functions $(Z^{(j)}_i,0\leq i\leq\#
\mathcal{T}^{(j)}-1$),
and setting $Z^{[p]}_k=0$ for $k\geq\#\mathcal{T}^{(1)}+\cdots+\#
\mathcal{T}^{(p)}$.
Finally, we use linear interpolation to define $H^{[p]}_t$ and
$Z^{[p]}_t$ for every
real $t\geq0$. We can now state our analog of~\eqref{JaMa}.
\begin{proposition}
\label{JaMabis}
We have
\begin{eqnarray*}
&&\biggl( \biggl(\frac{\rho}{2} p^{-1} H^{[p]}_{p^2s},
\sqrt{\frac{\rho
}{2}} p^{-{1}/{2}} Z^{[p]}_{p^2s}
\biggr)_{ s\geq0}, p^{-2} \bigl(\#\mathcal{T}^{(1)}+
\cdots+\#\mathcal{T}^{(p)}\bigr) \biggr) \\
&&\qquad\build{\la}_{p\to\infty}^{\mathrm{(d)}}
\bigl(\bigl(\zeta_{s\wedge\tau}, M_\theta ^{1/2}\wh
W_{s\wedge\tau}\bigr)_{s\geq0},\tau \bigr),
\end{eqnarray*}
where $(W_s)_{s\geq0}$ is a standard Brownian snake, $\tau$ denotes
the first
hitting time of $2/\rho$ by the local time at $0$ of the lifetime
process $(\zeta_s)_{s\geq0}$, and the convergence
of processes holds in the sense of the topology of uniform convergence
on compact sets.
\end{proposition}
The joint convergence of the processes $\frac{\rho}{2} p^{-1}
H^{[p]}_{p^2s}$ and of the random variables $p^{-2}
(\#\mathcal{T}^{(1)}+\cdots+\#\mathcal{T}^{(p)})$
is a consequence of \cite{LG1}, Theorem~1.8, see in
particular~(7) and (9) in \cite{LG1} (note that the local times of the
process $(\zeta_s)_{s\geq0}$ are chosen to be right-continuous
in the space variable, so that our local time at $0$ is twice the local
time that appears in the display (7) in \cite{LG1}). Given the latter
joint convergence, the desired statement
can be obtained by following the arguments of the proof of Theorem~2 in
\cite{JM}. The fact that
we are dealing with unconditioned trees makes things easier than in
\cite{JM} and we omit the details.
We now state an intermediate result, which is of independent interest. Under
the probability measure $\Pi^*_{\mu,\theta}$, we let $\bR:= \{z_u
\dvtx u\in\mathcal{T}\}$
be the set of all points visited by the tree-indexed random walk.
\begin{theorem}
\label{estim-visit}
We have
\[
\lim_{|a|\to\infty}\bigl |M_\theta^{-1/2}a\bigr|^2
\Pi^*_{\mu,\theta}(a\in \bR) = \frac{2(4-d)}{\rho^2}.
\]
\end{theorem}
\begin{pf} We start by proving the upper bound
\[
\limsup_{|a|\to\infty} \bigl|M_\theta^{-1/2}a\bigr|^2
\Pi^*_{\mu,\theta
}(a\in\bR ) \leq\frac{2(4-d)}{\rho^2}.
\]
By an easy compactness argument, it is enough to prove that, if $(a_k)$
is a sequence
in $\Z^d$ such that $|a_k|\to\infty$ and $a_k/|a_k| \to x$, with
$x\in\R
^d$ and $|x|=1$, then
\begin{equation}
\label{estim-v0} \limsup_{k\to\infty} |a_k|^2
\Pi^*_{\mu,\theta}(a_k\in\bR) \leq \frac
{2(4-d)}{\rho^2|M_\theta^{-1/2}x|^2}.
\end{equation}
Set $p_k=|a_k|^2\in\Z_+$ to simplify notation. We note that
\begin{equation}
\label{estim-v1} P \bigl(a_k\in\mathcal{V}^{[p_k]} \bigr) = 1
- \bigl( 1- \Pi^*_{\mu,\theta
}(a_k\in\bR) \bigr)^{p_k}.
\end{equation}
On the other hand, it follows from our definitions that
\[
P \bigl(a_k\in\mathcal{V}^{[p_k]} \bigr) \leq P \biggl(
\exists s\geq0 \dvtx \frac{1}{\sqrt{p_k}} Z^{(p_k)}_s =
\frac{a_k}{|a_k|} \biggr).
\]
We can then use Proposition~\ref{JaMabis} to get
\begin{eqnarray*}
\limsup_{k\to\infty} P \bigl(a_k\in
\mathcal{V}^{[p_k]} \bigr) &\leq&\P_0 \biggl(\exists s\in[0,
\tau]\dvtx M_\theta^{1/2}\wh W_s= \sqrt{
\frac{\rho}{2}} x \biggr)
\\
&=& 1-\exp \biggl(-\frac{2}{\rho} \N_0 \biggl(\sqrt{
\frac{\rho
}{2}}M_\theta ^{-1/2}x\in\mathcal{R} \biggr)
\biggr)
\\
&=& 1-\exp \biggl(- \frac{2(4-d)}{\rho^2|M_\theta^{-1/2}x|^2} \biggr).
\end{eqnarray*}
The second line follows from excursion theory for the Brownian snake,
and the third one
uses the formula for $\N_0(y\in\mathcal{R})$, which has been recalled
already in the proof of
Proposition~\ref{hitting-point}. By combining the bound of the last
display with \eqref{estim-v1},
we get our claim \eqref{estim-v0}, and this completes the proof of the
upper bound.
Let us turn to the proof of the lower bound. As in the proof of the
upper bound, it is enough
to consider a sequence $(a_k)$
in $\Z^d$ such that $|a_k|\to\infty$ and $a_k/|a_k| \to x$, with
$x\in\R
^d$ and $|x|=1$, and then
to verify that
\begin{equation}
\label{estim-v3} \liminf_{k\to\infty} |a_k|^2
\Pi^*_{\mu,\theta}(a_k\in\bR) \geq \frac
{2(4-d)}{\rho^2|M_\theta^{-1/2}x|^2}.
\end{equation}
As previously, we set $p_k=|a_k|^2$. We fix $0<\ve<M$, and we introduce
the function $g_\mu$ defined on $\Z_+$
by $g_\mu(j)=\Pi_\mu(\#\mathcal{T}=j)$. Then
\begin{eqnarray*}
|a_k|^2 \Pi^*_{\mu,\theta}(a_k\in\bR) &
\geq& p_k^3 \int_\ve^M\,
\mathrm{d}r\, \Pi^*_{\mu,\theta}\bigl(a_k\in\bR, \# \mathcal{T} =
\bigl\lfloor p_k^2r\bigr\rfloor\bigr)
\\
&=& p_k^3 \int_\ve^M\,
\mathrm{d}r\, g_\mu \bigl(\bigl\lfloor p_k^2r
\bigr\rfloor \bigr) P \bigl( L_{\lfloor p_k^2 r\rfloor}(a_k)>0 \bigr),
\end{eqnarray*}
where we use the same notation as in Lemma~\ref{contiTlocal}: $L_n(b)$
denotes the number of visits of site $b$ by a random walk indexed by
a tree distributed according to $\Pi_\mu(\cdot\mid\#\mathcal
{T}=n)$. Note
that Theorem~\ref{convLT}
gives, for every $r\in[\ve,M]$,
\[
\liminf_{k\to\infty} P \bigl( L_{\lfloor p_k^2 r\rfloor}(a_k)>0
\bigr) \geq P \bigl(l^{r^{-1/4}z}>0 \bigr),
\]
where we write $z=\sqrt{\frac{\rho}{2}} M_\theta^{-1/2}x$ to
simplify notation.
To complete the argument, we consider for simplicity the aperiodic case
where $\mu$
is not supported on a strict subgroup of $\Z$ [the reader will easily
be able to
extend our method to the general case, using \eqref{Kemp1} instead of
\eqref{Kemp2}]. By
\eqref{Kemp2}, we have for every $r\in[\ve,M]$,
\[
\lim_{k\to\infty} p_k^3 g_\mu
\bigl(\bigl\lfloor p_k^2r\bigr\rfloor \bigr)=
\frac
{1}{\rho\sqrt{2\pi r^3}}.
\]
Using this together with the preceding display, and applying Fatou's
lemma, we obtain
\begin{equation}
\label{estim-v4} \liminf_{k\to\infty} |a_k|^2
\Pi^*_{\mu,\theta}(a_k\in\bR) \geq\int_\ve^M
\frac{\mathrm{d}r}{\rho\sqrt{2\pi r^3}} P \bigl(l^{r^{-1/4}z}>0 \bigr).
\end{equation}
A scaling argument
shows that
\[
P \bigl(l^{r^{-1/4}z}>0 \bigr)= \N^{(1)}_0 \bigl(
\ell^{r^{-1/4}z}>0 \bigr)= \N ^{(r)}_0 \bigl(
\ell^{z}>0 \bigr).
\]
Using this remark and formula \eqref{decoIto}, we see that the
right-hand side of \eqref{estim-v4} can be rewritten as $\frac
{2}{\rho}
\N_0(\mathbf{1}_{\{\ve<\gamma<M\}} \mathbf{1}_{\{\ell^{z}>0\}})$.
By choosing $\ve$ small enough and $M$ large enough, the latter
quantity can be made
arbitrarily close to
\[
\frac{2}{\rho} \N_0\bigl(\ell^{z}>0\bigr)=
\frac{2}{\rho} \biggl(2-\frac{d}{2}\biggr) |z|^{-2}=
\frac{2(4-d)}{\rho^2|M_\theta^{-1/2}x|^2}.
\]
This completes the proof of the lower bound and of Theorem~\ref{estim-visit}.
\end{pf}
Recall our notation $\mathcal{V}^{[p]}$ for the set of all sites
visited by
the branching random walk starting with $p$ initial particles located
at the origin.
\begin{theorem}
\label{rangeBRW}
We have
\[
p^{-d/2} \#\mathcal{V}^{[p]} \build{\la}_{p\to\infty}^{\mathrm{(d)}}
\biggl(\frac{2\sigma}{\rho}\biggr)^{d} \lambda_d \biggl(
\bigcup_{t\geq0} \operatorname{ supp} X_t \biggr),
\]
where $(X_t)_{t\geq0}$ is a $d$-dimensional super-Brownian motion with
branching mechanism $\psi(u)=2u^2$
started from $\delta_0$, and $\operatorname{ supp} X_t$ denotes the topological
support of~$X_t$.
\end{theorem}
\begin{pf}
Via the Skorokhod representation theorem,
we may and will assume that the convergence in Proposition~\ref
{JaMabis} holds a.s., and
we will then prove that the convergence of the theorem holds in probability.
If $\ve>0$ is fixed, the (a.s.) convergence in Proposition~\ref
{JaMabis} implies that, a.s. for all large enough $p$, we have
\[
\sqrt{\frac{\rho}{2}} p^{-1/2}\mathcal{V}^{[p]} \subset
\mathcal {U}_\ve \bigl(\bigl\{M_\theta^{1/2}\wh
W_s\dvtx 0\leq s\leq\tau\bigr\} \bigr),
\]
where, for any compact subset $\mathcal{K}$ of $\R^d$, $\mathcal
{U}_\ve
(\mathcal{K})$ denotes the set of all
points whose distance from $\mathcal{K}$ is strictly less than $\ve$.
It follows that we have a.s.
\[
\limsup_{p\to\infty} p^{-d/2}\#\mathcal{V}^{[p]}
\leq \biggl(\frac
{2}{\rho} \biggr)^{d/2} \lambda_d \bigl(
\mathcal{U}_{2\ve} \bigl(\bigl\{ M_\theta ^{1/2}\wh
W_s \dvtx 0\leq s\leq\tau\bigr\} \bigr) \bigr).
\]
Since $\ve$ was arbitrary, we also get a.s.
\begin{equation}
\label{BRW1} \limsup_{p\to\infty} p^{-d/2}\#
\mathcal{V}^{[p]}\leq \biggl(\frac
{2}{\rho} \biggr)^{d/2}
\lambda_d \bigl( \bigl\{M_\theta^{1/2}\wh
W_s \dvtx 0\leq s\leq\tau \bigr\} \bigr).
\end{equation}
To get an estimate in the reverse direction, we argue in a way very
similar to the proof of Theorem~\ref{conv-range}.
We fix $K>0$, and note that a minor modification of the preceding
arguments also gives a.s.
\begin{eqnarray*}
&&\limsup_{p\to\infty} p^{-d/2}\# \bigl(\mathcal{V}^{[p]}
\cap B\bigl(0,p^{1/2}K\bigr) \bigr)\\
&&\qquad\leq \biggl(\frac{2}{\rho}
\biggr)^{d/2} \lambda _d \bigl( \bigl\{M_\theta^{1/2}
\wh W_s \dvtx 0\leq s\leq\tau \bigr\}\cap B\bigl(0,K'
\bigr) \bigr),
\end{eqnarray*}
where $K'=\sqrt{\frac{\rho}{2}} K$.
Since the variables $p^{-d/2} \#(\mathcal{V}^{[p]}\cap B(0,p^{1/2}K))$
are uniformly bounded, it follows that
\begin{eqnarray}
\label{BRW2}&& \lim_{p\to\infty} E \biggl[ \biggl(p^{-d/2}
\# \bigl(\mathcal {V}^{[p]}\cap B\bigl(0,p^{1/2}K\bigr) \bigr)\nonumber\\
&&\hspace*{25pt}\qquad{}-
\biggl(\frac{2}{\rho} \biggr)^{d/2} \lambda_d \bigl(\bigl
\{M_\theta^{1/2}\wh W_s\dvtx 0\leq s\leq\tau\bigr\}
\cap B\bigl(0,K'\bigr) \bigr) \biggr)^+ \biggr]\\
&&\qquad=0.\nonumber
\end{eqnarray}
On the other hand,
\begin{eqnarray*}
&&p^{-d/2} E \bigl[\#\bigl(\mathcal{V}^{[p]}\cap B
\bigl(0,p^{1/2}K\bigr)\bigr) \bigr]\\
&&\qquad= p^{-d/2} \sum
_{a\in\Z^d\cap B(0,p^{1/2}K)} P\bigl(a\in\mathcal {V}^{[p]}\bigr)
\\
&&\qquad=p^{-d/2} \sum_{a\in\Z^d\cap B(0,p^{1/2}K)} \bigl(1-\bigl(1-
\Pi^*_{\mu,\theta}(a\in\bR)\bigr)^p \bigr)
\\
&&\qquad\build{\la}_{p\to\infty}^{} \int_{B(0,K)}\,
\mathrm{d}x \biggl(1-\exp \biggl(-\frac{2(4-d)}{\rho^2 |M_\theta^{-1/2}x|^2} \biggr) \biggr),
\end{eqnarray*}
where the last line is an easy consequence of Theorem~\ref{estim-visit}.
Furthermore,
\begin{eqnarray*}
&&E \biggl[ \biggl(\frac{2}{\rho} \biggr)^{d/2}
\lambda_d \bigl(\bigl\{M_\theta ^{1/2}\wh
W_s\dvtx 0\leq s\leq\tau\bigr\}\cap B\bigl(0,K'\bigr)
\bigr) \biggr]
\\
&&\qquad = \biggl(\frac{2}{\rho} \biggr)^{d/2}\int_{B(0,K')}\,
\mathrm{d}y \biggl(1-\exp \biggl(-\frac{2}{\rho} \N_0
\bigl(M_\theta^{-1/2}y\in\mathcal{R}\bigr) \biggr) \biggr)
\\
&&\qquad = \biggl(\frac{2}{\rho} \biggr)^{d/2}\int_{B(0,K')}\,
\mathrm{d}y \biggl(1-\exp \biggl(-\frac{4-d}{\rho|M_\theta^{-1/2}y|^2} \biggr) \biggr)
\\
&&\qquad =\int_{B(0,K)} \,\mathrm{d}x \biggl(1-\exp \biggl(-
\frac{2(4-d)}{\rho
^2 |M_\theta^{-1/2}x|^2} \biggr) \biggr).
\end{eqnarray*}
From the last two displays, we get
\begin{eqnarray}
\label{BRW3} &&\lim_{p\to\infty} E \bigl[p^{-d/2} \#\bigl(
\mathcal{V}^{[p]}\cap B\bigl(0,p^{1/2}K\bigr)\bigr) \bigr]
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
&&\qquad = E
\biggl[ \biggl(\frac{2}{\rho} \biggr)^{d/2} \lambda_d
\bigl(\bigl\{M_\theta ^{1/2}\wh W_s \dvtx 0\leq s
\leq\tau\bigr\}\cap B\bigl(0,K'\bigr) \bigr) \biggr].
\end{eqnarray}
From \eqref{BRW2} and \eqref{BRW3}, we have
\begin{eqnarray*}
&&\lim_{p\to\infty} E \biggl[\biggl |p^{-d/2} \#\bigl(
\mathcal{V}^{[p]}\cap B\bigl(0,p^{1/2}K\bigr)\bigr)\\
&&\hspace*{20pt}\qquad{}- \biggl(
\frac{2}{\rho} \biggr)^{d/2} \lambda_d \bigl(\bigl
\{M_\theta^{1/2}\wh W_s\dvtx 0\leq s\leq\tau\bigr\}
\cap B\bigl(0,K'\bigr) \bigr) \biggr| \biggr]\\
&&\qquad=0.
\end{eqnarray*}
Since, by choosing $K$ large enough, $P(\mathcal{V}^{[p]}\subset
B(0,p^{1/2}K))$ can be made arbitrarily close to $1$,
uniformly in $p$, we have proved that
\begin{eqnarray}
\label{BRW4} p^{-d/2} \#\mathcal{V}^{[p]} &\build{
\la}_{p\to\infty}^{\mathrm{(P)}}& \biggl(\frac{2}{\rho}
\biggr)^{d/2} \lambda_d \bigl(\bigl\{ M_\theta^{1/2}
\wh W_s \dvtx 0\leq s\leq\tau\bigr\} \bigr)
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
&= &\biggl(
\frac{2\sigma^2}{\rho} \biggr)^{d/2} \lambda_d \bigl( \{\wh
W_s \dvtx 0\leq s\leq\tau \} \bigr).
\end{eqnarray}
The relations between the Brownian snake and super-Brownian motion
\cite{Zurich}, Theorem IV.4, show that the quantity $\lambda_d(\{\wh W_s
\dvtx 0\leq s\leq\tau\})$
is the Lebesgue measure of the range of a super-Brownian motion (with
branching mechanism $2u^2$) started
from $(2/\rho)\delta_0$.
Finally, simple scaling arguments show that the limit can be expressed
in the form given in the theorem.
\end{pf}
\section{Open problems and questions}\label{sec5}
\subsection{The probability of visiting a distant point}\label{sec5.1}
Theorem~\ref{estim-visit} gives the asymptotic behavior of the
probability that a branching
random walk starting with a single particle at the origin visits a
distant point $a\in\Z^{d}$. It would
be of interest to have a similar result in dimension $d\geq4$,
assuming that $\theta$ is centered and has sufficiently high moments.
When $d\geq5$, a
simple calculation of the first and second moments of the number of
visits of $a$ (see, e.g., the remarks following Proposition~5 in \cite
{LGL}) gives the bounds
\[
C_1|a|^{2-d}\leq\Pi^*_{\mu,\theta}(a\in\bR)\leq
C_2 |a|^{2-d}
\]
with positive constants $C_1$ and $C_2$ depending on $d,\mu$ and
$\theta
$. When $d=4$,
one expects that
\[
\Pi^*_{\mu,\theta}(a\in\bR) \approx \frac{C}{|a|^2 \log|a|}.
\]
Calculations of moments give $\Pi^*_{\mu,\theta}(a\in\bR)\geq
c_1(|a|^2\log|a|)^{-1}$, but
proving the reverse bound $\Pi^*_{\mu,\theta}(a\in\bR)\leq
c_2(|a|^2\log|a|)^{-1}$
with some constant $c_2$ seems a nontrivial problem. This problem, in
the particular case of the
geometric offspring distribution, and some related questions are
discussed in Section~3.2 of \cite{BC}.
\subsection{The range in dimension four}\label{sec5.2}
With our previous notation $R_n$ for the range of a random walk indexed by
a random tree distributed according to $\Pi_\mu(\cdot \mid \#
\mathcal{T}
=n)$, Theorem~14 in \cite{LGL} states
that in dimension $d=4$,
\[
\frac{\log n}{n} R_n \build{\la}_{n\to\infty}^{L^2} 8
\pi^2\sigma^4,
\]
provided $\mu$ is the geometric distribution with parameter $1/2$, and
$\theta$ is symmetric and has exponential moments. It would be of
interest to extend this result
to more general offspring distributions. It seems difficult to adapt
the methods of
\cite{LGL} to a more general case, so new arguments would be needed.
In particular, finding the exact asymptotics of $\Pi^*_{\mu,\theta
}(a\in
\bR)$ (see
the previous subsection) in dimension $d=4$ would certainly be helpful.
\subsection{Branching random walk with a general initial
configuration}\label{sec5.3}
\label{bps}
One may ask whether a result such as Theorem~\ref{rangeBRW} remains
valid for
more general initial configurations of the branching particle system:
Compare with Propositions 20 and 21
in \cite{LGL}, which deal with the case $d\geq4$ and require no
assumption on the
initial configurations. In the present setting, Theorem~\ref{rangeBRW}
remains valid,
for instance, if we assume that the initial positions of the particles
stay within a bounded set
independently of $p$. On the other hand, one might consider the case
where we only assume that
the image of $p^{-1}X^{[p]}_0$ under the mapping $a\mapsto p^{-1/2}a$
converges weakly to a finite
measure $\xi$ on $\R^d$. This condition ensures the convergence of the
(rescaled) measure-valued processes
$X^{[p]}$ to a super-Brownian motion $Y$ with initial value $Y_0=\xi$,
and it is
natural to expect that we have, with a suitable constant~$C$,
\begin{equation}
\label{convbps} p^{-d/2} \#\mathcal{V}^{[p]} \build{
\la}_{p\to\infty}^{\mathrm{(d)}} C \lambda_d \biggl(\bigcup
_{t \geq0} \operatorname{ supp} Y_t \biggr).
\end{equation}
For trivial reasons, \eqref{convbps} will not hold in dimension $d=1$.
Indeed, for $\frac{1}{2}<\alpha<1$, we may let the initial
configuration consist of $p-\lfloor p^\alpha\rfloor$
particles uniformly spread over $\{1,2,\ldots,\sqrt{p}\}$ and
$\lfloor
p^\alpha\rfloor$ other particles located
at distinct points outside $\{1,2,\ldots,\sqrt{p}\}$. Then the
preceding assumptions hold ($\xi$ is the
Lebesgue measure on $[0,1]$), but \eqref{convbps} obviously fails since
$\#\mathcal{V}^{[p]}\geq\lfloor p^\alpha\rfloor$.
In dimension $2$, \eqref{convbps} fails again, for more subtle reasons:
One can construct examples where the descendants of certain initial
particles that
play no role in the convergence of the initial configurations
contribute to the asymptotics of $\#\mathcal{V}^{[p]}$
in a significant manner. Still, it seems likely that some version of
\eqref{convbps} holds under more stringent conditions on
the initial configurations [in dimension $3$ at least, the union in the
right-hand side of \eqref{convbps} should exclude $t=0$,
as can be seen from simple examples].
TITLE: Why $y=e^x$ is not an algebraic curve?
QUESTION [6 upvotes]: Why is $y=e^x$ not an algebraic curve over $\mathbb R$? I can see that it is not an algebraic curve over $\mathbb C$ because $e^x$ is a periodic function there, but what about $\mathbb R$?
EDIT:
I don't want to use the transcendence of $e$. Alternatively, I can ask the same question for $y=2^x$.
UPDATE:
Can we just say that an algebraic curve over $K$ is also algebraic over any extension of $K$?
REPLY [6 votes]: Suppose $x$ and $e^x$ satisfy a polynomial equation $f(x,e^x)=0$ where $f(x,y)$ has minimal degree in $y$.
Write $f(x,y)=p(x)y^n+g(x,y)$, where $p(x)y^n$ is the leading term in $y$.
Differentiate $p(x)e^{nx}+g(x,e^x)=0$ and get $np(x)e^{nx}+p'(x)e^{nx}+h(x,e^x)=0$, for some $h$.
Subtract $n$ times the first equation from the second and get $p'(x)e^{nx}+h(x,e^x)-ng(x,e^x)=0$.
This equation has a leading term $p'(x)e^{nx}$ whose coefficient $p'(x)$ has strictly smaller degree in $x$ than $p(x)$.
We can repeat this process until the coefficient of $y^n$ vanishes, producing a relation of smaller degree in $y$ — a contradiction with the minimality of $f$.
The same proof works for $b^x$ with some $\log b$ factors that can be absorbed.
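To see the cancellation step concretely, here is a small SymPy check (purely illustrative; the candidate $f(x,y)=x^2y^2+xy+1$ is an arbitrary sample, not an actual relation satisfied by $e^x$). Forming $F'-nF$ kills the term $np(x)e^{nx}$ and leaves a leading coefficient $p'(x)$ of strictly smaller degree:

```python
import sympy as sp

x = sp.symbols('x')
n = 2
p = x**2                              # leading coefficient p(x) of y^n
F = p*sp.exp(n*x) + x*sp.exp(x) + 1   # f(x, e^x) for the sample f = x^2 y^2 + x y + 1

# One step of the argument: F' - n*F removes n*p(x)*e^{nx},
# leaving p'(x)*e^{nx} as the new leading term.
step = sp.expand(sp.diff(F, x) - n*F)
print(step.coeff(sp.exp(n*x)))        # 2*x, i.e. p'(x): degree dropped by one
```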
Or, at the very least, he thinks he is alone in a crowded arena.
I would not suggest watching this video around dinner time....
"Booing on opening day is like telling grandma her house smells like old lady."--WOY
Gross.
Go Gators!
Of all the guys to share a name with!
Hello, Im Tom Asbury.
No, Not the nose picking basketball coach. The other one.
Bob MacKenzie: My brother and I used to say that drownin' in beer was like heaven, eh? Now he's not here, and I've got two soakers... this isn't heaven, this sucks!
Tom!
Fraaaaance!
GREAT BENGALS FORUM!
Everybody's done it (don't even act like you haven't).
This dude just got caught on film.
For every action there is an equal and opposite criticism.
whisper words of wisdom
I wrote a book!
My AmazonShort
My other AmazonShort
Actually, I can honestly tell you that I have never picked my nose and eaten it, at least at any point over the age of 6.
To do that at his age, is pretty bad.
There's always the KU nose pick from a few years ago. I'm sure pahster has seen this one.
TITLE: Show that for any arc length parameterized curve there is a vector $ω(s)$ that satisfies the following equations
QUESTION [3 upvotes]: I'm trying to solve the following question
Show that for any arc length parameterized curve there is a vector
$ω(s)$ that satisfies
$$T'(s) = ω(s) × T(s)$$ $$N'(s) = ω(s) × N(s)$$ $$B'(s) = ω(s) × B(s)$$
HINT: Consider $ω(s) = a(s)T (s) +b(s)N(s) +c(s)B(s)$ (where $T$, $N$,
$B$ are the unit tangent, normal and binormal vectors) and find the
coefficients $a$, $b$, $c$ that work.
I managed to get
$$a(s) = T(s) \cdot ω(s)$$
$$b(s) = N(s) \cdot ω(s)$$
$$c(s) = B(s) \cdot ω(s)$$
But I don't know how to proceed from this. What direction should I be going in?
REPLY [1 votes]: Let me write $\omega(s) = t(s)T(s) + n(s)N(s) + b(s)B(s)$ where $t,n,b$ are scalar functions. Using the Frenet-Serret formulas, the first equation $T'(s) = \omega(s) \times T(s)$ translates into
$$ k(s)N(s) = n(s) (N(s) \times T(s)) + b(s)(B(s) \times T(s)) = -n(s)B(s) + b(s)N(s) $$
which implies that $b(s) = k(s)$ and $n(s) = 0$. The second equation $N'(s) = \omega(s) \times N(s)$ translates into
$$ -k(s)T(s) + \tau(s)B(s) = t(s) (T(s) \times N(s)) + b(s) (B(s) \times N(s)) = t(s)B(s) - b(s)T(s) $$
which implies that $t(s) = \tau(s)$ (and again, that $b(s) = k(s)$). Thus, $\omega$ should be
$$ \omega(s) = \tau(s)T(s) + k(s)B(s). $$
I leave it to you to check that the third equation is also satisfied.
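As a sanity check (my addition, not part of the original answer), one can verify symbolically on a unit-speed helix that $\omega(s) = \tau(s)T(s) + k(s)B(s)$ satisfies all three equations. The helix parameters below are an arbitrary choice, and the torsion value $b/(a^2+b^2)$ is the standard formula for this helix:

```python
import sympy as sp

s = sp.symbols('s', real=True)
a, b = 1, 1
c = sp.sqrt(a**2 + b**2)

# Unit-speed helix and its Frenet frame
r = sp.Matrix([a*sp.cos(s/c), a*sp.sin(s/c), b*s/c])
T = r.diff(s)
Tp = T.diff(s)
k = sp.sqrt(sp.simplify(Tp.dot(Tp)))   # curvature a/c^2
N = (Tp / k).applyfunc(sp.simplify)
B = T.cross(N)
tau = sp.Rational(b, a**2 + b**2)      # torsion b/c^2 (standard helix formula)

omega = tau*T + k*B                    # the Darboux vector from the answer

for V in (T, N, B):
    residual = (V.diff(s) - omega.cross(V)).applyfunc(sp.simplify)
    assert residual == sp.zeros(3, 1)
print("T' = w x T, N' = w x N, B' = w x B all verified")
```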
TITLE: Non-conservative system and velocity dependent potentials
QUESTION [3 upvotes]: I'm studying Lagrangian mechanics, but I'm a little frustrated because, when dealing with Lagrange's equations, we mostly consider conservative systems. If the system is non-conservative, the treatment is usually very brief, saying only that 'sometimes' there exists a velocity-dependent potential $U(q,\dot{q},t)$ such that the generalized force $Q_j$ of the standard system can be written in terms of this potential.
$$Q_j =\frac{d}{dt}\left(\frac{\partial U}{\partial \dot{q}_j}\right)-\frac{\partial U}{\partial q_j}$$
They give as an example charged particles in a static EM field.
But my question is: can we find such a velocity-dependent potential for any generalized force?
If not, does that mean we can't use Lagrangian mechanics?
REPLY [3 votes]: No, (generalized) velocity dependent potentials $U(q,\dot{q},t)$ do not exist for all (generalized) forces $Q_j$. See e.g. this Phys.SE post.
Even if no variational formulation exists, one may still consider Lagrange equations, cf. e.g. this Phys.SE post.
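For reference, the classic case where such a potential does exist — the EM example mentioned in the question — is a charge $q$ in a field described by potentials $\phi$ and $\mathbf A$ (standard textbook material, added here for illustration):

```latex
U(\mathbf r,\dot{\mathbf r},t) = q\,\phi(\mathbf r,t) - q\,\dot{\mathbf r}\cdot\mathbf A(\mathbf r,t),
\qquad
L = \tfrac{1}{2}m\,\dot{\mathbf r}^{2} - U .
```

Substituting this $U$ into $Q_j=\frac{d}{dt}(\partial U/\partial \dot{q}_j)-\partial U/\partial q_j$ reproduces the Lorentz force $q(\mathbf E+\dot{\mathbf r}\times\mathbf B)$, with $\mathbf E=-\nabla\phi-\partial_t\mathbf A$ and $\mathbf B=\nabla\times\mathbf A$. By contrast, dissipative forces such as friction $-b\dot q$ admit no such $U$ and are usually handled with Rayleigh's dissipation function instead.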
This item from New Hampshire Antique Co-op is now SOLD.
Here are similar items currently available on Ruby Lane for your consideration
17th / 18th Century Leather Polychrome Four Panel Floor Screen
New Hampshire Antique Co-op
$950 USD
19th c Continental Foliate Painted Leather Three Panel Dressing Screen
New Hampshire Antique Co-op
$895 USD
19th Century Continental Polychrome Writing Desk in Black Lacquer
New Hampshire Antique Co-op
$1,980 USD
19th Century Victorian Three Part Decoupage Dressing Screen
New Hampshire Antique Co-op
$2,600 USD
Polychrome Hand-Painted Continental Credenza, circa 1910
Antique Revival, NY
$600 USD SALE OFFER
Triple 1920's Japanese Hand Carved Antique 1920's Dressing Mirror or Screen
Harp Gallery Antique Furniture
$1,275 USD
Antique Neoclassical Chandelier with Original Polychrome
Preservation Station
$475 USD
Exquisite French Cream Gold Gilt Dressing Screen 59" H
Aardvark Antiques & Fine Interiors
$395 USD
19th Century Continental Walnut Refectory or Trestle Table with Geometric Inlay
New Hampshire Antique Co-op
$3,650 USD
19th c Continental Carved Arm Chair with Burled Panels
New Hampshire Antique Co-op
$795 USD
19th c Continental Arm Chair with Carved Crest
New Hampshire Antique Co-op
$895 USD
Pair of English Adam Style Polychrome Upholstered Armchairs
New Hampshire Antique Co-op
$2,400 USD
Continental Polychromed Dressing Screen
Item ID:4349
Continental dressing screen featuring polychromed 19th century leather panels decorated with a parrot, flowering trees and vines all mounted on an ebonized, hinged and paneled frame. Good overall condition, with restored remounted panels showing some corner losses, shrinkage cracks, and craquelure, as well as paint restoration on verso. Dimensions: 75" h x 81" w x
The producers of the BRIT Awards have issued an apology to Adele, whose acceptance speech for the Album of the Year award was cut short to make time for a closing performance by Blur. Adele, frustrated in the moment, raised her middle finger to the cameras as she was forced off the stage. The singer later clarified that the middle finger was directed at "the suits at the BRIT Awards," and not her legion of fans.
"We regret this happened and we send our deepest apologies to Adele that her big moment was cut short this evening due to the live show over-running," said a statement from BRIT Awards organisers. "We don't want this to undermine her incredible achievement in winning our night's biggest award. It tops off what's been an incredible year for her."
You can watch a clip of Adele's brief acceptance speech below.
Our clothing serves as our branding. To most people, buying presentable and fashionable clothing can be seen as a form of investment, as it can help boost our confidence in our daily activities. Shabby, dull clothes do not inspire confidence and may seem off-putting to the people we deal with every day. By contrast, clean and well-maintained clothing helps boost our self-image and tends to create a positive reputation among people.
Just like us, our clothes go through a lot and need proper care. Since our clothing envelops our personality and our image, it is important to keep it looking fresh and clean. Doing laundry is one part of maintaining our clothes; however, doing it at home can be tiresome. Even though we can use washing machines, it still feels like a burden that adds to the household chores we need to accomplish, and it consumes time we could spend on personal activities, such as bonding with our families. Sometimes it can also leave our clothes damaged, as we tend to throw everything in and forget how to properly handle each fabric. Dropping off dirty laundry for dry cleaning is an option worth considering. It may also extend the lives of our garments. Here's why:
Wet laundry is an abrasive process that’s likely to damage clothes in the long run. You can ruin your clothes if you are not familiar with how to use the right water temperature, detergent and laundry time for specific fabrics. Today’s dry cleaners use less abrasive and organic products to clean your clothes in a “gentler” yet equally effective way.
We sometimes damage the fabric of our clothes while attempting to remove stain through excessive brushing or rubbing. Stubborn stains just won’t go away no matter how hard we try. Aside from this, some clothes need odor removal. In most cases, our home remedies just won’t work. Dry cleaners use special stain and odor removal processes to ensure that your clothes are properly cleaned while upholding its quality.
A good number of professional dry cleaners offer other services like alterations, repairs, whitening of yellowing fabrics, and clothes restoration. You can save some bucks from buying new clothes by having your used one undergo a magical transformation in a professional dry cleaning shop. Professional dry cleaners have a way of giving your old, discarded clothes a new life by restoring it and fitting it to your standards that you can be more comfortable to use.
Your tasks and chores pile up each day. Dropping off your dirty clothes for dry cleaning services saves you time and brings so much convenience. No laundry means fewer headaches. Nothing beats the convenience of picking up a washed, dry and folded laundry. Now you can have some extra me-time!
Washing items like curtains, table clothes, comforter and blankets takes so much time and effort. Your laundry machine tub can only accommodate limited items. Dry cleaning services can do heavy and bulk washing in a jiffy. You can save yourself from the hassle of folding, wringing, and drying your atrociously heavy items while you let the “experts” do the job for you.
Dry cleaning services are within reach, affordable, convenient and bring a lot of added value. It’s about time to give yourself some rest from home laundry or use the time you saved for other tasks. | 0.996187 |
\begin{document}
\title{On non-regular $g$-measures}
\author{Sandro Gallo}
\author{Fr\'ed\'eric Paccaut}
\begin{abstract}
We prove that $g$-functions whose set of discontinuity points has strictly negative topological pressure and which satisfy an assumption that is weaker than non-nullness, have at least one stationary $g$-measure. We also obtain uniqueness by adding conditions on the set of continuity points.
\end{abstract}
\maketitle\blfootnote{{\it MSC 2010}: 60J05, 37E05.}\blfootnote{{\it Keywords}: $g$-measure, topological pressure, context tree}\blfootnote{Both authors were partially supported by CAPES grant AUXPE-PAE-598/2011}
\section{Introduction}
The $g$-measures on $A^{\mathbb{Z}}$ ($A$ discrete) are the measures for which the conditional probability of one state at any time, given the past, is specified by a function $g$, called a $g$-function. In this paper, $g$-measures will always refer to \emph{stationary} measures. The main question we answer in the present paper is the following: what conditions on a $g$-function $g$ ensure the existence of a (stationary) $g$-measure?
It is well-known that the continuity of $g$ implies existence if the alphabet $A$ is finite. Here we extend this result to discontinuous $g$-functions by proving that existence holds whenever the topological pressure of the set of discontinuities of $g$ is strictly negative, even when $g$ is not necessarily non-null.
\vspace{0.3cm}
The name $g$-measure was introduced by \cite{Keane} in Ergodic Theory to refer to an extension of the Markov measures, in the sense that the function $g$ may depend on an unbounded part of the past. In the literature of stochastic processes, these objects already existed under the names ``Cha\^ines \`a liaison compl\`ete'' or ``chains of infinite order'', respectively coined by \cite{doeblin/fortet/1937} and \cite{harris/1955}. The function $g$ is also called a set of transition probabilities, or probability kernel. Given a function $g$ (or \emph{probability kernel}), the most basic questions are the following: does it specify a $g$-measure (or stationary \emph{stochastic process})? If yes, is it unique?
To answer these questions, the literature mainly focussed on the continuity assumption for $g$ (see \cite{onicescu/mihoc/1935, doeblin/fortet/1937, harris/1955, Keane, Ledrappier, JO, fernandez/maillard/2005} and many others). This assumption gives the existence of the $g$-measure ``for free''. For this reason, uniqueness and the study of the statistical properties of the resulting unique measure have been the centre of attention from the beginning of the literature.
Only recently, \cite{gallo/2009, CCPP, desantis/piccioni/2012} studied $g$-measures with functions $g$ that were not necessarily continuous. However, no general criterion has been given regarding the existence issue, either because these works are example-based, or because the obtained conditions are restrictive, implying both existence and uniqueness. This raises a natural motivation for finding a general criterion for the existence of $g$-measures.
\vspace{0.3cm}
A second motivation is the analogy with one-dimensional Gibbs measures. In statistical mechanics, the function specifying the conditional probabilities with respect to both \emph{past and future} is called a specification. The theorem of \cite{kozlov/1974} states that Gibbs measures have continuous and strictly positive specifications.
Stationary measures having support on the set of points where the specification is continuous are called almost-Gibbsian (\cite{maes/redig/vanMoffaert/leuven/1999}). Clearly, Gibbsian measures are almost-Gibbsian. \cite{FGM} proved that regular $g$-measures (associated to continuous and strictly positive function $g$) might not be Gibbs measures, still they are always almost-Gibbsian. Thus, although the nomenclature of Gibbsianity cannot be imported directly to the case of $g$-measures, it is tempting to try to find ``almost-regular" $g$-measures.
\vspace{0.3cm}
Going further in the analogy between $g$-measures and (almost-)Gibbs measures, a natural idea is to look for a $g$-measure having support inside the set of continuity points of $g$. Of course, it is not an easy task to control the support of a measure before knowing its existence. An idea is then to put a topological assumption on the set of discontinuity points of $g$, ensuring that this set will have $\mu$-measure $0$, whenever the $g$-measure $\mu$ exists. In the vein of \cite{BPS}, this is done in the present paper by using the topological pressure of the set of discontinuity points of $g$.
Theorem \ref{existence} states that there exist $g$-measures when the function $g$ has a set of discontinuity points with negative topological pressure, even without assuming non-nullness. As a corollary (Corollary \ref{coro}), a simple condition on the set of discontinuity points of a function $g$ is given, which may appear more intuitive to the reader not familiar with the concept of topological pressure. The set of discontinuity points of $g$ can be seen as a tree where each branch is $A^{-\mathbb{N}}$. The new condition is that the upper exponential growth rate of this tree is smaller than a constant that depends on $\inf_X g$ (or, if non-nullness is not assumed, on some parameter explicitly computable on $g$). Our last result (Theorem \ref{theo:ExistsUniqueMixing}), based on the work of \cite{JO}, gives explicit sufficient conditions on the set of continuity points of discontinuous kernels $g$ (satisfying our conditions of existence) ensuring uniqueness.
\section{Notations, definitions and main results}
Let $(A,\A)$ be a measurable space, where $A$ is a finite set (the alphabet) and $\A$ is the associated discrete $\sigma$-algebra. We will denote by $|A|$ the cardinal of $A$. Define $X=A^{-\NN}$ (we use the convention that $\mathbb{N}=\{0,1,2,\ldots\}$), endowed with the product of discrete topologies and with the $\sigma$-algebra $\F$ generated by the coordinate applications. For any $x\in X$, we will use the notation $x=(x_{-i})_{i\in \NN}=x_{-\infty}^{0}=\ldots x_{-1}x_{0}$. For any $x\in X$ and $z\in X$, we denote, for any $k\ge0$, $zx_{-k}^{0}=\ldots z_{-2}z_{-1}z_{0}x_{-k}\ldots x_{0}$, the concatenation between $x_{-k}^{0}$ and $z$.
In other words, $zx_{-k}^{0}$ denotes a new sequence $y\in X$ defined by $y_{i}=z_{i+k+1}$ for any $i\leq -k-1$ and $y_{i}=x_{i}$ for any $-k\leq i\leq 0$. Finally, the length of any finite string $v$ of elements of $A$, that is, the number of letters composing the string $v$, will be written $|v|$.\\
Define the shift mapping $T$ as follows :
$$
\begin{array}{cccc}
T: & X & \rightarrow & X \\
\ & (x_n)_{n\le 0} & \mapsto & (x_{n-1})_{n\le 0}.
\end{array}
$$
The mapping $T$ is continuous and has $|A|$ continuous inverse branches denoted $T_a^{-1}$, $a\in A$.
Denote by $\M$ the set of Borelian probability measures on $X$, by $\B$ the set of bounded functions and by $\C$ the set of continuous functions.
The characteristic functions will be written $\1$.
A $g$-function is a $\F$-measurable function $g:X\to [0,1]$ such that
\begin{equation}\label{g-function}
\forall x\in X,\,\,\, \sum_{y:T(y)=x}g(y)=\sum_{a\in A}g(xa)=1.
\end{equation}
\begin{Ex}\emph{
Matrix transitions of $k$-steps Markov chains, $k\ge1$, are the simplest example of $g$-functions. They satisfy $g(xa)=g(ya)$ whenever $x_{-k+1}^{0}=y_{-k+1}^{0}$, $\forall a$. }
\end{Ex}
\begin{Ex}\emph{\label{comb} Let us introduce one of the simplest examples of a non-Markovian $g$-function on $A=\{0,1\}$. Let $(q_{n})_{n\in \NN\cup\{\infty\}}$ be a sequence of $[0,1]$-valued real numbers. Set $\tilde{g}(x1)=q_{\ell(x)}$ where $\ell(x):=\inf\{k\geq 0:x_{-k}=1\}$ for any $x\in A^{-\mathbb{N}}$ (with the convention that $\ell(x)=\infty$ whenever $x_{-i}=0$ for all $i\le 0$). Notice that the value of $\tilde{g}(x)$ depends on the distance to the last occurrence of a symbol $1$ in the sequence $\ldots x_{-1}x_{0}$. Therefore, for any $k\geq 1$, the Markov property $\tilde{g}(xa)=\tilde{g}(ya)$ whenever $x_{-k+1}^{0}=y_{-k+1}^{0}$ fails (take $x_{-k+1}^{0}=y_{-k+1}^{0}=0_{-k+1}^{0}$): this is not the transition matrix of a Markov chain. We will come back to this motivating example several times throughout this paper.}
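\emph{As a quick check of property \eqref{g-function}, the value of $\tilde{g}$ on the symbol $0$ is forced:
\[
\tilde{g}(x0)=1-\tilde{g}(x1)=1-q_{\ell(x)},
\]
so that prescribing the sequence $(q_{n})_{n\in\NN\cup\{\infty\}}$ completely determines $\tilde{g}$.}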
\end{Ex}
\begin{Def}
An $A$-valued stochastic processes $(\xi_n)$ defined on a probability space $(\Omega,\mathcal{G},\PP)$ is specified by a given $g$-function $g$ if
$$
{\PP}(\xi_0=a|(\xi_k)_{k<0})=g(\ldots \xi_{-2}\xi_{-1}a)\ \ \PP\ \mbox{almost surely}.
$$
The distribution of a \emph{stationary} process $(\xi_n)$ of this form is called a $g$-measure.
\end{Def}
Here is a more ergodic oriented, equivalent definition:
\begin{Def}Let $g$ be a $g$-function. A probability measure $\mu\in\M$ is called a $g$-measure if $\mu$ is $T$-invariant and for $\mu$ almost every $x\in X$ and for every $a\in A$:
$$
{\EE}_{\mu}(\1_{\{x_0=a\}}|\F_1)(x)=g(T(x)a).
$$
with $\F_1=T^{-1}\F$.
\end{Def}
Given a $g$-function, the existence of a corresponding $g$-measure is not always guaranteed. For instance, coming back to example \ref{comb}, \cite{CCPP} proved that if $\prod_{k\ge1}\sum_{i=0}^{k-1}(1-q_{i})=\infty$ and $q_{\infty}>0$, then there does not exist any $g$-measure for $\tilde{g}$. Another simple example is given by \cite{Keane} on the torus. In general, a sufficient condition for the existence of a $g$-measure corresponding to some fixed $g$-function is to assume that $g$ is continuous in every point (see \cite{Keane} for instance). Continuity here is understood with respect to the product topology, that is, $g$ is continuous at the point $x$ if for any $z$, we have
\[
g(zx_{-k}^{0})\stackrel{k\rightarrow\infty}{\longrightarrow} g(x).
\]
Continuity is nevertheless not necessary for existence, as shown, once more, by the $g$-function $\tilde{g}$ of Example \ref{comb}. For instance, let $q_{i}=\epsilon<1/2$ when $i$ is odd and $q_{i}=1-\epsilon$ when $i$ is even, and put $q_{\infty}>0$. Observe that in this case $\tilde{g}$ has a discontinuity at $0_{-\infty}^{0}$, since $\tilde{g}(1_{-\infty}^{0}0_{-k}^{0})$ oscillates between $\epsilon$ and $1-\epsilon$ when $k$ increases. But it is well-known that $\tilde{g}$ has a $g$-measure (see \cite{CCPP} or \cite{gallo/2009} for instance). \\
The preceding observations yield to our first issue, which is to give a general condition on the set of discontinuities of $g$, under which there still exists a $g$-measure. This is the content of Theorem \ref{existence} which we will state after introducing some further definitions. \\
The cylinders are defined in the usual way by
$$
C_n(x)=\{w\in X, w_{-n+1}^0=x_{-n+1}^0\}\,,\,\,\forall x\in X,
$$
and the set of $n$-cylinders is
$$
\C_n=\{C_n(x), x\in X\}.
$$
Define, for $x\in X$ and $n\in{\NN}$, $n\ge1$
$$
g_n(x)=\prod_{i=0}^{n-1}g(T^i(x)).
$$
The topological pressure of a measurable set $S\subset X$ is defined by
$$
P_g(S)=\limsup_{n\to+\infty}\frac1n\log\sum_{{B\in\C_n}\atop{B\cap S\neq\emptyset}}\sup_Bg_n.
$$
Let $\D$ be the set of discontinuity points of $g$. Let $\C_n(\D)$ be the union of $n$-cylinders that intersect $\D$ :
$$
\C_n(\D)=\bigcup_{x\in\D}C_n(x).
$$
For $n\in\NN$, set $\E_n=T^{-1}T\C_{n+1}(\D)$ (notice that $\E_0=X$ and $\E_{n+1}\subset\E_n$). $\E_n$ is the set of points of the form $yx_{-n}^{-1}a$, with $a\in A$, $x_{-\infty}^0\in\D$ and $y\in X$.
\begin{Theo}\label{existence}
Let $g$ be a $g$-function with discontinuity set $\D$. Assume
$$
\begin{array}{ll}
{\bf (H1)} & {\exists N\in\NN,\,\exists\varepsilon>0,\,\inf_{\E_N}g=\varepsilon},\\
{\bf (H2)} & P_g(\D)<0,
\end{array}
$$
{then there exists at least one $g$-measure and its support is contained in $X\setminus\mathcal{D}$}.
\end{Theo}
\begin{Rem}Hypothesis {\bf (H1)} is strictly weaker than the ``strong non-nullness'' assumption $\inf_{X}g>0$, since the latter corresponds to {\bf (H1)} being satisfied with $N=0$, and Example \ref{ex:3lettres} below satisfies {\bf (H1)} while not being strongly non-null.
\end{Rem}
\begin{Rem}\label{discontinuite_fini}
\emph{Notice that {\bf (H2)} is fulfilled for instance when
$\D$ is a finite set and $\inf_X g>0$ (i.e. {\bf (H1)} is fulfilled with $N=0$). This is, in particular, the case of our simplest Example \ref{comb} when the $q_{i}$'s oscillate between $\varepsilon$ and $1-\varepsilon$. {Notice also that {\bf (H2)} is fulfilled as soon as $\D$ is finite, {\bf (H1)} is fulfilled with $N>1$ and $T\D\subset \D$. This will be an easy consequence of Corollary \ref{coro}.}}
\end{Rem}
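To illustrate the first claim of Remark \ref{discontinuite_fini}, here is a sketch of the computation. When $\D$ is finite and $\delta:=\inf_X g>0$, at most $|\D|$ cylinders of $\C_n$ intersect $\D$; moreover \eqref{g-function} forces $g(y)\le 1-(|A|-1)\delta\le 1-\delta$ for every $y\in X$, hence $\sup_B g_n\le(1-\delta)^n$ for every $B\in\C_n$ and
\[
P_g(\D)\le\limsup_{n\to+\infty}\frac1n\left[\log|\D|+n\log(1-\delta)\right]=\log(1-\delta)<0.
\]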
\begin{Rem}
\emph{Notice also that {\bf (H2)} implies that $g$ cannot be everywhere discontinuous. Namely, the property \ref{g-function} of a $g$-function entails :
$$
\forall n\in{\NN}^*, \forall y\in X, \sum_{x_{-n+1}^{0}\in A^n}g_{n}(yx_{-n+1}^{0})=1
$$
therefore $\sum_{B\in\C_n}\sup_Bg_n\geq 1$ which in turn implies that $P_g(X)\geq 0$.}
\end{Rem}
\begin{Ex}\label{exITA}\emph{
This example was presented in \cite{desantis/piccioni/2012} (see Example 2 therein) on $\{-1,+1\}$. Here we adapt it to the alphabet $A=\{0,1\}$. Like $\tilde{g}$, the $g$-function we introduce here has a unique discontinuity point along $0_{-\infty}^{0}$, but the dependence on the past does not stop at the last occurrence of a $1$.
Recall that $\ell(x):=\inf\{k\geq 0:x_{-k}=1\}$.
Let $g(0_{-\infty}^{0}1)=\epsilon>0$, and for any $x\neq 0_{-\infty}^{0}$ and any $a\in \{0,1\}$ let
\[
g(xa)=\epsilon+(1-2\epsilon)\sum_{n\geq 1}{\bf 1}\{x_{-\ell(x)-n}=a\}q_{n}^{\ell(x)},
\]
where, for any $l\geq 0$, $(q_{n}^{l})_{n\geq 1}$ is a probability distribution on the integers.
This kernel has a discontinuity at $0_{-\infty}^{0}$ since for each $k\in{\NN}$,
\begin{align*}
g(\ldots1110_{-k}^{0}1)=\epsilon+(1-2\epsilon)\sum_{n\geq 1}q_{n}^{k+1}= 1-\epsilon\neq \epsilon,
\end{align*}
but it is continuous at any other point, since for any $x$ such that $\ell(x)=l<+\infty$, for any $z$ and $k>l$
\begin{align*}
g(\ldots z_{-1}z_{0}x_{-k}^{0}1)=\epsilon+(1-2\epsilon)\left[\sum_{j=1}^{k-l}{\bf 1}\{x_{-l-j}=1\}q_{j}^{l}+\sum_{j\geq k-l+1}{\bf 1}\{z_{k-l+1-j}=1\}q_{j}^{l}\right]
\end{align*}
which converges to $g(x1)=\epsilon+(1-2\epsilon)\sum_{j\geq 1}{\bf 1}\{1=x_{-l-j}\}q_{j}^{l}$. Under some assumptions on the set of distributions $((q_{n}^{l})_{n\geq 1})_{l\geq 0}$, \cite{desantis/piccioni/2012} proved existence, uniqueness and perfect simulation while our Theorem \ref{existence} guarantees existence of a $g$-measure, without any further assumptions on this sequence of distributions.
}
\end{Ex}
Theorem \ref{existence} involves the notion of topological pressure, which is not always easy to evaluate on the set of discontinuities. We now introduce two simple criteria on the discontinuity set $\mathcal{D}$ of a $g$-function that imply existence.
\begin{Def}For any $n\geq 0$, let us denote $\mathcal{D}^{n}:=\{x_{-n+1}^{0}\}_{x\in\mathcal{D}}$.
The \emph{upper exponential growth rate of $\mathcal{D}$} is
\begin{equation}\label{eq:growth}
\bar{gr}(\mathcal{D}):=\limsup_{n}|\mathcal{D}^{n}|^{1/n}.
\end{equation}
\end{Def}
Although this nomenclature is generally reserved for trees, we use it here as there exists a natural way to represent the set $\mathcal{D}$ as a rooted tree (a subtree of $A^{-\mathbb{N}}$) with the property that each branch, representing an element of $\mathcal{D}$, is infinite, and each node has between $1$ and $|A|$ sons. For instance, in the particular case of $\tilde{g}$ (Example \ref{comb}), the tree is the single branch $0_{-\infty}^{0}$ and $\mathcal{D}^{n}=\{0_{-n+1}^{0}\}$.
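Observe that a finite set $\mathcal{D}$ satisfies $|\mathcal{D}^{n}|\le|\mathcal{D}|$ for all $n$, so that
\[
\bar{gr}(\mathcal{D})=\limsup_{n}|\mathcal{D}^{n}|^{1/n}\le\limsup_{n}|\mathcal{D}|^{1/n}=1,
\]
and in fact $\bar{gr}(\mathcal{D})=1$ since $|\mathcal{D}^{n}|\ge1$; hypothesis {\bf (H2')} below is therefore automatically satisfied for any finite discontinuity set as soon as $\varepsilon>0$.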
\begin{Cor}\label{coro}
{Let $g$ be a $g$-function with discontinuity set $\D$. Assume either,
$$
\begin{array}{ll}
{\bf (H1')}&\exists \varepsilon>0, \inf_{X}g=\varepsilon,\\
{\bf (H2')} & \bar{gr}({\mathcal{D}})<[1-(|A|-1)\varepsilon]^{-1}, \\
\end{array}
$$
or
$$
\begin{array}{ll}
{\bf (H1)} & {\exists N\in\NN,\,\exists\varepsilon>0,\,\inf_{\E_N}g=\varepsilon},\\
{\bf (H2')} & \bar{gr}({\mathcal{D}})<[1-(|A|-1)\varepsilon]^{-1}, \\
{\bf (H3)} & T\D\subset\D,
\end{array}
$$
then there exists at least one $g$-measure and its support is contained in $X\setminus\mathcal{D}$.}
\end{Cor}
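The first set of hypotheses of Corollary \ref{coro} yields {\bf (H2)}, as the following sketch shows: under {\bf (H1')}, \eqref{g-function} forces $g(y)\le 1-(|A|-1)\varepsilon$ for every $y\in X$, hence $\sup_B g_n\le[1-(|A|-1)\varepsilon]^n$ for every $B\in\C_n$, and since exactly $|\mathcal{D}^{n}|$ cylinders of $\C_n$ intersect $\mathcal{D}$,
\[
P_g(\mathcal{D})\le\limsup_{n\to+\infty}\frac1n\log\left(|\mathcal{D}^{n}|\,[1-(|A|-1)\varepsilon]^{n}\right)=\log \bar{gr}(\mathcal{D})+\log[1-(|A|-1)\varepsilon],
\]
which is strictly negative precisely when {\bf (H2')} holds.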
Intuitively, Corollary \ref{coro} states that if $\varepsilon$ (which plays the role of a ``non-nullness parameter'' for $g$) is sufficiently large, it may compensate for the set of discontinuities of $g$, allowing $g$-measures to exist, with support on the continuity points. Notice that this assumption allows $\mathcal{D}$ to be uncountable, as shown in the following example.
\begin{Ex}\label{ex:3lettres}\emph{
Let $A=\{0,1,2\}$, and consider the function $\ell$ defined as in Examples \ref{exITA} and \ref{comb}. Let also $N_{0}$, $N_{1}$ and $N_{2}$ be three disjoint finite subsets of $\NN$. The $g$-function is defined as follows: for $x\in\{0,2\}^{-\mathbb{N}}$, put $
g(x1)=g(x0)=0.3$, for $x$ such that $\ell(x)\in N_{0}\cup N_{1}\cup N_{2}$, put
\begin{equation}
\begin{array}{ccc}
g(x1)=g(x2)=1/2&\textrm{if}\,\, \ell(x)\in N_{0}\\
g(x0)=g(x2)=1/2&\textrm{if} \,\,\ell(x)\in N_{1}\\
g(x0)=g(x1)=1/2&\textrm{if} \,\,\ell(x)\in N_{2},
\end{array}
\end{equation}
and for any $x$ such that $\ell(x)\in \NN\setminus \{N_{0}\cup N_{1}\cup N_{2}\}$, put
$$
g(x1)=g(x0)=0.26+\sum_{k\geq 1}\theta_{k}x_{-\ell(x)-k},
$$
where $(\theta_{i})_{i\geq 1}$ satisfies $\theta_{i}\geq 0$ and $\sum_{i\geq 1}\theta_{i}<0.03$. Observe that, for any $x\in\{0,2\}^{-\NN}$, $g(\ldots111x_{-k}^{0}1)<0.29$ for any sufficiently large $k$, and therefore does not converge to $0.3$. So $\{0,2\}^{-\NN}\subset \mathcal{D}$. On the other hand, any point $x$ satisfying $\ell(x)\in N_{0}\cup N_{1}\cup N_{2}$ is trivially continuous, and any point $x$ satisfying $\ell(x)\in \NN\setminus\{N_{0}\cup N_{1}\cup N_{2}\}$ is continuous since for any $k>l$ and any $y\in \{0,1,2\}^{-\mathbb{N}}$,
\[
g(\ldots y_{-2}y_{-1}y_{0}x_{-k}^{0}1)=0.26+\sum_{i=1}^{k-l}\theta_{i}x_{-l-i}+\sum_{i\geq k-l+1}\theta_{i}y_{i-k+l-1}
\]
which converges to $0.26+\sum_{i\geq 1}\theta_{i}x_{-l-i}$.
So $\mathcal{D}=\{0,2\}^{-\NN}$ (which is uncountable), $|\mathcal{D}^{n}|=2^{n}$ and consequently $\bar{gr}({\mathcal{D}})=2$. Observe on the other hand that $\inf_{X}g=0$, but there exists $N$ such that $\inf_{\mathcal{E}_{N}}g\geq 0.26$ (any $N>\max(N_{0}\cup N_{1}\cup N_{2})$ will do the job). Thus, the hypotheses of Corollary \ref{coro} are fulfilled since $1-(|A|-1)\varepsilon=0.48<1/2$, and existence holds.}
\end{Ex}
So far, we have focussed on the existence issue. However, \cite{bramson/kalikow/1993} proved that even regular $g$-functions (continuous $g$-functions satisfying {\bf (H1')}) might have several $g$-measures. In view of a result on uniqueness for non-regular $g$-measures, we now give a condition on the set of continuous pasts $X\setminus\mathcal{D}$. To do so, we use the notion of context tree defined below.
\begin{Def}A \emph{context tree} $\tau$ on $A$ is a subset of $\cup_{k\ge0}A^{\{-k,\ldots,0\}}\cup X$ such that for any $x\in X$, there exists a unique element $v\in\tau$ satisfying $x_{-|v|+1}^{0}=v_{-|v|+1}^{0}$.
\end{Def}
For any $g$-function $g$, we denote by $\tau^{g}$ the smallest context tree containing $\mathcal{D}$, called the \emph{skeleton} of $g$.
For instance, coming back to example \ref{comb}, $\tau^{\tilde{g}}=\cup_{i\ge0}\{10^{i}\}\cup\{0_{-\infty}^{0}\}$ and is represented on Figure \ref{fig:peigne}. It is also the skeleton of any $g$-function having only $0_{-\infty}^{0}$ as discontinuity point, such as the $g$-function introduced in Example \ref{exITA}. Pictorially, any $g$-function can be represented as a set of transition probabilities associated to each leaf of the complete tree $A^{-\mathbb{N}}$ and $\tau^{g}$ is the smallest subtree of $A^{-\mathbb{N}}$ which contains $\mathcal{D}$, such that every node has either $|A|$ or $0$ sons. On Figure \ref{fig:arvoregeral} is drawn the (upper part of) the complete tree corresponding to some function $g$ having complicated sets $\mathcal{D}$ and $\tau^{g}$. \\
\begin{figure}
\centering
\includegraphics[scale=0.9]{peigne.pdf}
\caption{The skeleton of the function $\tilde{g}$.}
\label{fig:peigne}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=1.5]{arvoregeral.pdf}
\caption{An example of set $\mathcal{D}$ (bold black line) for some $g$-function $g$. The black lines represent the context tree $\tau$ corresponding to $\mathcal{D}$ (skeleton of $g$), and the grey lines represent the remaining complete tree. The branches that are not bold black are continuous points for $g$. We can see that $|\mathcal{D}^{1}|=2$, $|\mathcal{D}^{2}|=3$, $|\mathcal{D}^{3}|=4$, $|\mathcal{D}^{4}|=6$, $|\mathcal{D}^{5}|=7$, $|\mathcal{D}^{6}|=8$, ...}
\label{fig:arvoregeral}
\end{figure}
Let us introduce the $n$-variation of a point $x\in X$ that quantifies the rate of continuity of $g$ as
\[
\textrm{var}_{n}(x):=\sup_{y_{-n}^0=x_{-n}^0}|g(y)-g(x)|.
\]
Notice that $\textrm{var}_{n}(x)$ converges to $0$ if and only if $x$ is a continuity point for $g$. As $\textrm{var}_{n}(x)$ actually only depends on $x_{-n}^{0}$, the notation $\textrm{var}_{n}(x_{-n}^{0})$ will sometimes be used.
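Coming back to Example \ref{comb}: if $x_{-n}^{-1}$ contains at least one symbol $1$, then $\tilde{g}(y)=\tilde{g}(x)$ for every $y$ with $y_{-n}^{0}=x_{-n}^{0}$, so that $\textrm{var}_{n}(x)=0$; on the other hand, for $x=0_{-\infty}^{0}$, up to an index shift,
\[
\textrm{var}_{n}(x)=\sup_{k\geq n}|q_{k}-q_{\infty}|,
\]
which does not vanish when the $q_{i}$ oscillate, in accordance with the discontinuity at $0_{-\infty}^{0}$.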
Now, observe that the set of continuous pasts of a given $g$-function $g$ is the set of pasts $x_{-\infty}^{0}$ such that there exists $v\in\tau^{g}$, $|v|<+\infty$ with $x_{-|v|+1}^{0}=v_{-|v|+1}^{0}$. In particular, for any $v\in \tau^{g}$ with $|v|<+\infty$,
\[
\textrm{var}^{v}_{n}:=\sup_{x,x_{-|v|+1}^{0}=v}\textrm{var}_{n}(x)\stackrel{n\rightarrow+\infty}{\longrightarrow}0.
\]
For any $v\in\tau^{g}$, $|v|<+\infty$, let $R_{v}:=\sum_{n\ge|v|}[\textrm{var}^{v}_{n}]^2$.
Our assumption on the set of continuous pasts $X\setminus\mathcal{D}$ is
\begin{equation*}
{\bf (H4)}\,\,\,\sum_{v\in\tau^{g},|v|<\infty}\mu(v)R_{v}<+\infty.
\end{equation*}
Observe that {\bf (H4)} implies that $R_{v}<+\infty$ for any $v\in\tau$.
\begin{Theo}\label{theo:ExistsUniqueMixing}
Suppose that we are given a $g$-function $g$ satisfying {\bf (H1)}, {\bf (H2)} and {\bf (H4)}, then there exists a unique $g$-measure for $g$.
\end{Theo}
\begin{Rem}
{In this theorem, hypothesis {\bf (H1)} and {\bf (H2)} are mainly used to get the existence of a $g$-measure. Therefore, thanks to Corollary \ref{coro}, the same conclusion holds either assuming {\bf (H1')}, {\bf (H2')} and {\bf (H4)} or {\bf (H1)}, {\bf (H2')}, {\bf (H3)} and {\bf (H4)}.}
\end{Rem}
This result is to be compared to the results of \cite{JO}, which state, in particular, that uniqueness holds when $\textrm{var}_{n}:=\sup_{x\in X}\textrm{var}_{n}(x)$ is in $\ell^{2}$. In fact, this is mainly what is assumed here, but only on the set of continuous pasts, which has full $\mu$-measure. This is formalised through the more complex hypothesis {\bf (H4)}. We now come back to Examples \ref{exITA} and \ref{ex:3lettres} in order to illustrate Theorem \ref{theo:ExistsUniqueMixing}.
\begin{Exe}[Continued]
\emph{
In this example, the skeleton is $\tau^{g}=\{0_{-\infty}^{0}\}\cup\bigcup_{i\geq 0}\{10^{i}\}$, so that any $v\in\tau^{g}$ with $|v|=k<\infty$ can be written $v=10^{k-1}$, and simple calculations yield, for any $n\geq k$,
\[
\textrm{var}^{v}_{n}=(1-2\epsilon)\sum_{i\geq n-k+1}q^{k}_{i}.
\]
Hypothesis {\bf (H4)} is satisfied as soon as
\[
\sum_{k\geq 1}(1-\epsilon)^{k}\sum_{n\geq k+1}\left[\sum_{i\geq n-k+1}q^{k}_{i}\right]^{2}<+\infty.
\]
For instance, if for any $k\geq 1$, $(q^{k}_{i})_{i\ge1}$ is the geometric distribution with parameter $\alpha^{k}$, where $1-\epsilon<\alpha<1$, then
\[
\sum_{k\geq 1}(1-\epsilon)^{k}\sum_{n\geq 1}\left[\sum_{i\geq n+1}q^{k}_{i}\right]^{2}\leq \sum_{k\geq 1}[(1-\epsilon)\alpha^{-1}]^{k},
\]
which is summable. So we have uniqueness for this kernel.
}
\end{Exe}
\begin{Exem}[Continued]
\emph{
The skeleton of $g$ is
$$
\tau^{g}=\{0,2\}^{-\mathbb{N}}\cup\{1\}\cup \bigcup_{i\geq 0}\bigcup_{x_{-i}^0\in\{0,2\}^{i+1}}\{1x_{-i}^{0}\}
$$
and for any $v\in\tau^{g}$, $|v|<\infty$,
\[
\textrm{var}^{v}_{n}\leq2\sum_{i\geq n-|v|}\theta_{i}\,,\,\,\forall n>|v|.
\]
Since this upper bound does not depend on the length of the string $|v|$, it follows that Hypothesis {\bf (H4)} is satisfied if $\sum_{n\geq 1}\left[\sum_{i\geq n}\theta_{i}\right]^{2}<+\infty$.}
\end{Exem}
\section{Proof of Theorem \ref{existence}}
Let us define the Perron Frobenius operator $L$ acting on measurable functions $f$ as follows:
$$
Lf(x)=\sum_{a\in A}g(xa)f(xa)=\sum_{x=T(y)}g(y)f(y)
$$
For $\mu\in\M$, let $L^*$ denote the dual operator, that is
$$
L^*\mu(f)=\mu(Lf)
$$
for any $f\in\B$. The relation between $L^*$ and the $g$-measures is enlightened by the following result.
\begin{Prop}\label{Ledrappier}(\cite{Ledrappier})
$\mu$ is a $g$-measure if and only if $\mu$ is a probability measure and $L^*\mu=\mu$.
\end{Prop}
In view of Proposition \ref{Ledrappier}, the strategy of the proof will be to find a fixed point for $L^*$.
When $g$ is a continuous function, the operator $L$ acts on $\C$ and $L^*$ acts on $\M$; the existence of a $g$-measure $\mu$ is then a straightforward consequence of the Schauder-Tychonoff theorem.
If $g$ is not continuous, $L$ does not preserve the set of continuous functions. More precisely, if $\D$ is the set of discontinuities of $g$ and $f$ is continuous, then the set of discontinuities of $Lf$ is $T\D$. Still, as $g$ is bounded, $L$ acts on the space $\B$ of bounded functions. More precisely $\Vert Lf\Vert\le\Vert f\Vert$, where $\Vert\ .\ \Vert$ is the uniform norm.
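Indeed, the last bound is immediate from \eqref{g-function}: for every $x\in X$,
\[
|Lf(x)|\le\sum_{a\in A}g(xa)|f(xa)|\le\Vert f\Vert\sum_{a\in A}g(xa)=\Vert f\Vert.
\]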
Therefore $L^*$ acts on $\B'$, the topological dual space of $\B$, i.e.
$$
L^*\alpha(f)=\alpha(Lf)
$$
for all $\alpha\in \B'$ and $f\in \B$.
Firstly, the existence of a fixed point $\Lambda\in{\B}'$ for $L^*$ will be proved. Then the hypotheses {\bf (H1)} and {\bf (H2)} will be shown to imply $\mu({\D})=0$ and $\mu(T{\D})=0$, where $\mu$ is the restriction of $\Lambda$ to the continuous functions. Finally, we will use these two equalities to prove that $\mu$ is indeed a $g$-measure.
\begin{Prop}
There exists a positive functional $\Lambda\in \B'$ with $\Lambda(\1)=1$ such that $L^*\Lambda=\Lambda$.
\end{Prop}
\begin{proof}
Consider the following subset $C$ of $\B'$
$$
C=\{\alpha\in \B', \alpha(\1)=1\ \ \mbox{and}\ \ \alpha(f)\geq 0\ \ \mbox{for all}\ \ f\geq 0\}.
$$
We consider the weak star topology on $\B'$ and $C$. In order to apply the Schauder-Tychonoff theorem (\cite{Dunford-Schwartz} V.10.5), we need $L^*$ to be well defined and continuous for the weak star topology, and $C$ to be compact for this topology, nonempty and convex (the last two properties are straightforward). The continuity of $L^*$ is given by a simplification of the proof in \cite{BPS}. The compactness of $C$ follows from the Banach-Alaoglu theorem (\cite{Dunford-Schwartz} V.4.2), as $C$ is a closed subset of the unit ball of $\B'$.
\end{proof}
Since $\Lambda_{\mid \C}$ is a positive linear form on $\C$, the Riesz
representation theorem implies that there exists $\mu$, a positive Borel
measure, such that:
$$\forall f\in \C:\ \ \Lambda(f)=\mu(f).$$
In particular, $\mu(\1)=\Lambda(\1)=1$ so that
$\mu$ is a probability measure.
For all $f\in \C$,
$\Lambda(Lf)=\Lambda(f)=\mu(f)$. But $Lf$ is not
necessarily continuous at points
of $T\D$. Notice though that if $f\in\C$ and $Lf\in\C$ then $\mu(f)=\mu(Lf)$. What remains to prove is that this is true for any $f\in\C$.
Two more lemmas are needed to go on further in the proof.
\begin{Lem}
$$
P_g(T\D)\le P_g(\D)
$$
\end{Lem}
\begin{proof}
By definition:
$$
P_{g}(T\D)=\limsup_{n\to\infty}\frac{1}{n}\log
\sum_{{B\in\C_{n}}\atop{B\cap T\D\neq\emptyset}}\sup_{B}g_{n}.
$$
Let $B\in\C_{n}$ be such that $B\cap T\D\neq\emptyset$. Then there
exists $C\in \C_{n+1}$ such
that $C\cap\D\neq\emptyset$; more precisely, there exists $a\in A$ such that $C=C_1(a)\cap T_a^{-1}(B)$.
Moreover, if $x\in B$, then $T_a^{-1}(x)\in\E_n$ and
$$
g_{n}(x)\le\frac{g_{n+1}(T_{a}^{-1}(x))}{g(T_{a}^{-1}(x))}\le\frac{1}{\inf_{\E_n}
g}\sup_{C}g_{n+1}.
$$
Since $\E_{n+1}\subset \E_n$, we have $\sup_{B}g_{n}\le\frac{1}{\inf_{\E_N} g}\sup_{C}g_{n+1}$ for $n\geq N$. Recall that $\inf_{\E_N} g>0$ by
hypothesis {\bf(H1)}. It follows that, for $n\geq N$:
$$
\sum_{{B\in\C_{n}}\atop{B\cap T\D\neq\emptyset}}\sup_{B}g_{n}\le\frac{1}{\inf_{\E_N} g}
\sum_{{C\in\C_{n+1}}\atop{C\cap\D\neq\emptyset}}\sup_{C}g_{n+1}
$$
and thus:
\begin{eqnarray*}
\limsup_{n\to\infty}\frac{1}{n}\log\sum_{{B\in\C_{n}}\atop{
B\cap T\D\neq\emptyset}}\sup_{B}g_{n} \le
\lim_{n\to\infty}\frac{1}{n}\log\frac{1}{\inf_{\E_N}
g}+\limsup_{n\to\infty}\frac{1}{n+1}\log
\sum_{{C\in\C_{n+1}}\atop{C\cap\D\neq\emptyset}}\sup_{C}g_{n+1}.
\end{eqnarray*}
Since the first limit on the right-hand side is zero, this yields $P_{g}(T\D)\le P_{g}(\D)$.
\end{proof}
\begin{Lem}\label{regular}
For all Borel sets $B$,
$$
\mu(B)\le\inf\{\Lambda(O),\ O\ \mbox{open}, \,O\supset B\}
$$
\end{Lem}
\begin{proof}
Since $\mu$ is a regular measure (as a Borel measure on a compact set):
$$
\mu(B)=\inf\{\mu(O),\ O\ \mbox{open},\ O\supset B\}.
$$
Let us fix an open set $O$ and show that $\mu(O)\le\Lambda(O)$; this will
prove the lemma.
Take
$\varepsilon>0$. Using again the regularity of $\mu$, there exists
$K_{\varepsilon}$, a compact
subset of $O$, such that:
$$
\mu(O)<\mu(K_{\varepsilon})+\varepsilon.
$$
Let $f_{\varepsilon}:X\to[0,1]$ be continuous and such that:
$$\left\{\begin{array}{l}
f_{\varepsilon}=1\ \ \mbox{in}\ \ K_{\varepsilon}\\
f_{\varepsilon}=0\ \ \mbox{in}\ \ O^{c}\\
f_{\varepsilon}\le 1\ \ \mbox{in}\ \ O\setminus K_{\varepsilon}.
\end{array}
\right.$$
On one hand, $f_{\varepsilon}\le \1_{O}$ so that
$$
\mu(f_{\varepsilon})=\Lambda(f_{\varepsilon})\le\Lambda(O)
\hbox{ and }
\sup_{\varepsilon>0}\mu(f_{\varepsilon})\le\Lambda(O).
$$
On the other hand,
$\mu(f_{\varepsilon})\geq\mu(K_{\varepsilon})>\mu(O)-\varepsilon$ so that:
$$
\mu(O) < \mu(K_\varepsilon)+ \varepsilon \leq \mu(f_\varepsilon) +
\varepsilon
$$
and $\mu(O)\leq\sup_{\varepsilon>0}\mu(f_{\varepsilon})\leq\Lambda(O)$.
\end{proof}
Now, we claim the following:
\begin{Lem}\label{zero_measure}
$$\mu(\D)=0\ \ \mbox{and}\ \ \mu(T\D)=0.$$
\end{Lem}
\begin{proof}
The claim will follow from Lemma \ref{regular} if we can find open neighborhoods $V$
of $\D$ and $W$ of $T\D$ with
$\Lambda(V)$ and $\Lambda(W)$ arbitrarily small. Let us write the proof for $\D$. The same scheme will work for $T\D$.
Recall that $\C_n(\D)=\cup\{C\in\C_{n}, C\cap\D\neq\emptyset\}$.
Using the fixed point property of $\Lambda$ and the definition of pressure, we get, for any $\delta>0$, $N(\delta)$
such that, for all $n>N(\delta)$:
$$
\Lambda(\C_n(\D))=\Lambda(L^n\1_{\C_n(\D)})
\le\sum_{C\in \C_n(\D)}\sup_C g_{n}
\le (e^{P_g(\D)+\delta})^n.
$$
Taking $\delta=-P_{g}(\D)/2$,
which
is positive by the main hypothesis {\bf (H2)}, we get $\lim_{n\to\infty}\Lambda(\C_n(\D))=0$.
Since, for every $n$, $\C_n(\D)$ is an open neighbourhood of $\D$, the claim follows.
\end{proof}
Finally, the proof of the main theorem proceeds as follows:
\begin{proof}
Fix a non-negative $f\in C(X)$.
Since $\mu$ is regular (as a Borel measure on a compact set) and since $\mu(\D)=\mu(T\D)=0$ (Lemma \ref{zero_measure}), for each $\varepsilon>0$ there exist an open neighbourhood $U_{\varepsilon}$ of $\D$ and an open neighbourhood $V_{\varepsilon}$ of $T\D$ such that $\mu(U_{\varepsilon})<\varepsilon$ and $\mu(V_{\varepsilon})<\varepsilon$. Let $W_{\varepsilon}=U_{\varepsilon}\cap T^{-1}V_{\varepsilon}$. This is also a neighbourhood of $\D$ such that $\mu(W_{\varepsilon})<\varepsilon$. Moreover, as $TW_{\varepsilon}\subset V_{\varepsilon}$, it follows that $\mu(TW_{\varepsilon})<\varepsilon$.
Consider now $f_{\varepsilon}$ with compact support in $X\setminus \D$
such that:
$$\left\{\begin{array}{l}
f_{\varepsilon}=f\ \ \mbox{in}\ \ X\setminus W_{\varepsilon}\\
f_{\varepsilon}\le f\ \ \mbox{in}\ \ W_{\varepsilon}.
\end{array}
\right.$$
First, $Lf_{\varepsilon}$ is continuous on $X$. Indeed, $f_{\varepsilon}$ is continuous on $X$, so $Lf_{\varepsilon}$ is continuous on $X\setminus T\D$. Now, if $x\in T\D$, one easily checks that the potentially discontinuous part of $Lf_{\varepsilon}$ actually vanishes.
This continuity implies $\mu(Lf_{\varepsilon})=\mu(f_{\varepsilon})$ and
\begin{eqnarray*}
\vert\mu(Lf)-\mu(f)\vert
&=&\vert\mu(Lf_{\varepsilon})+\mu(L(f-f_{\varepsilon}))-\mu(f)\vert
\\
&=&\vert\mu(f_{\varepsilon}-f)+\mu(L(f-f_{\varepsilon}))\vert\\
&\le& 2\Vert f\Vert\,\mu(W_{\varepsilon})+\mu(L(f-f_{\varepsilon})).
\end{eqnarray*}
We need to show that $\mu(L(f-f_{\varepsilon}))$ is small. By definition of $f_{\varepsilon}$,
$$
L(f-f_{\varepsilon})(x)=\sum_{a\in A}(f-f_{\varepsilon})(ax)g(ax)\1_{W_{\varepsilon}}(ax)
$$
therefore
\begin{eqnarray*}
\mu(L(f-f_{\varepsilon})) &\le& \Vert g\Vert\ \Vert f-f_{\varepsilon}\Vert\sum_{a\in A}\mu(\1_{W_{\varepsilon}}\circ T_a^{-1}) \\
&\le& 2\Vert g\Vert\ \Vert f\Vert \sum_{a\in A}\mu(T_a(W_{\varepsilon})) \\
&\le& (2\Vert g\Vert\ \Vert f\Vert |A|)\mu(TW_{\varepsilon}).
\end{eqnarray*}
Letting $\varepsilon$ go to zero gives $\mu(Lf)=\mu(f)$.
\end{proof}
\section{Proofs of Corollary \ref{coro} and Theorem \ref{theo:ExistsUniqueMixing}}
\begin{proof}[Proof of Corollary \ref{coro} using $\{{\bf (H1')},{\bf (H2')}\}$]
In view of Theorem \ref{existence}, it is enough to show that hypothesis {\bf (H1')} and {\bf (H2')} imply {\bf (H2)}.
Under hypothesis {\bf (H1')}
\begin{equation}\label{eq1}
g_{n}(x)\leq (1-(|A|-1)\varepsilon)^{n}\,\,\textrm{for any}\,\,\,x.
\end{equation}
It follows that
\[
P_{g}(\mathcal{D})\leq \limsup_{n\rightarrow+\infty}\frac{1}{n}\log |\mathcal{D}^{n}|(1-(|A|-1)\varepsilon)^{n}.
\]
Now, under {\bf (H2')}, there exists $\alpha\in(0,1)$ such that $|\mathcal{D}^{n}|\leq(\frac{1}{1-(|A|-1)\varepsilon})^{n(1-\alpha)}$ for any sufficiently large $n$. Thus,
\begin{align*}
P_{g}(\mathcal{D})&\leq\limsup_{n\rightarrow+\infty}\frac{1}{n}\log (1-(|A|-1)\varepsilon)^{-n(1-\alpha)}(1-(|A|-1)\varepsilon)^{n}\\&=\limsup_{n\rightarrow+\infty}\frac{1}{n}\log (1-(|A|-1)\varepsilon)^{n\alpha}\\&=\alpha\log (1-(|A|-1)\varepsilon)<0.
\end{align*}
\end{proof}
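To make the estimate concrete, here is a small worked instance of the bound above (the numerical values are chosen purely for illustration and do not come from the text):

```latex
% Illustrative instance: take |A| = 2 and \varepsilon = 1/4 in (H1'),
% so that g_n(x) \le (1-(|A|-1)\varepsilon)^n = (3/4)^n.
% If (H2') holds with \alpha = 1/2, i.e. |\mathcal{D}^n| \le (4/3)^{n/2}
% for all sufficiently large n, then
\[
P_g(\mathcal{D})
\;\le\; \limsup_{n\to\infty}\frac{1}{n}
\log\Big(\tfrac{4}{3}\Big)^{n/2}\Big(\tfrac{3}{4}\Big)^{n}
\;=\;\tfrac{1}{2}\log\tfrac{3}{4}\;<\;0,
\]
% so (H2) holds and Theorem \ref{existence} applies.
```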
\begin{proof}[Proof of Corollary \ref{coro} using $\{{\bf (H1)},{\bf (H2')},{\bf (H3)}\}$]
In view of Theorem \ref{existence}, it is enough to show that hypothesis {\bf (H1)}, {\bf (H2')} and {\bf (H3)} imply {\bf (H2)}.
Under {\bf (H1)}
\begin{align*}
\forall n\geq N+1, \forall x\in\E_n, g(x)\le 1-(\vert A\vert-1)\varepsilon.
\end{align*}
Take $B\in\C_n(\D)$ and $x\in B$. Hypothesis {\bf (H3)} implies that $T^ix\in\E_{n-1-i}\subset\E_N$ for all $i\in\{1,\ldots,n-N-1\}$.
Therefore the identity $g_n(x)=g_{n-N}(x)g_N(T^{n-N}x)$ entails for $n\geq N+1$
\begin{equation}\label{eq2}
\forall B\in\C_n(\D), \forall x\in B, g_{n}(x)\leq (1-(|A|-1)\varepsilon)^{n-N}.
\end{equation}
It follows that
\[
P_{g}(\mathcal{D})\leq \limsup_{n\rightarrow+\infty}\frac{1}{n}\log |\mathcal{D}^{n}|(1-(|A|-1)\varepsilon)^{n-N}.
\]
The rest of the proof runs as before, using hypothesis ${\bf (H2')}$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{theo:ExistsUniqueMixing}]
We already know that existence holds, thanks to hypotheses ${\bf (H1)}$ and ${\bf (H2)}$.
Remark 2 in \cite{JO} states that, if for some stationary $\mu$ we have
\[
\int_{X}\mu(dx)\sum_{n}[\textrm{var}_{n}(x)]^{2}<+\infty,
\]
then $\mu$ is unique. Notice that although the authors of \cite{JO} deal with continuous $g$-functions throughout their paper, their uniqueness result only requires the existence of a $g$-measure, which is what we have here.
For any point $x\in X$, the sequence $(\sum_{n=0}^{N}[\textrm{var}_{n}(x)]^{2})_{N\ge0}$ is monotonically increasing and positive, therefore
\begin{align*}
\int_{X}\mu(dx)\sum_{n}[\textrm{var}_{n}(x)]^{2}&=\lim_{N}\int_{X}\mu(dx)\sum_{n=0}^{N}[\textrm{var}_{n}(x)]^{2}\\
&=\lim_{N}\sum_{n=0}^{N}\int_{X}\mu(dx)[\textrm{var}_{n}(x)]^{2}\\
&=\sum_{n}\int_{X}\mu(dx)[\textrm{var}_{n}(x)]^{2}\\
&=\sum_{n}\sum_{x_{-n}^{0}\in A^{n+1}}\mu(x_{-n}^{0})[\textrm{var}_{n}(x_{-n}^{0})]^{2}\,,
\end{align*}
where we used the Beppo-Levi Theorem in the first line, and the fact that $\textrm{var}_{n}(x)$ only depends on $x_{-n}^{0}$ in the last line. We now split the sum into two parts as follows:
\begin{align*}
\int_{X}\mu(dx)\sum_{n}[\textrm{var}_{n}(x)]^{2}&=\sum_{n}\left[\sum_{x_{-n}^{0}\in\mathcal{D}^{n+1}}\mu(x_{-n}^{0})[\textrm{var}_{n}(x_{-n}^{0})]^{2}\right.\\
&\hspace{2cm}+\left.\sum_{x_{-n}^{0}\in A^{n+1}\setminus\mathcal{D}^{n+1}}\mu(x_{-n}^{0})[\textrm{var}_{n}(x_{-n}^{0})]^{2}\right]
\end{align*}
For the first term of the right-hand side of the equality, we bound $\textrm{var}_n(x_{-n}^0)$ above by $1$ and use the fixed point property of the $g$-measure $\mu$ to obtain, for any $\delta >0$, the existence of $N(\delta)$ such that for all $n> N(\delta)$:
\begin{align*}
\sum_{x_{-n}^{0}\in\mathcal{D}^{n+1}}\mu(x_{-n}^{0})&=\mu(\C_{n+1}(\D))=\mu(L^{n+1}\1_{\C_{n+1}(\D)})\\
&\le\sum_{C\in\C_{n+1}(\D)}\sup_Cg_{n+1}\le(e^{P_g(\D)+\delta})^{n+1}.
\end{align*}
Taking $\delta=-P_g(\D)/2$ (which is strictly positive by hypothesis ${\bf (H2)}$) proves
$$\sum_n\sum_{x_{-n}^{0}\in\mathcal{D}^{n+1}}\mu(x_{-n}^{0})<\infty.$$
It remains to consider the second term. Recall that if $x_{-n}^{0}\in A^{n+1}\setminus\mathcal{D}^{n+1}$ then there exists $v\in\tau^g$ with $|v|\le n$ such that $v$ is a prefix of $x_{-n}^{-1}$ (denoted by $x_{-n}^{-1}\geq v$). Using {\bf (H4)}, it follows that
\begin{align*}
\sum_{n}\sum_{x_{-n}^{0}\in A^{n+1}\setminus\mathcal{D}^{n+1}}\mu(x_{-n}^{0})[\textrm{var}_{n}(x_{-n}^{0})]^{2}&=\sum_{n}\sum_{v\in\tau^g:|v|\le n+1}\sum_{x_{-n}^{0}\geq v}\mu(x_{-n}^{0})[\textrm{var}_{n}(x_{-n}^{0})]^{2}\\
&=\sum_{n}\sum_{v\in\tau^g:|v|\le n+1}\mu(v)(\textrm{var}_{n}^{v})^{2}\\
&=\sum_{n}\sum_{v:|v|=n}\mu(v)R_{v}\\&=\sum_{v\in\tau^g:|v|<+\infty}\mu(v)R_{v}<+\infty.
\end{align*}
\end{proof}
\section{Questions and perspectives}
Notice that existence is ensured by an assumption on the set of discontinuous pasts, whereas uniqueness involves a condition on the set of continuous pasts. For continuous chains, \cite{JO} obtained conditions on the continuity rate of the kernel ensuring uniqueness. Making the necessary changes in the hypotheses, Theorem \ref{theo:ExistsUniqueMixing} states that for a discontinuous kernel, the same kind of conditions can be used, restricted to the set of continuous pasts, when the measure does not charge the discontinuous pasts.
Concerning mixing properties,
it is known (using the results of \cite{Comets} for example) that chains having summable continuity rate enjoy summable $\phi$-mixing rate. It is natural to expect that, like for the problem of uniqueness, the chains we consider will enjoy the same mixing properties under the same assumption, restricted to the set of continuous pasts.
Finally, it is worth mentioning an interesting parallel with the literature on non-Gibbs states. In this literature, there are examples of stationary measures that are not \emph{almost-Gibbs}, meaning that there exist stationary measures that give positive weight to the set of discontinuities with respect to both past and future. We do not enter into details and refer to \cite{maes/redig/vanMoffaert/leuven/1999} for the definition of this notion. As far as we know, no such example exists in the world of $g$-measures. More precisely, an interesting question is whether there exist examples of stationary $g$-measures that are not \emph{almost-regular}, or if, on the contrary, $\mu({\D})=0$ holds for every stationary $g$-measure.\\
\noindent{\bf Acknowledgement} We are grateful to X. Bressaud for interesting discussions during the Workshop Jorma's Razor II.
\bibliographystyle{agsm}
\bibliography{paccaut}
\vskip 10pt
\noindent Sandro Gallo \\
{\sc Instituto de Matem\'atica, Universidade Federal de Rio de Janeiro} \\
{\tt sandro@im.ufrj.br}
\vskip 10pt
\noindent Fr\'ed\'eric Paccaut \\
{\sc Laboratoire Ami\'enois de Math\'ematiques Fondamentales et Appliqu\'ees cnrs umr 7352,} \\
{\sc Universit\'e de Picardie Jules Verne} \\
{\tt frederic.paccaut@u-picardie.fr}
\end{document} | 0.001345 |
\begin{document}
\title[Explicit formulae for the scaling limits]{Explicit formulae for the scaling limits in the
ergodic decomposition of infinite Pickrell measures}
\begin{abstract}
{The main result of this paper, Theorem \ref{main-result}, gives explicit formulae for the kernels of the ergodic decomposition measures for infinite Pickrell measures on spaces of infinite complex matrices. The kernels are obtained as the scaling limits of Christoffel-Uvarov deformations of Jacobi
orthogonal polynomial ensembles.
}\end{abstract}
\author{Alexander I. Bufetov, Yanqi Qiu}
\address{A.B.:Institut de Math\'ematiques de Marseille, Aix-Marseille Universit{\'e}, CNRS, Marseille}
\address{Steklov Institute of Mathematics,
Moscow}
\address{Institute for Information Transmission Problems,
Moscow}
\address{National Research University Higher School of Economics,
Moscow}
\address{Rice University, Houston TX}
\address{Y. Q.: Institut de Math\'ematiques de Marseille, Aix-Marseille Universit{\'e}, Marseille}
\maketitle
\section{Introduction.}
\subsection{Outline of the main results}
\subsubsection{Pickrell measures}
We start by recalling the definition of Pickrell measures \cite{Pickrell90}.
Our presentation follows \cite{Bufetov_inf-deter}.
Given a parameter $s \in \R$ and a natural number $n$, consider
a measure $\mu_n^{(s)}$ on the space $\text{Mat}(n, \C)$ of $n\times n$-complex matrices, given by the formula
\begin{equation}\label{pick-def}
{\mu}_n^{(s)}=\mathrm{const}_{n,s}\det(1+{z}^*{z})^{-2n-s}dz.
\end{equation}
Here $dz$ is the Lebesgue measure on the space of matrices, and $\mathrm{const}_{n,s}$ a normalization constant whose choice will be explained later. Note that the measure $\mu_n^{(s)}$ is finite if $s>-1$ and infinite if $s\leq -1$.
If the constants $\mathrm{const}_{n,s}$ are chosen appropriately, then the sequence of measures (\ref{pick-def}) has
the Kolmogorov property of consistency under natural projections: the push-forward of the measure
${\mu}_{n+1}^{(s)}$ under the natural projection of cutting the $n\times n$-corner of a $(n+1)\times (n+1)$-matrix is precisely the measure ${\mu}_n^{(s)}$. This consistency property also holds for infinite
measures, provided $n$ is sufficiently large. The consistency property and the Kolmogorov Existence Theorem allow one to define the Pickrell measure $\mu^{(s)}$ on the space of infinite complex matrices $\text{Mat}(\N, \C)$, which is finite if $s>-1$ and infinite if $s\leq -1$.
Let $U(\infty)$ be the infinite unitary group $$U(\infty) = \bigcup_{n\in\N} U(n),$$ and let $G = U(\infty) \times U(\infty)$. Groups like $U(\infty)$ or $G$ are regarded as nice ``big groups'': they are not locally compact, but they are inductive limits of compact groups.
The space $\text{Mat}(\N, \C)$ can be naturally considered as a $G$-space given by the action \begin{align*} T_{u_1, u_2}( z) = u_1 z u_2^*, \text{ for } (u_1, u_2) \in G, z \in \text{Mat}(\N, \C).\end{align*} By definition, the Pickrell measures are $G$-invariant. The ergodic decomposition of Pickrell measures with respect to the $G$-action was studied in \cite{BO-infinite-matrices} in the finite case and in \cite{Bufetov_inf-deter} in the infinite case. The ergodic $G$-invariant probability measures on $\text{Mat}(\N, \C)$ admit an explicit classification,
due to Pickrell \cite{Pickrell90}, to which Olshanski and Vershik \cite{Olsh-Vershik} gave a different approach: let $\mathfrak{M}_{\text{erg}} (\text{Mat}(\N, \C))$ be the set of ergodic probability measures and define the Pickrell set by \begin{align*} \Omega_P = \left\{ \omega = (\gamma, x) : x = (x_1 \ge x_2 \ge \dots \ge x_i \ge \dots \ge 0), \sum_{i = 1}^\infty x_i \le \gamma\right\},\end{align*} then there is a natural identification: $$\begin{array}{ccc} \Omega_P & \leftrightarrow & \mathfrak{M}_{\text{erg}} (\text{Mat}(\N, \C)) \\ \omega & \leftrightarrow & \eta_\omega\end{array}.$$
Set \begin{align*} \Omega_P^{0} : = \left\{ \omega = (\gamma, x) \in \Omega_P: x_i > 0 \text{ \, for all $i$, and \, } \gamma = \sum_{i = 1}^\infty x_i \right\}. \end{align*}
The finite Pickrell measures $\mu^{(s)}$ admit the following unique ergodic decomposition \begin{align}\label{erg-dec} \mu^{(s)} = \int_{\Omega_P} \eta_\omega d \overline{\mu}^{(s)} (\omega). \end{align}
Borodin and Olshanski \cite{BO-infinite-matrices} proved that the decomposition measures $\overline{\mu}^{(s)}$ live on $\Omega_P^0$, i.e., $\overline{\mu}^{(s)} (\Omega_P\setminus \Omega_P^0) = 0 $. Let $\mathbb{B}^{(s)}$ denote the push-forward of $\overline{\mu}^{(s)}$ under the following map: $$\begin{array}{cccc} \text{conf}: & \Omega_P^0 & \rightarrow & \text{Conf} (( 0, \infty)) \\ & \omega & \mapsto & \{ x_1, x_2, \dots, x_i, \dots \} \end{array}. $$ The above $\overline{\mu}^{(s)}$-almost sure bijection identifies the decomposition measure $\overline{\mu}^{(s)}$ on $\Omega_P$ with the measure $\mathbb{B}^{(s)}$ on $\text{Conf}((0, \infty))$; for this reason, the measure $\mathbb{B}^{(s)}$ will also be called the decomposition measure of the Pickrell measure $\mu^{(s)}$. It is shown that $\mathbb{B}^{(s)}$ is a determinantal measure on $\text{Conf}((0, \infty))$ with correlation kernel \begin{align}\label{bessel-kernel-mod} J^{(s)} (x_1,x_2) = \frac{1}{x_1x_2} \int_0^1 J_s\left(2\sqrt{\frac{t}{x_1}}\right) J_s\left(2\sqrt{\frac{t}{x_2}}\right)dt .\end{align} The change of variable $y = 4/x$ reduces the kernel $J^{(s)}$ to the well-known kernel $\widetilde{J}^{(s)}$ of the Bessel point process of Tracy and Widom in \cite{Tracy-Widom94}: \begin{align*} \widetilde{J}^{(s)} (x_1, x_2) = \frac{1}{4} \int_0^1 J_s(\sqrt{tx_1}) J_s(\sqrt{tx_2}) dt . \end{align*}
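For the reader's convenience, the reduction by the change of variable $y = 4/x$ can be checked as follows (a routine verification that is not spelled out in the text):

```latex
% With y_i = 4/x_i we have t/x_i = t y_i / 4, hence
% J_s(2\sqrt{t/x_i}) = J_s(\sqrt{t y_i}), and therefore
\[
J^{(s)}(x_1, x_2)
= \frac{y_1 y_2}{16}\int_0^1 J_s(\sqrt{t y_1})\,J_s(\sqrt{t y_2})\,dt
= \frac{y_1 y_2}{4}\,\widetilde{J}^{(s)}(y_1, y_2).
\]
% A correlation kernel transforms under a change of variable x = x(y)
% by the square roots of the Jacobian factors:
\[
K_Y(y_1, y_2)
= K_X\big(x(y_1), x(y_2)\big)\,\sqrt{|x'(y_1)|\,|x'(y_2)|},
\qquad |x'(y)| = \frac{4}{y^2},
\]
% and the factor \sqrt{|x'(y_1)||x'(y_2)|} = 4/(y_1 y_2) cancels
% the prefactor y_1 y_2 / 4, leaving exactly \widetilde{J}^{(s)}.
```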
When $s \le -1$, the ergodic decomposition of the infinite Pickrell measure $\mu^{(s)}$ was described in \cite{Bufetov_inf-deter}, the decomposition formula takes the same form as \eqref{erg-dec}, while this time, the decomposition measure $\overline{\mu}^{(s)}$ is an infinite measure on $\Omega_P$ and again, we have $\overline{\mu}^{(s)} (\Omega_P \setminus \Omega_P^0) = 0$. The $\overline{\mu}^{(s)}$-almost sure bijection $\omega \rightarrow \text{conf}(\omega)$ identifies $\overline{\mu}^{(s)}$ with an infinite determinantal measure $\mathbb{B}^{(s)}$ on $\text{Conf} ((0, \infty))$. One suitable way to describe $\mathbb{B}^{(s)}$ is via the multiplicative functionals, for which we recall the definition: a multiplicative functional on $\text{Conf}((0, \infty))$ is obtained by taking the product of the values of a fixed nonnegative function over all particles of a configuration: \begin{align*} \Psi_g (X) = \prod_{x \in X} g(x) \text{ \, for any $X \in \text{Conf}((0, \infty))$} . \end{align*} If the function $g: (0, \infty) \rightarrow (0, 1)$ is suitably chosen, then \begin{align} \label{deter-proba} \frac{ \Psi_g \mathbb{B}^{(s)} }{\int_{\textnormal{Conf} ( (0, \infty) )} \Psi_g d \mathbb{B}^{(s)}} \end{align} is a determinantal measure on $\text{Conf}((0, \infty)) $ whose correlation kernel coincides with that of an orthogonal projection $\Pi^g: L^2(0, \infty) \rightarrow L^2(0, \infty)$. Note that the range $\text{Ran} (\Pi^g)$ of this projection is explicitly given in \cite{Bufetov_inf-deter}.
However, even for simple $g$, the explicit formula for the kernel of $\Pi^g$ turns out to be non-trivial. Our aim in this paper is to give explicit formulae for the kernel of the operator $\Pi^g$ for suitable $g$. The kernels are obtained as the scaling limits of the Christoffel-Darboux kernels associated to Christoffel-Uvarov deformation of Jacobi orthogonal polynomial ensembles.
\subsubsection{Formulation of the main result}
Let $f_1, \cdots, f_n$ be complex-valued functions on an interval admitting $n-1$ derivatives. We write $W(f_1, \dots, f_n)$ for the Wronskian of $f_1, \dots, f_n$, which, we recall, is defined by the formula $$W(f_1, \cdots, f_n) (t) = \left| \begin{array}{cccc} f_1(t) & f_1'(t) & \cdots & f_1^{(n-1)}(t) \\ f_2(t) & f_2'(t) & \cdots & f_2^{(n-1)}(t) \\ \vdots & \vdots & \ddots & \vdots \\ f_n(t) & f_n'(t) & \cdots & f_n^{(n-1)}(t) \end{array} \right|. $$
For $s^{\prime}>-1$, we write $$J_{s^{\prime}, y} (t) \stackrel{\mathrm{def}}{=} J_{s^{\prime}} (t \sqrt{y}), \quad K_{s^{\prime}, v_j} (t) \stackrel{\mathrm{def}}{=} K_{s^{\prime}} (t \sqrt{v_j}),$$ where
$ J_{s^{\prime}}$ stands for the Bessel function, $K_{s^{\prime}}$ for the modified Bessel function of the second kind.
The main result of this paper is given by the following
\begin{thm}\label{main-result}
Let $s \le -1$ and let $m$ be any natural number such that $ s + m > -1$. Assume that $v_1, \dots, v_m$ are distinct positive real numbers. Then for the function \begin{align}\label{main-g} g (x) = \prod_{j = 1}^m \frac{4/x}{4/x + v_j} = \prod_{j= 1}^m \frac{4}{4 + v_j x}, \end{align} the kernel $\Pi^g$ is given by the formula \begin{align*} \Pi^g(x, x') = \frac{1}{2} \cdot \frac{\left| \begin{array}{cc} A^{(s + m, v)} (1, 4/x) & B^{(s + m, v)} (1, 4/ x) \vspace{3mm}\\ A^{(s + m, v)} (1, 4/x') & B^{(s + m, v)} (1, 4/ x') \end{array} \right|}{ \prod_{j = 1}^m \sqrt{(v_j + 4/x) (v_j + 4/x')} \cdot [C^{(s+m, v )}(1)]^2 \cdot (x' - x)}, \end{align*} where \begin{align*} A^{(s+m , v)} (t, y) = W ( K_{s+m, v_1}, \dots, K_{s+m, v_m}, J_{s+m, y}) (t) , \end{align*} \begin{align*} B^{(s+m, v)} (t, y) = \frac{\partial A^{(s+m, v)}}{\partial t} (t, y) , \end{align*} \begin{align*} C^{(s+m, v)} (t) = W ( K_{s+m, v_1}, \dots, K_{s+m, v_m}) (t).\end{align*}
\end{thm}
\begin{rem}
When $s > -1$, the above theorem still holds for any $m \ge 1$. In this case, for the same $g$ as given in \eqref{main-g}, by results of \cite{Bufetov_inf-deter}, the kernel $\Pi^g$ obtained above is the kernel of the operator of orthogonal projection from $L_2(\R_{+}, \text{Leb})$ onto the subspace $ \sqrt{ g}\, \mathrm{Ran}\, J^{(s)}$ (here, with a slight abuse of notation, we let $J^{(s)}$ denote the operator of orthogonal projection with kernel given in \eqref{bessel-kernel-mod}). Even in this case, however, the only way we can derive the explicit formula given above for the kernel $\Pi^g$ is by using the method of scaling limits.
\end{rem}
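To illustrate the notation in Theorem \ref{main-result}, the simplest case $m = 1$ can be written out explicitly; this is just the $2\times 2$ Wronskian expanded, and does not appear as such in the text:

```latex
% For m = 1, writing v = (v_1), the Wronskians reduce to
\[
C^{(s+1, v)}(t) = K_{s+1}(t\sqrt{v_1}),
\]
\[
A^{(s+1, v)}(t, y)
= K_{s+1}(t\sqrt{v_1})\,\sqrt{y}\,J_{s+1}'(t\sqrt{y})
- \sqrt{v_1}\,K_{s+1}'(t\sqrt{v_1})\,J_{s+1}(t\sqrt{y}),
\]
% and B^{(s+1, v)}(t, y) is obtained by differentiating the last
% expression in t.
```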
\subsection{Organization of the paper}
The remainder of the paper is organized as follows.
Section 2 is devoted to some preliminary Mehler-Heine type asymptotics for Jacobi polynomials;
these asymptotics will be used in the explicit calculation of the scaling limits in Section 4.
In Section 3, we show that, for three kinds of auxiliary functions $g$, the scaling limits of the Christoffel-Darboux kernels for the Christoffel-Uvarov deformations of Jacobi orthogonal polynomial ensembles coincide with the kernels $\Pi^g$ which generate the determinantal probability given in \eqref{deter-proba}.
In Section 4, we continue the study of the three kinds of auxiliary functions $g$. In Case I, we illustrate how the scaling limits are calculated; the scaling limits obtained are the kernels of determinantal processes which are deformations of the Bessel point process of Tracy and Widom. The main formulae in Theorem \ref{main-result} follow from the formulae obtained in Case II, given in Theorem \ref{thm-case2-1}, after the change of variable $ z \rightarrow 4/x$.
{\bf {Acknowledgements.}}
Grigori Olshanski posed the problem to us, and we are greatly indebted to him.
We are deeply grateful to Alexei M. Borodin, who suggested using the Christoffel-Uvarov deformations
of Jacobi orthogonal polynomial ensembles. We are deeply grateful to Alexei Klimenko for useful discussions.
The authors are supported by A*MIDEX project (No. ANR-11-IDEX-0001-02), financed by Programme ``Investissements d'Avenir'' of the Government of
the French Republic managed by the French National Research Agency (ANR).
A. B. is also supported in part
by the Grant MD-2859.2014.1 of the President of the Russian Federation,
by the Programme ``Dynamical systems and mathematical control theory''
of the Presidium of the Russian Academy of Sciences, by the ANR
under the project ``VALET'' of the Programme JCJC SIMI 1,
and by the
RFBR grants 11-01-00654, 12-01-31284, 12-01-33020, 13-01-12449.
Y. Q. is supported in part by the ANR grant 2011-BS01-00801.
\section{Preliminary asymptotic formulae.}
\subsection{Notation} If $A, B$ are two quantities depending on the same variables, we write $A\asymp B$ if there exist two absolute constants $c_1, c_2 > 0$ such that $c_1 \le \left| \frac{A}{B}\right| \le c_2$. When $A$ and $B$ are positive quantities, we write $A\lesssim B$ if there exists an absolute constant $c> 0$ such that $ A \le c B$.
For $\alpha, \beta> -1$, we denote the Jacobi weight on $(-1, 1)$ by $$w_{\alpha, \beta}(t) = (1- t)^\alpha ( 1 + t)^\beta.$$ The associated Jacobi polynomials are denoted by $P_n^{(\alpha, \beta)}$. The leading coefficient of $P_n^{(\alpha, \beta)}$ is denoted by $k_n^{(\alpha, \beta)}$ and $h_n^{(\alpha, \beta)} := \int_{-1}^{1} [P_n^{(\alpha, \beta)} (t) ]^2 w_{\alpha, \beta}(t)\, dt $. When $\alpha = s, \beta = 0$, we will always omit $\beta$ in the notation: so $w_{s, 0}$ will be denoted by $w_s$, $P_n^{(s, 0)}$ will be denoted by $P_n^{(s)}$ and the quantity $\Delta_{Q, n}^{(s, 0; \ell)}$ defined in the sequel will be denoted by $\Delta_{Q, n}^{(s ; \ell)}$, etc.
Given a sequence $(f^{(\alpha, \beta)}_n)_{n = 0}^\infty$ of functions depending on $\alpha, \beta$, we define the differences of the sequence by $$ \Delta_{f, n}^{(\alpha, \beta; \,0)} : = f_n^{(\alpha, \beta)}, \quad \text{ and for } \ell \ge 0, \Delta_{f, n}^{(\alpha, \beta; \,\ell + 1) } : = \Delta_{f, n+1}^{(\alpha, \beta; \,\ell) } - \Delta_{f, n}^{(\alpha, \beta; \,\ell)} . $$ By convention, we set $\Delta_{f, n}^{(\alpha, \beta; \, -1)} \equiv 0.$
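Unwinding the definition, the first two differences read (not stated explicitly in the text, but immediate from the recursion):

```latex
\[
\Delta_{P, n}^{(\alpha, \beta;\, 1)}
= P_{n+1}^{(\alpha, \beta)} - P_{n}^{(\alpha, \beta)},
\qquad
\Delta_{P, n}^{(\alpha, \beta;\, 2)}
= P_{n+2}^{(\alpha, \beta)} - 2P_{n+1}^{(\alpha, \beta)} + P_{n}^{(\alpha, \beta)}.
\]
```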
In what follows, $\kappa_n$ always stands for a sequence of natural numbers such that $$\lim_{n \to \infty} \frac{\kappa_n}{n} = \kappa > 0. $$ Typical such sequences are given by $\kappa_n = \lfloor \kappa n \rfloor. $
\subsection{Asymptotics for Higher Differences of Jacobi Polynomials.}
In this section, we establish some asymptotic formulae for higher differences of Jacobi polynomials $\Delta_{P, n}^{(\alpha, \beta; \,\ell)} .$
\begin{lem}
For $\ell \ge 0$ and $n \ge 1$, we have
\begin{align}\label{recursion1} \begin{split} (n+1) \Delta_{P, n}^{( \alpha, \beta; \,\ell+1)} (x) + \ell \Delta_{P, n+1}^{(\alpha, \beta; \,\ell)} (x) + \ell (1-x) \Delta_{P, n+1}^{( \alpha+1, \beta; \ell-1 )} (x) \\ + (n + \frac{\alpha + \beta}{2} + 1) (1-x) \Delta_{P, n}^{( \alpha + 1, \beta; \ell)} (x)= \alpha \Delta_{P, n}^{ (\alpha, \beta; \,\ell)} (x).
\end{split}
\end{align}
\end{lem}
\begin{proof}
When $\ell = 0$, identity \eqref{recursion1} reduces to the known formula (cf. \cite[4.5.4]{Szego-OP}): \begin{align}\label{recursion-difference} \begin{split} & (n + \frac{\alpha + \beta}{2} + 1)(1-x) P_n^{(\alpha + 1, \beta)} (x) \\ = & (n + 1) (P_n^{(\alpha, \beta)}(x) - P_{n+1}^{(\alpha, \beta)}(x) ) + \alpha P_n^{(\alpha, \beta)} (x).
\end{split} \end{align} Now assume that identity \eqref{recursion1} holds for an integer $\ell$ and for all $n\ge1$. In particular, substituting $n+1$ for $n$, we have \begin{align}\label{n+1} \begin{split} & (n+2) \Delta_{P, n+1}^{( \alpha, \beta; \,\ell+1)} (x) + \ell \Delta_{P, n+2}^{( \alpha, \beta; \,\ell )} (x)+ \ell (1-x) \Delta_{P, n+2}^{( \alpha+1, \beta; \ell-1)} (x) \\ & + (n + \frac{\alpha + \beta}{2} + 2) (1-x) \Delta_{P, n+1}^{( \alpha + 1, \beta; \ell) } (x)= \alpha \Delta_{P, n+1}^{( \alpha, \beta; \,\ell )}(x). \end{split} \end{align} Then (\ref{n+1}) $-$ (\ref{recursion1}) yields that \begin{align*} & (n+1) \Delta_{P, n}^{ (\alpha, \beta; \,\ell+2)} (x) + (\ell+1) \Delta_{P, n+1}^{( \alpha, \beta; \,\ell+1)} (x) + (\ell +1)(1-x) \Delta_{P, n+1}^{( \alpha+1, \beta; \ell)} (x) \\ & + (n + \frac{\alpha + \beta}{2} + 1) (1-x) \Delta_{P, n}^{( \alpha + 1, \beta; \ell + 1)} (x)= \alpha \Delta_{P, n}^{ (\alpha, \beta; \,\ell+1)} (x). \end{align*}
Thus \eqref{recursion1} holds for $\ell + 1$ and all $n \ge 1$. By induction, identity \eqref{recursion1} holds for all $\ell \ge 0$ and all $n \ge 1$.
\end{proof}
The classical Mehler-Heine theorem (\cite[p.192]{Szego-OP}) says that for $z \in \C\setminus \{ 0\}$, \begin{eqnarray} \label{MH}\lim_{n \to \infty} n^{-\alpha} P_n^{(\alpha, \beta)} \Big(1- \frac{z}{2 n^2} \Big) = 2^{\alpha} z^{-\frac{\alpha}{2}} J_\alpha (\sqrt{z}).\end{eqnarray} This formula holds uniformly for $z$ in a simply connected compact subset of $ \C\setminus \{0\}$.
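As a quick numerical sanity check (a sketch, not part of the argument), the Mehler-Heine asymptotics \eqref{MH} can be verified with SciPy's Jacobi and Bessel routines; the parameter values below are arbitrary test choices.

```python
# Numerical check of the Mehler-Heine asymptotics:
#   n^{-a} P_n^{(a,b)}(1 - z/(2n^2))  ->  2^a z^{-a/2} J_a(sqrt(z)).
# The values a, b, z are arbitrary test choices, not taken from the text.
import numpy as np
from scipy.special import eval_jacobi, jv

a, b, z = 0.5, 0.25, 2.0
limit = 2**a * z**(-a / 2) * jv(a, np.sqrt(z))
errs = []
for n in (50, 500, 5000):
    approx = n**(-a) * eval_jacobi(n, a, b, 1 - z / (2 * n**2))
    errs.append(abs(approx - limit))
print(errs)  # errors shrink as n grows
```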
Applying the above asymptotics, we have
\begin{prop}\label{jacobi-asymp}
In the regime $ x^{(n)} = 1 - \frac{z}{2 n^2},$ for $\ell \ge 0$, we have
\begin{eqnarray}\label{asymp-gen}\lim_{n \to \infty} n^{\ell-\alpha} \Delta_{P, \kappa_n}^{( \alpha, \beta; \,\ell)} (x^{(n)}) = 2^\alpha z^{\frac{\ell-\alpha}{2}} J_\alpha^{(\ell)} (\kappa \sqrt{z}).\end{eqnarray} The formula holds uniformly in $\kappa$ and $z$ as long as $\kappa$ ranges in a compact subset of $(0, \infty)$ and $z$ ranges in a compact simply connected subset of $\C\setminus\{0\}$.
\end{prop}
\begin{proof}
When $\ell = 0$, identity (\ref{asymp-gen}) reduces to the Mehler-Heine asymptotic formula \eqref{MH}, including the uniformity of the convergence. Now assume that identity \eqref{asymp-gen} holds for $0, 1, \cdots, \ell$; then by \eqref{recursion1}, we have \begin{flalign} \label{induction} & \lim_{n \to \infty} n^{\ell + 1 - \alpha} \Delta_{P, \kappa_n}^{ (\alpha, \beta; \,\ell+1)} (x^{(n)}) \\ = & - \frac{\ell}{\kappa} \cdot 2^{\alpha} z^{\frac{\ell - \alpha}{2}}J_\alpha^{(\ell)} (\kappa \sqrt{z}) - \frac{\ell}{\kappa} \cdot \frac{z}{2} 2^{\alpha + 1} z^{\frac{\ell-1 - (\alpha+1)}{2}} J_{\alpha + 1}^{(\ell-1)} (\kappa \sqrt{z}) \nonumber \\ & - \frac{z}{2} 2^{\alpha + 1} z^{\frac{\ell- (\alpha +1)}{2}} J_{\alpha + 1} ^{(\ell)} ( \kappa \sqrt{z}) + \frac{\alpha}{\kappa} 2^\alpha z^{\frac{\ell- \alpha}{2}} J_\alpha^{(\ell)}(\kappa \sqrt{z}) \nonumber \\ = & 2^\alpha z^{\frac{\ell + 1 - \alpha}{2}} \Big[ - \ell \cdot \frac{J_\alpha^{(\ell)}(\kappa \sqrt{z})}{ \kappa \sqrt{z}} - \ell \cdot \frac{J_{\alpha+1}^{(\ell - 1)}(\kappa \sqrt{z})}{ \kappa \sqrt{z}} - J_{\alpha + 1}^{(\ell)} (\kappa \sqrt{z}) + \alpha \frac{J_\alpha^{(\ell)}(\kappa\sqrt{z})}{\kappa \sqrt{z}}\Big]. \nonumber\end{flalign}
From the known recurrence relation (cf. \cite[9.1.27]{Ab})\begin{align} \label{differential-relation-J} J_\alpha'(z) = - J_{\alpha + 1} (z) + \frac{\alpha}{z} J_\alpha(z),\end{align} by induction on $\ell$, one readily sees that, for all $\ell \ge 1$, \begin{eqnarray} \label{bessel-der} z \Big[ J_\alpha^{(\ell + 1)} (z) + J_{\alpha + 1}^{(\ell )}(z)\Big] = (\alpha - \ell ) J_\alpha^{(\ell)} (z) - \ell J_{\alpha+1}^{(\ell-1)}(z). \end{eqnarray} Identity \eqref{asymp-gen} for $\ell + 1$ follows from \eqref{induction} and \eqref{bessel-der}, thus the proposition is proved by induction on $\ell$.
\end{proof}
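The recurrences \eqref{differential-relation-J} and \eqref{bessel-der} used in the proof are classical; as a sketch, they can be checked numerically with SciPy's Bessel derivative routine \texttt{jvp} (the order and the sample points below are arbitrary test choices).

```python
# Check J_a'(t) = -J_{a+1}(t) + (a/t) J_a(t) and the higher-order identity
#   t [J_a^{(l+1)}(t) + J_{a+1}^{(l)}(t)] = (a-l) J_a^{(l)}(t) - l J_{a+1}^{(l-1)}(t).
# The order a and the sample points t are arbitrary test choices.
import numpy as np
from scipy.special import jv, jvp

a = 0.7
for t in np.linspace(0.5, 5.0, 10):
    assert abs(jvp(a, t) - (-jv(a + 1, t) + a / t * jv(a, t))) < 1e-10
    for l in (1, 2, 3):
        lhs = t * (jvp(a, t, l + 1) + jvp(a + 1, t, l))
        rhs = (a - l) * jvp(a, t, l) - l * jvp(a + 1, t, l - 1)
        assert abs(lhs - rhs) < 1e-9
print("Bessel-J recurrences verified")
```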
We will also need the asymptotics for the derivative of the differences of Jacobi polynomials. The derivative of a Jacobi polynomial can be expressed in terms of Jacobi polynomials with shifted parameters; more precisely, we have \begin{align}\label{jacobi-der} \dot{P}_n^{(\alpha, \beta)} (t) = \frac{d}{dt}\Big\{P_n^{(\alpha, \beta)} \Big\} (t) = \frac{1}{2} ( n + \alpha + \beta +1) P_{n - 1}^{(\alpha + 1, \beta +1)} (t).\end{align} Using this relation, we have
\begin{prop}\label{der-asymp}
In the regime $ x^{(n)} = 1 - \frac{z}{2 n^2},$ for $\ell \ge 0$, we have \begin{align*} \lim_{n \to \infty} n^{-2 + \ell - \alpha} \dot{\Delta}_{P,\kappa_n}^{ (\alpha, \beta; \,\ell)} (x^{(n)}) = 2^\alpha z^{\frac{-2 + \ell -\alpha}{2}} \widetilde{J}_{\alpha+1}^{(\ell)}( \kappa \sqrt{z}),\end{align*} where $\widetilde{J}_{\alpha + 1}(t) : = t J_{\alpha +1}(t)$. The formula holds uniformly in $\kappa$ and $z$ as long as $\kappa$ ranges in a compact subset of $(0, \infty)$ and $z$ ranges in a compact simply connected subset of $\C\setminus\{0\}$.
\end{prop}
\begin{proof}
The relation \eqref{jacobi-der} can be written as $$ 2 \dot{\Delta}_{P, n}^{ (\alpha, \beta; \,0)} = (n + \alpha + \beta + 1) \Delta_{P, n - 1}^{ (\alpha + 1, \beta + 1; 0)}.$$ From this formula, one readily deduces by induction that for all $\ell \ge 0$, \begin{align}\label{rel-der} 2 \dot{\Delta}_{P, n}^{( \alpha, \beta; \,\ell)} = (n + \alpha + \beta + 1) \Delta_{P, n- 1}^{( \alpha+ 1, \beta + 1; \ell)} + \ell \cdot \Delta_{P, n}^{( \alpha + 1, \beta + 1; \ell-1)}. \end{align} In view of Proposition \ref{jacobi-asymp} and identity (\ref{rel-der}), we have \begin{align*} & \lim_{n \to \infty} n^{-2 + \ell - \alpha} \dot{\Delta}_{P,\kappa_n}^{ (\alpha, \beta; \,\ell)} (x^{(n)}) \\ = & 2^\alpha z^{\frac{- 2 + \ell - \alpha}{2}} \Big[ \kappa \sqrt{z} J_{\alpha + 1}^{(\ell)} ( \kappa \sqrt{z}) + \ell J_{\alpha + 1}^{(\ell-1)}( \kappa \sqrt{z})\Big] \\ = & 2^\alpha z^{\frac{-2 + \ell -\alpha}{2}} \widetilde{J}_{\alpha+1}^{(\ell)}( \kappa \sqrt{z}). \end{align*} The last equality follows from the Leibniz formula $$\Big(t J_{\alpha+1}(t)\Big)^{(\ell)} = t J_{\alpha + 1}^{(\ell)} (t) + \ell J_{\alpha + 1}^{(\ell - 1)} (t) .$$
\end{proof}
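The differentiation formula \eqref{jacobi-der} can likewise be spot-checked numerically; below is a minimal sketch comparing it with a central finite difference (the degree, parameters, and step size are arbitrary test choices).

```python
# Check d/dt P_n^{(a,b)}(t) = (1/2)(n + a + b + 1) P_{n-1}^{(a+1,b+1)}(t)
# by central finite differences; a, b, n, h are arbitrary test choices.
import numpy as np
from scipy.special import eval_jacobi

a, b, n, h = 0.5, 0.25, 12, 1e-6
for t in np.linspace(-0.8, 0.8, 9):
    num_der = (eval_jacobi(n, a, b, t + h) - eval_jacobi(n, a, b, t - h)) / (2 * h)
    rhs = 0.5 * (n + a + b + 1) * eval_jacobi(n - 1, a + 1, b + 1, t)
    assert abs(num_der - rhs) < 1e-5 * max(1.0, abs(rhs))
print("Jacobi derivative relation verified")
```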
\subsection{Asymptotics for Higher Differences of Jacobi's Functions of the Second Kind.}
Let $Q_n^{(\alpha, \beta)} $ be the Jacobi function of the second kind, defined as follows. For $ x \in \C \setminus [-1, 1]$, $$ Q_n^{(\alpha, \beta)} (x) : = \frac{1}{2} (x - 1)^{- \alpha} (x + 1)^{-\beta} \int_{-1}^1 (1-t)^{\alpha} (1 + t )^{\beta} \frac{P_n^{(\alpha, \beta)} (t)}{x - t} dt. $$
\begin{prop}\label{asymp-Q}
Let $s > -1$ and $r_n = \frac{w}{2n^2}$ with $ w > 0$. Then
$$\lim_{n \to \infty} n^{-s} Q_{\kappa_n}^{(s)} ( 1 + r_n ) = 2^s w^{- \frac{s}{2}} K_s( \kappa \sqrt{w}),$$ where $K_s$ is the modified Bessel function of the second kind of order $s$. For any $\varepsilon > 0$, the convergence is uniform as long as $\kappa \in [\varepsilon, 1]$ and $w$ ranges in a bounded simply connected subset of $\C\setminus \{0\}$.
\end{prop}
\begin{proof}
We prove the proposition for $\kappa_n = n$; the general case is similar.
Define $t_n$ by the formula $$ 1 + r_n = \frac{1}{2} \Big( t_n + \frac{1}{t_n} \Big), \quad | t_n | < 1.$$ By definition, we have $$\lim_{n \to \infty} n ( 1 - t_n) = \sqrt{w}. $$ We now use the integral representation for the Jacobi function of the second kind (cf. \cite[4.82.4]{Szego-OP}). Write \begin{align*} Q_n^{(s)} ( 1 + r_n) = \frac{1}{2} \Big( \frac{4t_n}{ 1 - t_n}\Big)^s & \int_{-\infty}^{\infty} \Big( ( 1+ t_n ) e^{\tau} + 1 - t_n \Big)^{-s} \times \\ & \times \Big( 1 + r_n + (2r_n + r_n^2)^{\frac{1}{2}} \cosh \tau \Big)^{- n - 1} d\tau .\end{align*} Letting $ n \to \infty$ and using the integral representation for the modified Bessel function (cf. \cite[9.6.24]{Ab}), we see that \begin{align*} & \lim_{n \to \infty} n^{-s} Q_n^{(s)} ( 1 + r_n) = 2^{s - 1} w^{-\frac{s}{2}} \int_{-\infty}^{\infty} e^{-s \tau - \sqrt{w} \cosh \tau} d \tau \\ & = 2^{s - 1} w^{-\frac{s}{2}} \int_{-\infty}^{\infty} e^{- \sqrt{w} \cosh \tau} \cosh (s\tau) d \tau \\ & = 2^{s } w^{-\frac{s}{2}} \int_{0}^{\infty} e^{- \sqrt{w} \cosh \tau} \cosh (s\tau) d \tau = 2^s w^{- \frac{s}{2}} K_s( \sqrt{w}). \end{align*}
\end{proof}
\begin{prop}\label{asymp-diff-Q}
Under the same conditions as in Proposition \ref{asymp-Q}, we have, for all $ \ell \ge 0$, \begin{eqnarray}\label{asymp-Q-diff} \lim_{n \to \infty} n^{\ell - s} \Delta_{Q, \kappa_n}^{(s; \, \ell)} ( 1 + r_n ) = 2^s w^{\frac{\ell-s}{2}}K_s^{(\ell)} (\kappa \sqrt{w}), \end{eqnarray} where $K_s^{(\ell)}$ is the $\ell$-th derivative of the modified Bessel function of the second kind $K_s$. Moreover, for any $\varepsilon > 0$, the convergence is uniform as long as $\kappa \in [\varepsilon, 1]$ and $w$ ranges in a bounded simply connected subset of $\C\setminus \{0\}$.
\end{prop}
\begin{proof}
It suffices to prove the proposition in the case $ \kappa_n = n$. The general case can easily be deduced from this special case by using the uniform convergence.
From the identity \eqref{recursion-difference} we obtain $$ (n+1) \Delta_{Q, n}^{(s; \, 1)} (x) + (n + \frac{s}{2} + 1) (x-1)\Delta_{Q, n}^{(s + 1; \, 0)} (x) = s \Delta_{Q,n}^{(s;\, 0)} (x). $$ By induction, one readily obtains \begin{eqnarray} \label{rec-sec-diff} & & (n+1) \Delta_{Q, n}^{(s; \, \ell+1)} (x) + \ell \Delta_{Q, n+1}^{(s; \, \ell)} (x) + \ell (x-1) \Delta_{Q, n + 1}^{(s+1; \, \ell-1)} (x) \\ & & + ( n + \frac{s}{2} + 1) (x-1)\Delta_{Q, n}^{(s +1; \, \ell)} (x) = s \Delta_{Q, n}^{(s; \, \ell)} (x) ,\nonumber\end{eqnarray} for all $\ell \ge 0$, where by convention we set $\Delta_{Q, n}^{(s; \, -1)} := 0$.
Using the formula (\cite[9.6.26]{Ab}) \begin{align}\label{differential-relation-K} K_s'(t) = - K_{s+1}(t) + \frac{s}{t} K_s(t), \end{align} we can show that for $\ell \ge 1$, \begin{eqnarray}\label{derivative-sec-bessel} t \Big[ K_s^{(\ell+1)} (t) + K_{s+1}^{(\ell)} (t) \Big] = (s-\ell) K_s^{(\ell)}(t) - \ell K_{s+1}^{(\ell-1)} (t). \end{eqnarray}
Proposition \ref{asymp-Q} says that \eqref{asymp-Q-diff} holds for $\ell = 0$. Now assume \eqref{asymp-Q-diff} holds for $0, 1, \cdots, \ell$. By (\ref{rec-sec-diff}), we have \begin{eqnarray*} & & \lim_{n \to \infty} n^{\ell+ 1 - s} \Delta_{Q, n}^{(s; \, \ell+1)} ( 1 + r_n ) \\ & =& - \ell \cdot 2^s w^{\frac{\ell - s}{2}} K_s^{(\ell)}( \sqrt{w}) - \ell \cdot \frac{w}{2} 2^{s+1} w^{\frac{\ell - s -2}{2}} K_{s+1}^{(\ell - 1)} ( \sqrt{w}) \\ & & - \frac{w}{2} \cdot 2^{s+1} w^{\frac{\ell - s - 1}{2}} K_{s+1}^{(\ell)}( \sqrt{w}) + s \cdot 2^s w^{\frac{\ell-s }{2}} K_s^{(\ell)}(\sqrt{w}) \\ & = & 2^s w^{\frac{\ell-s}{2} } \Big[ (s - \ell ) K_s^{(\ell)}(\sqrt{w}) - \ell K_{s + 1}^{(\ell - 1)} (\sqrt{w}) - \sqrt{w} K_{s + 1}^{(\ell)} (\sqrt{w}) \Big] \\ & = & 2^s w^{\frac{\ell +1-s}{2}} K_s^{(\ell+1)}(\sqrt{w}). \end{eqnarray*} This completes the proof.
\end{proof}
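The recurrences \eqref{differential-relation-K} and \eqref{derivative-sec-bessel} for the modified Bessel function can be checked numerically with SciPy's \texttt{kvp}; a sketch with an arbitrary order and sample points:

```python
# Check K_s'(t) = -K_{s+1}(t) + (s/t) K_s(t) and
#   t [K_s^{(l+1)}(t) + K_{s+1}^{(l)}(t)] = (s-l) K_s^{(l)}(t) - l K_{s+1}^{(l-1)}(t).
# The order s and the sample points t are arbitrary test choices.
import numpy as np
from scipy.special import kv, kvp

s = 0.8
for t in np.linspace(0.5, 4.0, 8):
    assert abs(kvp(s, t) - (-kv(s + 1, t) + s / t * kv(s, t))) < 1e-9
    for l in (1, 2, 3):
        lhs = t * (kvp(s, t, l + 1) + kvp(s + 1, t, l))
        rhs = (s - l) * kvp(s, t, l) - l * kvp(s + 1, t, l - 1)
        assert abs(lhs - rhs) < 1e-8 * max(1.0, abs(rhs))
print("Bessel-K recurrences verified")
```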
\subsection{Asymptotics for Higher Differences of $R_n^{(\alpha, \beta)}$.}
\begin{defn}
Define for $ x \in \C \setminus [-1, 1]$, $$R_n^{(\alpha, \beta)}(x) : = (x - 1)^{-\alpha} (x + 1)^{- \beta } \int_{-1}^1 \frac{P_n^{(\alpha, \beta)} (t) }{(x - t)^2 } ( 1 - t)^{\alpha} ( 1 + t)^{\beta} d t. $$
\end{defn}
\begin{defn}
For any $s \in \R$, define $$L_s(x) : = s K_{s } (x) - \frac{x K_{s-1}(x) + x K_{s + 1} (x) }{2}. $$
\end{defn}
\begin{prop}
Let $ s> -1$, and $\gamma^{(n)} = \frac{u}{2n^2}$ with $ u > 0$. Then we have $$\lim_{n \to \infty} n^{-2- s}R_{\kappa_n}^{(s)} ( 1 + \gamma^{(n)} ) = 2^{\frac{2s + 3}{2}} \cdot u^{-\frac{s + 2}{2}} L_{s}(\kappa \sqrt{u}).$$ Moreover, for any $\varepsilon > 0$, the convergence is uniform as long as $ \kappa \in [\varepsilon, 1 ]$ and $ u $ ranges in a compact subset of $(0, \infty)$.
\end{prop}
\begin{proof}
The uniform convergence can be derived from a careful examination of the proof below. Granting this uniform convergence, it suffices to prove the proposition for $\kappa_n = n $.
Define $z$ by the formula $$ x = \frac{1}{2} \Big( z + \frac{1}{z} \Big), \quad | z | < 1.$$ By the integral representation for the Jacobi function of the second kind (\cite[4.82.4]{Szego-OP}), we have \begin{align*} Q_n^{(s)} ( x ) = \frac{1}{2} \Big( \frac{4z}{ 1 - z}\Big)^s & \int_{-\infty}^{\infty} \Big( ( 1+ z ) e^{\tau} + 1 -z \Big)^{-s} \times \\ & \times \Big( x + (x^2 - 1)^{\frac{1}{2}} \cosh \tau \Big)^{- n - 1} d\tau .\end{align*} Denote $$ \widehat{Q}^{(s)}_n(x) : = 2 ( x-1)^s Q_n^{(s)} (x) = \int_{-1}^1 \frac{P_n^{(s)} (t) }{ x - t } ( 1 - t)^s dt .$$ Then \begin{align}\label{QR}\left[\frac{d}{dx} \widehat{Q}_n^{(s)} \right] (x) = - \int_{-1}^1 \frac{P_n^{(s)}(t) }{(x - t)^2} ( 1- t)^s d t = - ( x - 1)^s R_n^{(s)} ( x) .\end{align} We have \begin{align*} \widehat{Q}^{(s)}_n(x) & = 2^s \int_{-\infty}^{\infty} \left( 1 + \sqrt{\frac{x + 1}{x - 1}} e^{\tau} \right)^{-s} \Big( x + (x^2 - 1)^{\frac{1}{2}} \cosh \tau \Big)^{- n - 1} d\tau.\end{align*}
Hence $$ \left[\frac{d}{dx} \widehat{Q}^{(s)}_n\right] (x) = T_1^{(n)}(x) - T_{2}^{(n)} (x) , $$ where \begin{align*} T_1^{(n)} (x) = \frac{s \cdot 2^s }{(x-1)^2} \sqrt{\frac{x - 1}{ x + 1}} \int_{-\infty}^{\infty} e^{\tau} & \left( 1 + \sqrt{\frac{x + 1}{x - 1}} e^{\tau} \right)^{-s-1} \times \\ & \times \Big( x + (x^2 - 1)^{\frac{1}{2}} \cosh \tau \Big)^{- n - 1} d\tau \end{align*} and \begin{align*} T_2^{(n)} (x) = (n+1) 2^s \int_{-\infty}^{\infty} & \left( 1 + \sqrt{\frac{x + 1}{x - 1}} e^{\tau} \right)^{-s} \Big( x + (x^2 - 1)^{\frac{1}{2}} \cosh \tau \Big)^{- n - 2} \times \\ & \times ( 1 + \frac{x}{\sqrt{ x^2 - 1}} \cosh \tau ) d\tau .\end{align*} We have \begin{align*} \lim_{n \to \infty} n^{s -2} T_1^{(n)}( 1 + \gamma^{(n)})& =\sqrt{2} s \cdot u^{\frac{s-2}{2}}\int_{-\infty}^\infty e^{-s \tau - \sqrt{u}\cosh \tau} d \tau \\ & = 2 \sqrt{2} s \cdot u^{\frac{s-2}{2} } K_s (\sqrt{u}). \end{align*} \begin{align*} \lim_{n \to \infty} n^{s -2} T_2^{(n)} (1 + \gamma^{(n)} ) & = \sqrt{2} u^{ \frac{s - 1}{2}} \int_{-\infty}^\infty e^{-s \tau} e^{- \sqrt{u} \cosh \tau} \cosh \tau d \tau \\ & = \sqrt{2 } u^{ \frac{s - 1}{2}} \left(K_{s + 1}(\sqrt{u}) + K_{s-1}(\sqrt{u})\right). \end{align*} Hence \begin{align*} & \lim_{n \to \infty} n^{s -2} \left[ \frac{d}{d x} \widehat{Q}_n^{(s)} \right] ( 1 + \gamma^{(n)}) \\ = & 2 \sqrt{2} u^{\frac{s - 2}{2}} \left( s K_s(\sqrt{u}) - \frac{ \sqrt{u}K_{s + 1} (\sqrt{u}) + \sqrt{u}K_{s-1}(\sqrt{u}) }{2}\right) \\ = & 2 \sqrt{2} u^{\frac{s-2}{2}} L_{s} (\sqrt{u}). \end{align*} In view of \eqref{QR}, we prove the desired result.
\end{proof}
{\flushleft \bf Remark. } We have the following relations \begin{align} \label{L-relation} L_s'(x) = - L_{s + 1}(x) + \frac{s }{x } L_s(x), \end{align} \begin{align} \label{L-der} x \Big[ L_s^{(\ell + 1)} (x) + L_{s + 1}^{(\ell )}(x)\Big] = (s - \ell ) L_s^{(\ell)} (x) - \ell L_{s+1}^{(\ell-1)}(x). \end{align} Let us for example show \eqref{L-relation}. The validity of \eqref{L-der} can be verified easily by induction on $\ell$. We have \begin{align*} L_s'(x) = & s K_s'(x) - \frac{x K_{s - 1}' (x) + K_{s-1}(x) + x K_{s +1}'(x) + K_{s +1} (x) }{2} \\ = & -s K_{s + 1} (x) + \frac{s^2}{x} K_s(x) - \frac{- x K_s(x) + (s - 1) K_{s - 1}(x)}{2} \\ & -\frac{ -x K_{s +2} (x) + (s+ 1) K_{s + 1}(x) }{2} \\ = & - \left( (s + 1) K_{s + 1}(x) - \frac{x K_s(x) + x K_{ s+2} (x) }{2}\right) \\ & + \frac{s}{x} \left( s K_{s } (x) - \frac{x K_{s-1}(x) + x K_{s + 1} (x) }{2} \right) \\ = & - L_{s + 1}(x) + \frac{s }{x} L_s(x) . \end{align*}
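Relation \eqref{L-relation} can also be spot-checked numerically; the sketch below compares a finite-difference derivative of $L_s$ with the right-hand side of \eqref{L-relation} (the order, step size, and sample points are arbitrary test choices).

```python
# Check L_s'(x) = -L_{s+1}(x) + (s/x) L_s(x), where
#   L_s(x) = s K_s(x) - x (K_{s-1}(x) + K_{s+1}(x)) / 2.
# The order s, step h, and sample points x are arbitrary test choices.
import numpy as np
from scipy.special import kv

def L(s, x):
    return s * kv(s, x) - x * (kv(s - 1, x) + kv(s + 1, x)) / 2

s, h = 0.6, 1e-6
for x in np.linspace(0.5, 3.0, 6):
    dL = (L(s, x + h) - L(s, x - h)) / (2 * h)  # numerical L_s'(x)
    assert abs(dL - (-L(s + 1, x) + s / x * L(s, x))) < 1e-5
print("L_s relation verified")
```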
\begin{prop}
Let $ s> -1$, and $\gamma^{(n)} = \frac{u}{2n^2}$ with $ u > 0$. Then for $\ell \ge 0$, we have $$\lim_{ n \to \infty} n^{\ell - s-2} \Delta_{R, \kappa_n}^{(s; \, \ell)} ( 1 + \gamma^{(n)}) = 2^{\frac{2s + 3}{2}} \cdot u^{\frac{ \ell - s-2 }{2}} L_{s}^{(\ell)}(\kappa \sqrt{u}) . $$ Moreover, for any $\varepsilon > 0$, the convergence is uniform as long as $ \kappa \in [\varepsilon, 1 ]$ and $ u $ ranges in a compact subset of $(0, \infty)$.
\end{prop}
\begin{proof}
Again, we prove the proposition only for $\kappa_n = n$. The formula holds for $\ell = 0$. Assume that the formula holds for all $0, 1, \cdots, \ell$; we shall show that it holds for $\ell + 1$. By arguments similar to those for $ \Delta_{Q, n}^{(s; \, \ell)}$, we easily obtain \begin{align*} & (n+1) \Delta_{R, n}^{(s; \, \ell+1)} (x) + \ell \Delta_{R, n+1}^{(s; \, \ell)} (x) + \ell (x-1) \Delta_{R, n + 1}^{(s+1; \, \ell-1)} (x) \\ & + ( n + \frac{s}{2} + 1) (x-1)\Delta_{R, n}^{(s +1; \, \ell)} (x) = s \Delta_{R, n}^{(s; \, \ell)} (x) . \end{align*} Hence, using \eqref{L-der} in the last step, \begin{align*} & \lim_{n \to \infty} n^{(\ell +1) - s-2} \Delta_{R, n}^{(s; \, \ell + 1)} ( 1 + \gamma^{(n)}) \\ = & (s- \ell) 2^{\frac{2s + 3}{2}} \cdot u^{\frac{ \ell - s - 2}{2}} L_{s}^{(\ell)}( \sqrt{u}) - \ell \frac{u }{2} \cdot 2^{\frac{2s + 5}{2}} \cdot u^{\frac{ \ell - s - 4}{2}} L_{s + 1}^{(\ell-1)}( \sqrt{u}) \\ & - \frac{u }{2} \cdot 2^{\frac{2s + 5}{2}} \cdot u^{\frac{ \ell - s - 3}{2}} L_{s+1}^{(\ell)}( \sqrt{u}) \\ = & 2^{\frac{2s + 3}{2}} \cdot u^{\frac{ \ell - s -1 }{2}} \left( \frac{s - \ell}{\sqrt{u}} L_s^{(\ell)} (\sqrt{u}) - \frac{\ell}{\sqrt{u}} L_{ s + 1}^{(\ell-1)} (\sqrt{u}) - L_{s+1}^{(\ell)} (\sqrt{u}) \right) \\ = & 2^{\frac{2s + 3}{2}} \cdot u^{\frac{(\ell +1) - s -2}{2}} L_{s}^{(\ell + 1)} (\sqrt{u}).
\end{align*}
\end{proof}
\section{Bessel Point Processes as Radial Parts of Pickrell Measures on Infinite Matrices}
\subsubsection{Radial parts of Pickrell measures and the infinite Bessel point processes}
Following Pickrell, we introduce a map $$\mathfrak{rad}_n: \text{Mat}(n, \C) \rightarrow \R_{+}^n $$ by the formula $$ \mathfrak{rad}_n(z) = (\lambda_1 (z^*z), \dots, \lambda_n(z^*z)) .$$ Here $ (\lambda_1 (z^*z), \dots, \lambda_n(z^*z))$ is the collection of the eigenvalues of the matrix $z^*z$ arranged in non-decreasing order.
The radial part of the Pickrell measure $\mu_n^{(s)}$ is defined as $(\mathfrak{rad}_n )_{* } \mu_n^{(s)}$. Note that, since finite-dimensional unitary groups are compact, even for $s \le -1$, if $n + s > 0$, then the radial part of $\mu_n^{(s)}$ is well-defined.
Denote by $d\lambda$ the Lebesgue measure on $\R_{+}^n$; then the radial part of the measure $\mu_n^{(s)}$ takes the form $$ \text{const}_{n, s} \cdot \prod_{1 \le i< j \le n} (\lambda_i - \lambda_j)^2 \cdot \prod_{i = 1}^n \frac{1}{( 1+ \lambda_i)^{2n + s}} \, d\lambda.$$ After the change of variables $$u_i = \frac{\lambda_i -1}{ \lambda_i + 1}, $$ the radial part $(\mathfrak{rad}_n)_{*} \mu_n^{(s)} = (\mathfrak{rad}_n \circ \pi_n)_{*} \mu^{(s)}$ is a measure defined on $(-1, 1)^n$ by the formula \begin{align} \label{jacobi-measure} \text{const}_{n,s} \cdot \prod_{1\le i < j \le n} (u_i - u_j)^2 \cdot \prod_{ i = 1}^n ( 1- u_i)^s du_i. \end{align}
For $ s > -1$, the constant is chosen so that the measure \eqref{jacobi-measure} is a probability measure; this is the Jacobi orthogonal polynomial ensemble, a determinantal point process induced by the $n$-th Christoffel-Darboux projection operator for Jacobi polynomials. The classical Mehler-Heine asymptotics of Jacobi polynomials imply that these determinantal point processes, when rescaled with the scaling \begin{align}\label{rescal} u_i = 1 - \frac{y_i}{2n^2}, \quad i = 1, \dots, n, \end{align} have as scaling limit the Bessel point process of Tracy and Widom \cite{Tracy-Widom94}. Using the same notation as in \cite{Bufetov_inf-deter}, we denote this point process on $ (0, \infty)$ by $\widetilde{\mathbb{B}}^{(s)}$.
For $ s \le -1$, the scaling limit under the scaling regime \eqref{rescal} is an infinite determinantal measure $\widetilde{\mathbb{B}}^{(s)}$ on $\Conf((0, \infty))$.
In both cases, $\widetilde{\mathbb{B}}^{(s)}$ is closely related to the decomposition measure $\mathbb{B}^{(s)}$ for the Pickrell measure $\mu^{(s)}$: the change of variable $y = 4/x$ reduces the decomposition measure $\mathbb{B}^{(s)}$ to $\widetilde{\mathbb{B}}^{(s)}$.
\subsubsection{Christoffel-Uvarov deformations of Jacobi orthogonal polynomial ensembles and the scaling limits}
Now consider a sequence of functions $ g^{(n)}: (-1, 1) \rightarrow (0, 1]$ such that the measures $(1 - u)^s g^{(n)} (u) du $ on $(-1,1)$ have moments of all orders. On the cube $(-1, 1)^n$, the probability measure $$ \text{const}_{n,s} \cdot \prod_{ 1 \le i < j \le n} (u_i - u_j)^2 \prod_{i = 1}^n ( 1-u_i)^s g^{(n)} (u_i) d u_i $$ gives a determinantal point process induced by the corresponding $n$-th Christoffel-Darboux projection. After the change of variable \begin{align} \label{rescal2} u = \frac{n^2 x - 1}{n^2 x + 1}, \end{align} this point process becomes \begin{align} \label{multi-change} \mathbb{P}_{\widetilde{g}^{(n)}}^{(s,n)} : = \frac{\Psi_{\widetilde{g}^{(n)}} \mathbb{B}^{(s,n )}}{ \int_{ \text{Conf} \big((0, + \infty)\big)} \Psi_{\widetilde{g}^{(n)}} d \mathbb{B}^{(s,n )} }, \end{align}
where $ \mathbb{B}^{(s,n)}$ is the point process $(\mathfrak{rad}_n \circ \pi_n)_{*}\mu^{(s)}$ after the change of variable given in \eqref{rescal2} and $\widetilde{g}^{(n)}$ is the function on $(0, \infty)$ given by \begin{align} \label{functions-g}\widetilde{g}^{(n)} ( x ) = g^{(n)} ( \frac{n^2 x-1}{n^2 x+ 1}).\end{align}
We shall need the following elementary lemma, whose routine proof is included for completeness.
\begin{lem}\label{DC-lem}
Let $(\Omega, m)$ be a measure space with a $\sigma$-finite measure $m$, and let $(F_n)_{n=1}^\infty$ and $(f_n)_{n = 1}^{\infty}$ be two sequences of positive integrable functions satisfying \begin{itemize} \item[(a)] for any $ n \in \N$, $f_n \le F_n$.
\item[(b)] $\lim_{n \to \infty} f_n = f$ a.e.\ and $\lim_{n \to \infty} F_n = F$ a.e.
\item[(c)] $ \lim_{n \to \infty} \int F_n dm = \int F dm < \infty$ .
\end{itemize}
Then $$ \lim_{n \to \infty} \int f_n dm= \int f dm.$$
\end{lem}
\begin{proof}
By Fatou's lemma, we have \begin{align*} \int f d m \le \liminf_{n \to \infty} \int f_n dm. \end{align*} Applying Fatou's lemma again, this time to the nonnegative sequence $F_n - f_n$, we have \begin{align*} \int ( F - f) dm \le \liminf_{n \to \infty} \int (F_n - f_n) dm = \int F d m - \limsup_{n \to \infty} \int f_n dm. \end{align*} Hence \begin{align*} \limsup_{n \to \infty} \int f_n dm \le \int f d m . \end{align*} Combining these inequalities, we get the desired result.
\end{proof}
The following three kinds of auxiliary functions are considered:
\begin{align} \label{g_I}g_I^{(n)} (u) = \prod_{i = 1}^m \frac{(1 - \frac{w_i}{2n^2} - u)^2}{ (1 - u)^2 } , \, w_i \ne w_j ; \end{align}
\begin{align} \label{g_II} g_{II}^{(n)} (u) = \prod_{i = 1}^m \frac{1 - u }{ 1 + \frac{v_i}{2n^2} - u }, \, v_i \ne v_j ; \end{align}
\begin{align} \label{g_III}g_{III}^{(n)} (u) = \left( \frac{1-u}{ 1 + \frac{v}{2n^2} -u } \right)^m .\end{align}
Let $\widetilde{g}_I^{(n)} ( x )$ denote the function given by $\widetilde{g}_I^{(n)} ( x ) = g^{(n)}_I ( \frac{n^2 x-1}{n^2 x+ 1}).$ Similarly, let $ \widetilde{g}_{II}^{(n)} ( x ) = g^{(n)}_{II} ( \frac{n^2 x-1}{n^2 x+ 1})$ and $\widetilde{g}_{III}^{(n)} ( x ) = g^{(n)}_{III} ( \frac{n^2 x-1}{n^2 x+ 1})$.
If $ \widetilde{g}^{(n)} $ is one of the functions $\widetilde{g}_I^{(n)}$, $\widetilde{g}_{II}^{(n)}$, $\widetilde{g}_{III}^{(n)}$, then there exist a positive function $ g: (0, \infty) \rightarrow [0, 1] $ and a constant $M > 0$ satisfying \begin{itemize}
\item[(a)] $\lim_{n \to \infty}\widetilde{g}^{(n)} (x) = g(x).$
\item[(b)] for any (finite or infinite) sequence of positive real numbers $(x_i)_{i = 1}^N$, we have $$\prod_{ i = 1}^N \widetilde{g}^{(n)} (x_i) \le M \cdot \prod_{i = 1}^N g(x_i). $$
\item[(c)] for any sequence $\{(x_i^{(n)})_{1 \le i \le n }\}_{n=1}^\infty$ satisfying $x_i^{(n)} \ge 0$, $$\lim_{n \to \infty} x_i^{(n)} = x_i \, \text{ and } \, \lim_{n \to \infty} \sum_{i = 1}^n x_i^{(n)} = \sum_{i = 1}^\infty x_i< \infty,$$ we have $$ \lim_{n \to \infty} \prod_{ i = 1}^n \widetilde{g}^{(n)} (x_i^{(n)} ) = \prod_{ i = 1}^{\infty} g(x_i). $$
\end{itemize} The limiting functions are \begin{align}\label{asym-g_I} g_I (x) = \prod_{i = 1}^m (1 - \frac{w_i }{4} x ) ^2 & \sim 1 - \left(\sum_{i } \frac{w_i}{2}\right) x , \text{ as } x \to 0+ ; \\ \label{asym-g_II}g_{II}(x) = \prod_{i = 1}^m \frac{4}{4 + v_i x } & \sim 1 - \left(\sum_i \frac{v_i}{4} \right)x , \text{ as } x \to 0+ ; \\ \label{asym-g_III} g_{III}(x) = \left(\frac{4}{4 + v x }\right)^m & \sim 1 - m \cdot \frac{v}{4} x , \text{ as } x \to 0+ . \end{align}
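Property (a) in case I, together with the limit \eqref{asym-g_I}, can be illustrated numerically; the values of $w_i$ and $x$ below are arbitrary test choices.

```python
# Check the pointwise limit g_I^{(n)}((n^2 x - 1)/(n^2 x + 1)) -> g_I(x)
# for case I; the values of w_i and x are arbitrary test choices.
import numpy as np

w = np.array([1.0, 3.0])  # distinct w_i > 0

def g_I_n(u, n):
    return float(np.prod(((1 - w / (2 * n**2) - u) / (1 - u))**2))

def g_I(x):
    return float(np.prod((1 - w * x / 4)**2))

for x in (0.3, 1.0, 2.5):
    errs = [abs(g_I_n((n**2 * x - 1) / (n**2 * x + 1), n) - g_I(x))
            for n in (10, 100, 1000)]
    assert errs[-1] < 1e-3 and errs[-1] < errs[0]
print("case-I limit verified")
```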
\begin{prop}\label{abstract-result}
Assume we are in one of the following situations: \begin{itemize} \item [I.] $g^{(n)} = g_I^{(n)}$ and $ g = g_I$ with $ s- 2m > -1$, \item[II.] $ g^{(n)} = g_{II}^{(n)}$ and $g = g_{II}$ with $m + s > -1$, \item[III.] $ g^{(n)} = g_{III}^{(n)}$ and $g= g_{III} $ with $ m + s > -1$. \end{itemize} Then the determinantal probability measure in \eqref{multi-change} converges weakly in $\mathfrak{M}_{\textnormal{fin}} (\textnormal{Conf} ((0, + \infty)))$ to \begin{align} \mathbb{P}_{g}^{(s)} : = \frac{\Psi_{g} \mathbb{B}^{(s )}}{ \int_{ \textnormal{Conf} \left((0, \infty)\right)} \Psi_{g} d \mathbb{B}^{(s )} }.\end{align}
\end{prop}
\begin{proof}
We will use the notation in \cite{Bufetov_inf-deter}, where, following \cite{BO-infinite-matrices}, it is proved that the measure $\mu^{(s)}$ is supported on the subset $\text{Mat}_{\text{reg}} (\N, \C)$ for any $s \in \R$. By the remarks preceding the proposition, for any $ z \in \text{Mat}_{\text{reg}} (\N, \C)$, we have $$\lim_{n \to \infty} \Psi_{\widetilde{g}^{(n)}} ( \mathfrak{r}^{(n)} (z) ) = \Psi_g(\mathfrak{r}^{\infty} (z)), $$ and $$\Psi_{\widetilde{g}^{(n)}} (\mathfrak{r}^{(n)} (z) ) \le M \cdot \Psi_{g} (\mathfrak{r}^{(n)} (z) ). $$ Now take any bounded and continuous function $f$ on $\text{Conf}((0, \infty))$, we have \begin{align*} \int f(X) d \mathbb{P}_{\widetilde{g}^{(n)}}^{(s,n)} (X) = \frac{ \int_{\text{Mat}_{\text{reg}}(\N, \C)} f(\mathfrak{r}^{(n)} (z)) \Psi_{\widetilde{g}^{(n)}} (\mathfrak{r}^{(n)} (z) ) d \mu^{(s)} (z)}{\int_{\text{Mat}_{\text{reg}}(\N, \C)} \Psi_{\widetilde{g}^{(n)}} (\mathfrak{r}^{(n)} (z) ) d \mu^{(s)} (z) }. \end{align*} By Lemma \ref{DC-lem}, it suffices to show that \begin{align}\label{wanted} \lim_{n \to \infty}\int_{\text{Mat}(\N, \C)} \Psi_{g} (\mathfrak{r}^{(n)} (z) ) d \mu^{(s)} (z) = \int_{\text{Mat}(\N, \C)} \Psi_{g} (\mathfrak{r}^{(\infty)} (z) ) d \mu^{(s)} (z) . \end{align}
If $s> -1$, the measure $\mu^{(s)}$ is a probability measure, and by the dominated convergence theorem, the equality \eqref{wanted} holds.
If $s \le -1$, the measure $\mu^{(s)}$ is infinite. The radial part of $\mu^{(s)}$ is an infinite determinantal process which corresponds to a finite-rank perturbation of determinantal probability measures as described in \S 5.2 in \cite{Bufetov_inf-deter}. By using the asymptotic formulae \eqref{asym-g_I}, \eqref{asym-g_II} and \eqref{asym-g_III} respectively in these three cases, we can check that the conditions of Proposition 3.6 in \cite{Bufetov_inf-deter} are satisfied. For instance, let us check the following condition \begin{align}\label{UI-condition} \lim_{n \to \infty} \tr \sqrt{1 - g} \Pi^{(s, n)} \sqrt{1-g} = \tr \sqrt{1- g} \Pi^{(s)} \sqrt{1-g},\end{align} where $\Pi^{(s, n)} $ is the orthogonal projection onto the subspace $L^{(s + 2 n_s, n - n_s)}$ described in \S 5.2.1 in \cite{Bufetov_inf-deter}. Combining the estimates given in Proposition 5.11 and Proposition 5.13 in \cite{Bufetov_inf-deter}, the integrands appearing in $$\tr \sqrt{1-g} \Pi^{(s,n)} \sqrt{1-g}$$ are uniformly integrable; hence by the classical Mehler-Heine asymptotics, the equality \eqref{UI-condition} indeed holds. Now by Corollary 3.7 in \cite{Bufetov_inf-deter}, we have \begin{align*} \frac{\Psi_g \mathbb{B}^{(s,n)}}{ \int_{\textnormal{Conf}\big( (0, \infty) \big)} \Psi_g d \mathbb{B}^{(s,n)}} \rightarrow \frac{\Psi_g \mathbb{B}^{(s)}}{ \int_{\textnormal{Conf}\big( (0, \infty) \big)} \Psi_g d \mathbb{B}^{(s)}} . \end{align*} It follows that \begin{align}\label{tight} \begin{split} & \lim_{n \to \infty}\frac{ \int_{\text{Mat}(\N, \C)} f(\mathfrak{r}^{(n)} (z)) \Psi_{g} (\mathfrak{r}^{(n)} (z) ) d \mu^{(s)} (z)}{\int_{\text{Mat}(\N, \C)} \Psi_{g} (\mathfrak{r}^{(n)} (z) ) d \mu^{(s)} (z) } \\ = & \frac{ \int_{\text{Mat}(\N, \C)} f(\mathfrak{r}^{(\infty)} (z)) \Psi_{g} (\mathfrak{r}^{(\infty)} (z) ) d \mu^{(s)} (z)}{\int_{\text{Mat}(\N, \C)} \Psi_{g} (\mathfrak{r}^{(\infty)} (z) ) d \mu^{(s)} (z) }. 
\end{split} \end{align} Moreover, by Lemma 1.14 in \cite{Bufetov_inf-deter}, there exists a positive bounded continuous function $f$ such that \begin{align*} \lim_{n \to \infty} \int_{\text{Mat}(\N, \C)} f(\mathfrak{r}^{(n)} (z)) d \mu^{(s)} (z) = \int_{\text{Mat}(\N, \C)} f(\mathfrak{r}^{(\infty)} (z)) d \mu^{(s)} (z). \end{align*} Again by Lemma \ref{DC-lem}, we have \begin{align}\label{fg} \begin{split} & \lim_{n \to \infty} \int_{\text{Mat}(\N, \C)} f(\mathfrak{r}^{(n)} (z)) \Psi_{g} (\mathfrak{r}^{(n)} (z) ) d \mu^{(s)} (z) \\ = & \int_{\text{Mat}(\N, \C)} f(\mathfrak{r}^{(\infty)} (z)) \Psi_{g} (\mathfrak{r}^{(\infty)} (z) ) d \mu^{(s)} (z).\end{split} \end{align} Finally, \eqref{wanted} follows from \eqref{tight} and \eqref{fg}, as desired.
\end{proof}
\begin{rem}
Note that \begin{align*} \frac{n^2 x - 1}{ n^2 x + 1} = 1 - \frac{4/x}{2n^2 + 2/x} \sim 1 - \frac{4/x}{2n^2}. \end{align*} Thus under change of variable $y = 4/x$, in the sequel, we only consider the scaling regimes of type $$x = 1 - \frac{z}{2n^2}. $$
\end{rem}
\section{Scaling Limits of Christoffel-Uvarov deformations of Jacobi Orthogonal Polynomial Ensembles.}
In this section, we calculate explicitly the kernels for the determinantal probabilities $\mathbb{P}_g^{(s)}$ given in Proposition \ref{abstract-result}. To avoid extra notation, we mention here that in the sequel, in case I the parameter $s$ corresponds to $s - 2m$ in Proposition \ref{abstract-result}, while in cases II and III it corresponds to $s + m$ in Proposition \ref{abstract-result}. In case III, we give the result only for $ m = 2$.
Observe that in the new coordinate $x= \rho (y)$, the kernel $K(x_1, x_2)$ for a locally trace class operator on $L^2(\R_+)$ reduces to $$\sqrt{\rho'(y_1) \rho'(y_2)} K(\rho(y_1), \rho(y_2)).$$
\subsection{Explicit Kernels for Scaling Limit: Case I}
Let $s > -1$. Consider a sequence $\xi^{(n)} = (\xi_1^{(n)}, \cdots, \xi_m^{(n)})$ of $m$-tuples of distinct real numbers in $(-1, 1)$. Let $w_s^{[\xi^{(n)}]}$ be the weight on $(-1, 1)$ given by $$w_s^{[\xi^{(n)}]}(t) = \prod_{i =1}^m (\xi_i^{(n)} - t)^2 \cdot w_s(t) = \prod_{i =1}^m (\xi_i^{(n)} - t)^2 \cdot ( 1 - t)^s .$$
Let $K_n^{[s, \xi^{(n)}]}(x_1, x_2)$ denote the $n$-th Christoffel-Darboux kernel for the weight $w_s^{[\xi^{(n)}]}$. The aim of this section is to establish the scaling limit of $K_n^{[s, \xi^{(n)}]}(x_1, x_2)$ in the following regime: \begin{align}\label{regime-case1} \begin{split} \xi^{(n)}_i = 1 - \frac{w_i}{2 n^2}, 1 \le i \le m, & \text{ $w_i > 0$ are all distinct;} \\ x_i^{(n)} = 1 - \frac{z_i}{2n^2}, & \quad z_i > 0, i = 1, 2. \end{split}
\end{align}
\subsubsection{Explicit formulae for orthogonal polynomials and Christoffel-Darboux kernels. }
Let $ ( \pi_j^{[s, \xi^{(n)}]})_{j \ge 0} $ denote the system of monic orthogonal polynomials associated with the weights $w_s^{[\xi^{(n)}]}$. To simplify notation, if there is no confusion, we denote $ \pi_j^{[s, \xi^{(n)}]}$ by $\pi_j^{(n)}$.
The monic polynomials $\pi_j^{(n)}$ are given by the Christoffel formula (\cite[Thm 2.5]{Szego-OP}): \begin{align*} \pi_j^{(n)}(t) = \frac{1}{\prod_{i = 1}^m (\xi^{(n)}_i-t)^2} \cdot \frac{D_j^{(n)} (t) }{ k_{j+2m}^{(s)} \cdot \delta_j^{(n)}},\end{align*} where
$$ D_j^{(n)} (t) = \left | \begin{array}{cccc} P_j^{(s)} (\xi^{(n)}_1) & P_{j+1}^{(s)}(\xi_1^{(n)}) & \cdots & P_{j+2 m}^{(s)}(\xi^{(n)}_1) \vspace{3mm}
\\ \vdots & \vdots & & \vdots
\\ P_j^{(s)} (\xi^{(n)}_m) & P_{j+1}^{(s)}(\xi_m^{(n)}) & \cdots & P_{j+2 m}^{(s)}(\xi^{(n)}_m) \vspace{3mm}
\\ \dot{ P}_j^{(s)} (\xi^{(n)}_1) & \dot{P}_{j+1}^{(s)}(\xi_1^{(n)}) & \cdots & \dot{P}_{j+2 m}^{(s)}(\xi^{(n)}_1) \vspace{3mm}
\\ \vdots & \vdots & & \vdots
\\ \dot{P}_j^{(s)} (\xi^{(n)}_m) & \dot{P}_{j+1}^{(s)}(\xi_m^{(n)}) & \cdots & \dot{P}_{j+2 m}^{(s)}(\xi^{(n)}_m) \vspace{3mm}
\\ P_j^{(s)} (t) & P_{j+1}^{(s)}(t) & \cdots & P_{j+2 m}^{(s)}(t) \end{array} \right |;$$ and
$$\delta_j^{(n)} = \left | \begin{array}{cccc} P_j^{(s)} (\xi^{(n)}_1) & P_{j+1}^{(s)}(\xi_1^{(n)}) & \cdots & P_{j+2 m-1}^{(s)}(\xi^{(n)}_1) \vspace{3mm}
\\ \vdots & \vdots & & \vdots
\\ P_j^{(s)} (\xi^{(n)}_m) & P_{j+1}^{(s)}(\xi_m^{(n)}) & \cdots & P_{j+2 m-1}^{(s)}(\xi^{(n)}_m) \vspace{3mm}
\\ \dot{ P}_j^{(s)} (\xi^{(n)}_1) & \dot{P}_{j+1}^{(s)}(\xi_1^{(n)}) & \cdots & \dot{P}_{j+2 m -1}^{(s)}(\xi^{(n)}_1) \vspace{3mm}
\\ \vdots & \vdots & & \vdots
\\ \dot{P}_j^{(s)} (\xi^{(n)}_m) & \dot{P}_{j+1}^{(s)}(\xi_m^{(n)}) & \cdots & \dot{P}_{j+2 m - 1}^{(s)}(\xi^{(n)}_m) \end{array} \right |. $$
\begin{defn} Let $ h_j^{[s, \xi^{(n)}]} = \int_{-1}^1 \Big\{ \pi_j^{(n)} (t) \Big\}^2 w_s^{[\xi^{(n)}]}(t)dt .$
\end{defn}
\begin{prop}
For any $j \ge 0$, we have
$$h_{j}^{[s, \xi^{(n)}]} = \frac{h_{j}^{(s)}}{k_{j}^{(s)} k_{j+2m}^{(s)}}\cdot \frac{\delta_{j+1}^{(n)}}{\delta_{j}^{(n)}}.$$
\end{prop}
\begin{proof}
By orthogonality, for any $\ell \ge 1$, we have $$\int_{-1}^1 P^{(s)}_{j+\ell}(t) \pi_j^{(n)}(t) w_s(t) dt = 0.$$ Note that $$D_j^{(n)}(t) = \delta_{j+1}^{(n)} P_j^{(s)}(t) + \text{ linear combination of $ P_{j + 1}^{(s)}, \cdots, P_{j+2m}^{(s)}$.} $$ Hence
\begin{align*} h_{j}^{[s, \xi^{(n)}]} & = \frac{1}{k_{j+ 2m}^{(s)} \delta_{j}^{(n)}} \int D_{j}^{(n)} (t) \pi_j^{(n)} (t) w_s(t) dt \\ & = \frac{1}{k_{j + 2m}^{(s)} \delta_{j}^{(n)}} \int \delta_{j+1}^{(n)} P_{j}^{(s)} (t) \pi_j^{(n)} (t) w_s(t) dt \\ & = \frac{\delta_{j+1}^{(n)} }{k_{j + 2m}^{(s)} \delta_{j}^{(n)}} \int \Big\{P_{j}^{(s)} (t) \Big\}^2 \frac{1}{k_j^{(s)}}w_s(t) dt \\ & = \frac{h_{j}^{(s)}}{k_{j}^{(s)} k_{j+2m}^{(s)}}\cdot \frac{\delta_{j+1}^{(n)}}{\delta_{j}^{(n)}}.\end{align*}
\end{proof}
By the Christoffel-Darboux formula (cf. \cite[Thm 3.2.2]{Szego-OP}), we have: \begin{align*} & K_n^{[s, \xi^{(n)}]} (x_1^{(n)}, x_2^{(n)}) = \sqrt{w_s^{[\xi^{(n)}]} (x_1^{(n)}) w_s^{[\xi^{(n)}]} (x_2^{(n)}) }\cdot \sum_{ j = 0}^{n-1} \frac{\pi_j^{(n)} (x_1^{(n)}) \cdot \pi_j^{(n)}(x_2^{(n)}) }{h_j^{[s, \xi^{(n)}]}} \\ & = \frac{\sqrt{w_s^{[\xi^{(n)}]} (x_1^{(n)}) w_s^{[\xi^{(n)}]} (x_2^{(n)}) } }{h_{n-1}^{[s, \xi^{(n)}]}} \cdot \frac{\pi_n^{(n)}(x_1^{(n)}) \cdot \pi_{n-1}^{(n)} (x_2^{(n)}) - \pi_n^{(n)}(x_2^{(n)}) \cdot \pi_{n-1}^{(n)} (x_1^{(n)})}{ x_1^{(n)} - x_2^{(n)}}. \end{align*}
After the change of variables $x_i^{(n)} = 1 - \frac{z_i }{2n^2}, z_i \in [ 0, 4n^2], i = 1, 2$, and with $\xi^{(n)}$ of the form prescribed by the regime \eqref{regime-case1}, these kernels can be written as: \begin{align}\label{C-D-good} & \widetilde{K}_n^{[s, \xi^{(n)}]} (z_1, z_2) = \frac{1}{2 n^2} K_n^{[s, \xi^{(n)}]} \Big( 1 - \frac{z_1}{2 n^2}, 1 - \frac{z_2}{2 n^2}\Big) \\ & = \frac{(z_1 z_2)^{\frac{s}{2}}}{\left| \prod_{i=1}^m (z_1 - w_i) (z_2 - w_i )\right|} \cdot S_n(z_1, z_2), \nonumber \end{align} where \begin{align}\label{S-good-1} S_n(z_1, z_2) = (2n^2)^{2m -s -1} \sum_{j = 0}^{n-1} \frac{D_j^{(n)} (1- \frac{z_1}{2n^2} ) D_j^{(n)} (1- \frac{z_2}{2n^2} )}{ \frac{h_j^{(s)} k_{j+2m}^{(s)} }{ k_j^{(s)} } \delta_j^{(n)} \delta_{j+1}^{(n)} }, \end{align} or equivalently \begin{align}\label{S-good-2} & \quad S_n(z_1, z_2) = \frac{(2n^2)^{2m-s}}{ \frac{h_{n-1}^{(s)} k_{n+2m}^{(s)} }{ k_{n-1}^{(s)} } \Big[ \delta_n^{(n)} \Big]^2 } \times \\ & \times \frac{ D_n^{(n)}(1- \frac{z_1}{2n^2} ) \cdot D_{n-1}^{(n)} ( 1- \frac{z_2}{2n^2} ) - D_n^{(n)}(1- \frac{z_2}{2n^2} ) \cdot D_{n-1}^{(n)} ( 1- \frac{z_1}{2n^2} ) }{z_2 - z_1 }. \nonumber \end{align}
\subsubsection{Scaling limits.} To obtain the scaling limit of the Christoffel-Darboux kernels $\widetilde{K}_n^{[s, \xi^{(n)}]} (z_1, z_2)$, we investigate the asymptotics of the formulae \eqref{S-good-1} and \eqref{S-good-2}. The two representations yield two different forms of the scaling limit: the representation \eqref{S-good-2} leads to an integrable form, while \eqref{S-good-1} leads to an integral form.
The following formulae are well-known (\cite[p.63, p.68]{Szego-OP}): \begin{align}\label{leading-norm} k_j^{(s)} = \frac{1}{2^j \cdot j !} \frac{\Gamma(2 j+s + 1)}{\Gamma( j + s + 1)}, \quad h_j^{(s)} = \frac{2^{s+1}}{2 j + s +1}. \end{align} The following lemma will be used frequently in the sequel.
\begin{lem}
Let $ p\in \Z$. Then $$\lim_{n \to \infty} \frac{k_{\kappa_n + p}^{(s)}}{ k_{\kappa_n}^{(s)}} = 2^p.$$
\end{lem}
\begin{proof}
This is an easy consequence of \eqref{leading-norm} and Stirling's approximation formula for the Gamma function; alternatively, one may use the following convenient formula: $$ \text{ for all } a \in \R, \quad\lim_{n \to \infty} \frac{\Gamma( n + a)}{n^a \Gamma(n)} = 1 . $$
\end{proof}
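For instance, for $p = 1$ the lemma can be verified directly from \eqref{leading-norm}:
$$ \frac{k_{j+1}^{(s)}}{k_{j}^{(s)}} = \frac{2^j \, j!}{2^{j+1} (j+1)!} \cdot \frac{\Gamma(2j+s+3)}{\Gamma(2j+s+1)} \cdot \frac{\Gamma(j+s+1)}{\Gamma(j+s+2)} = \frac{(2j+s+1)(2j+s+2)}{2 (j+1)(j+s+1)} \xrightarrow[j \to \infty]{} 2, $$
and the case of a general $p \in \Z$ follows by telescoping this ratio over $|p|$ steps.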
\begin{prop}\label{C}
If the sequence $\xi^{(n)}$ satisfies \eqref{regime-case1} , then $$ \lim_{n \to \infty} n^{2m^2 -2sm-3m} \delta_{\kappa_n}^{(n)} = \frac{2^{2ms}}{(w_1 \cdots w_m)^{1+s}} C_I^{(s, w)}(\kappa), $$ where $$C_I^{(s,w)}(\kappa) =W(J_{s, w_1}, \cdots, J_{s, w_m}, \widetilde{J}_{s+1, w_1}, \cdots \widetilde{J}_{s+1, w_m})(\kappa) $$ and $$J_{s, w_i} (\kappa) : = J_s(\kappa \sqrt{w_i}), \quad \widetilde{J}_{s+1, w_i} (\kappa)= \widetilde{J}_{s+1} (\kappa \sqrt{w_i}).$$ Moreover, the convergence is uniform as long as $\kappa$ is in a compact subset of $(0, \infty)$.
\end{prop}
\begin{proof}
To simplify notation, we denote $\Delta_{P, n}^{(s; \, \ell)}$ by $\Delta_{P, n}^{[\ell]}$ in this proof. By the multi-linearity of the determinant on columns, we have \begin{eqnarray*} \delta_{\kappa_n}^{(n)} = \left | \begin{array}{cccc} \Delta_{P, \kappa_n}^{[0]} (\xi^{(n)}_1) & \Delta_{P, \kappa_n}^{[1]} (\xi^{(n)}_1) & \cdots & \Delta_{P, \kappa_n}^{[2m-1]} (\xi^{(n)}_1) \vspace{3mm}
\\ \vdots & \vdots & & \vdots
\\ \Delta_{P, \kappa_n}^{[0]} (\xi^{(n)}_m) & \Delta_{P, \kappa_n}^{[1]} (\xi^{(n)}_m) & \cdots & \Delta_{P, \kappa_n}^{[2m-1]} (\xi^{(n)}_m) \vspace{3mm}
\\ \dot{\Delta}_{P, \kappa_n}^{[0]} (\xi^{(n)}_1) & \dot{\Delta}_{P, \kappa_n}^{[1]} (\xi^{(n)}_1) & \cdots & \dot{\Delta}_{P, \kappa_n}^{[2m-1]} (\xi^{(n)}_1) \vspace{3mm}
\\ \vdots & \vdots & & \vdots
\\ \dot{\Delta}_{P, \kappa_n}^{[0]} (\xi^{(n)}_m) & \dot{\Delta}_{P, \kappa_n}^{[1]} (\xi^{(n)}_m) & \cdots & \dot{\Delta}_{P, \kappa_n}^{[2m-1]} (\xi^{(n)}_m) \end{array} \right | .\end{eqnarray*} Multiplying the matrix in the above formula on the right by the diagonal matrix $\text{diag}( n^{-s}, n^{1-s}, \cdots, n^{2m-1-s})$, on the left by the diagonal matrix $\text{diag}(\underbrace{1, \cdots, 1}_{\text{$m$ terms }}, \underbrace{n^{-2}, \cdots, n^{-2}}_{\text{$m$ terms} })$, and taking determinants, we obtain that $n^{2m^2 -2sm-3m} \delta_{\kappa_n}^{(n)}$ equals the following determinant: $$ \left | \begin{array}{cccc} n^{-s}\Delta_{P, \kappa_n}^{[0]} (\xi^{(n)}_1) & n^{1-s} \Delta_{P, \kappa_n}^{[1]} (\xi^{(n)}_1) & \cdots & n^{2m-1-s} \Delta_{P, \kappa_n}^{[2m-1]} (\xi^{(n)}_1) \vspace{3mm} \\ \vdots & \vdots & & \vdots \\ n^{-s} \Delta_{P, \kappa_n}^{[0]} (\xi^{(n)}_m) & n^{1-s} \Delta_{P, \kappa_n}^{[1]} (\xi^{(n)}_m) & \cdots & n^{2m-1-s}\Delta_{P, \kappa_n}^{[2m-1]} (\xi^{(n)}_m) \vspace{3mm} \\ n^{-2-s}\dot{\Delta}_{P, \kappa_n}^{[0]} (\xi^{(n)}_1) & n^{-1-s}\dot{\Delta}_{P, \kappa_n}^{[1]} (\xi^{(n)}_1) & \cdots & n^{2m-3-s} \dot{\Delta}_{P, \kappa_n}^{[2m-1]} (\xi^{(n)}_1) \vspace{3mm} \\ \vdots & \vdots & & \vdots \\ n^{-2-s}\dot{\Delta}_{P, \kappa_n}^{[0]} (\xi^{(n)}_m) & n^{-1-s}\dot{\Delta}_{P, \kappa_n}^{[1]} (\xi^{(n)}_m) & \cdots & n^{2m-3-s}\dot{\Delta}_{P, \kappa_n}^{[2m-1]} (\xi^{(n)}_m) \end{array} \right |.$$ Applying Propositions \ref{jacobi-asymp} and \ref{der-asymp}, we obtain the desired formula. The last statement follows from the uniform convergence in Propositions \ref{jacobi-asymp} and \ref{der-asymp}.
\end{proof}
\begin{prop}\label{A}
If the sequences $x_i^{(n)}$ and $\xi^{(n)}$ satisfy \eqref{regime-case1} , then \begin{align*} \lim_{n \to \infty} n^{2m^2 - m -2ms-s} D^{(n)}_{\kappa_n}(x_i^{(n)}) = \frac{2^{2ms +s}}{(w_1 \cdots w_m)^{1+s}} z_i^{-\frac{s}{2}}\cdot A_I^{(s, w)} (\kappa, z_i), \end{align*} where $$A_I^{(s, w)}(\kappa, z_i) = W\Big(J_{s, w_1}, \cdots, J_{s, w_m}, \widetilde{J}_{s+1, w_1}, \cdots, \widetilde{J}_{s+1, w_m}, J_{s, z_i}\Big)(\kappa).$$
Moreover, the convergence is uniform as long as $\kappa$ is in a compact subset of $(0, \infty)$.
\end{prop}
\begin{proof}
The proof is similar to that of Proposition \ref{C}.
\end{proof}
\begin{defn} Define the column vector function $\boldsymbol{\theta}_j^{(n)}(t)$ by $$ \boldsymbol{\theta}_j^{(n)}(t) = \Big( P_j^{(s)} (\xi^{(n)}_1) , \cdots, P_j^{(s)} (\xi^{(n)}_m) , \dot{ P}_j^{(s)} (\xi^{(n)}_1) , \cdots,
\dot{P}_j^{(s)} (\xi^{(n)}_m), P_j^{(s)} (t) \Big)^T.$$
\end{defn}
\begin{prop}\label{B}
If the sequences $x_i^{(n)}$ and $\xi^{(n)}$ satisfy \eqref{regime-case1}, then \begin{align*} & \lim_{n \to \infty} n^{1+ 2m^2 - m -2ms-s} \left| \boldsymbol{\theta}_{\kappa_n}^{(n)}(x_i^{(n)}) \, \cdots \, \boldsymbol{\theta}_{\kappa_n +2m-1}^{(n)} (x_i^{(n)} ) \quad \boldsymbol{\theta}_{\kappa_n+2m}^{(n)} (x_i^{(n)}) - \boldsymbol{\theta}_{\kappa_n-1}^{(n)} (x_i^{(n)}) \right| \\ & = \frac{2^{2ms +s}}{(w_1 \cdots w_m)^{1+s}} z_i^{-\frac{s}{2}}\cdot B_I^{(s, w)}(\kappa, z_i), \end{align*} where $B_I^{(s, w)}(\kappa, z_i) = \left | \begin{array}{ccccc}\boldsymbol{\eta}_{s, z_i}(\kappa) & \boldsymbol{\eta}_{s, z_i}'(\kappa) & \cdots & \boldsymbol{\eta}_{s, z_i}^{(2m-1)}(\kappa) & \boldsymbol{\eta}_{s, z_i}^{(2m+1)}(\kappa) \end{array} \right|$ and $\boldsymbol{\eta}_{s, z_i}(\kappa)$ is the column vector $$\Big(J_s(\kappa \sqrt{w_1}), \cdots, J_s(\kappa \sqrt{w_m} ), \widetilde{J}_{s+1}(\kappa \sqrt{w_1}), \cdots, \widetilde{J}_{s+1}(\kappa \sqrt{w_m}), J_s(\kappa \sqrt{z_i})\Big)^T.$$
\end{prop}
\begin{proof}
The proof is similar to that of Proposition \ref{C}; we emphasize that it uses the elementary fact \begin{align*} \Delta_{P, \kappa_n-1}^{[2m+1]} = & P_{\kappa_n+2m}^{(s)} +(-1)^{2m+1} P_{\kappa_n-1}^{(s)} \\ + & \text{ linear combination of $ P_{\kappa_n}^{(s)}, P_{\kappa_n+1}^{(s)}, \cdots, P_{\kappa_n + 2m - 1}^{(s)}$}. \end{align*}
\end{proof}
\begin{rem}\label{A-B-relation}
Since $A_I^{(s, w)} (\kappa, z_i)$ is the Wronskian $\left| \begin{array}{cccc} \boldsymbol{\eta}_{s, z_i}(\kappa) & \boldsymbol{\eta}_{s, z_i}'(\kappa) & \cdots & \boldsymbol{\eta}_{s, z_i}^{(2m)}(\kappa) \end{array}\right|$, differentiating it column by column leaves only the term in which the last column is differentiated (every other term contains a repeated column). Hence $$\frac{\partial}{\partial \kappa} A_I^{(s, w)} (\kappa, z_i) = B_I^{(s, w)} (\kappa, z_i). $$
\end{rem}
\begin{thm}\label{thm-case1}
In the regime \eqref{regime-case1}, we have
\begin{align*} & \lim_{n \to \infty} \widetilde{K}_n^{[s, \xi^{(n)}]} (z_1, z_2) \\ = & \frac{A_I^{(s,w)}(1, z_1) B_I^{(s,w)}(1, z_2) - A_I^{(s,w)}(1, z_2) B_I^{(s,w)}(1, z_1) }{ 2 \Big| \prod_{i= 1}^m (z_1 - w_i) (z_2 - w_i )\Big|\cdot \big[C_I^{(s, w)}(1) \big]^2 \cdot (z_1-z_2)}. \end{align*}
We denote this kernel by $\mathscr{K}_\infty^{[s, \xi]} (z_1, z_2)$.
\end{thm}
\begin{proof}
It is easy to see that \begin{align*} D_{n-1}^{(n)} (x_i^{(n)}) = \left| \boldsymbol{\theta}_{n}^{(n)}(x_i^{(n)}) \cdots \boldsymbol{\theta}_{n+2m-1}^{(n)} (x_i^{(n)}) \quad \boldsymbol{\theta}_{n-1}^{(n)} (x_i^{(n)}) \right| , \end{align*} hence $D_n^{(n)} (x_i^{(n)}) - D_{n-1}^{(n)} (x_i^{(n)}) $ equals $$ \left| \boldsymbol{\theta}_{n}^{(n)}(x_i^{(n)}) \cdots \boldsymbol{\theta}_{n+2m-1}^{(n)} (x_i^{(n)}) \quad \boldsymbol{\theta}_{n +2m}^{(n)} (x_i^{(n)}) - \boldsymbol{\theta}_{n-1}^{(n)} (x_i^{(n)}) \right|.$$ Note that \begin{align*} & D_n^{(n)} (x_1^{(n)}) D_{n-1}^{(n)} (x_2^{(n)}) - D_n^{(n)} (x_2^{(n)}) D_{n-1}^{(n)} (x_1^{(n)}) \\ = &
D_n^{(n)}(x_2^{(n)} ) \Big[ D_n^{(n)} (x_1^{(n)}) - D_{n-1}^{(n)} (x_1^{(n)})\Big] \\ & - D_n^{(n)}(x_1^{(n)} ) \Big[ D_n^{(n)} (x_2^{(n)}) - D_{n-1}^{(n)} (x_2^{(n)})\Big]. \end{align*} Now applying Propositions \ref{A} and \ref{B}, we obtain that \begin{align*} & \lim_{n \to \infty} n^{1 + 4m^2 - 2m - 4ms -2s } \Big[ D_n^{(n)} (x_1^{(n)}) D_{n-1}^{(n)} (x_2^{(n)}) - D_n^{(n)} (x_2^{(n)}) D_{n-1}^{(n)} (x_1^{(n)})\Big] \\ &= \frac{2^{4ms +2s} (z_1z_2)^{-\frac{s}{2}} }{(w_1 \cdots w_m)^{2+2s}} \Big( A_I^{(s, w)} (1, z_2) B_I^{(s,w)}(1, z_1) - A_I^{(s, w)} (1, z_1) B_I^{(s,w)}(1, z_2) \Big). \end{align*} Combining with Proposition \ref{C}, we deduce that \begin{align*} & \lim_{n \to \infty} S_n(z_1, z_2) \\ = & (z_1z_2)^{- \frac{s}{2}} \cdot \frac{A_I^{(s,w)}(1, z_1) B_I^{(s,w)}(1, z_2) - A_I^{(s,w)}(1, z_2) B_I^{(s,w)}(1, z_1) }{ 2 \big[C_I^{(s, w)} (1) \big]^2 (z_1-z_2)}. \end{align*} Substituting the above formula in \eqref{C-D-good}, we get the desired result.
\end{proof}
\begin{thm}\label{integral-form-1}
The kernel $ \mathscr{K}_\infty^{[s, \xi]}(z_1, z_2)$ has the following integral form: \begin{align*} & \mathscr{K}_\infty^{[s, \xi]}(z_1, z_2) \\ = & \frac{1}{2 \Big| \prod_{i= 1}^m (z_1 - w_i) (z_2 - w_i )\Big|} \int_0^1 \frac{ A_I^{(s, w)}(t, z_1) A_I^{(s, w)}(t, z_2) }{ \big[C_I^{(s, w)} (t) \big]^2 } t \, dt .\end{align*}
\end{thm}
\begin{proof}
Let us fix $ z_1, z_2 > 0$. For any $\varepsilon > 0$, we can divide the sum in \eqref{S-good-1} into two parts:\begin{align*} S_n(z_1, z_2) & = \underbrace{(2n^2)^{2m -s -1} \sum_{j = 0}^{\lfloor n \varepsilon \rfloor -1 } \cdots}_{ = : \, I_n(\varepsilon)} \, + \, \underbrace{(2n^2)^{2m -s -1} \sum_{j = \lfloor n \varepsilon \rfloor }^{n-1} \cdots}_{ = : \, II_n(\varepsilon)}.\end{align*}
The second term $II_n(\varepsilon)$ can be written as an integral: \begin{align*} II_n(\varepsilon) & = \int_{\Big[\frac{\lfloor n \varepsilon \rfloor }{n}, 1\Big)} \underbrace{(2n^2)^{2m -s -1} \frac{D_{\lfloor n t \rfloor}^{(n)} (x_1^{(n)} ) D_{\lfloor n t \rfloor}^{(n)} (x_2^{(n)} )}{ \frac{h_{\lfloor n t \rfloor}^{(s)} k_{\lfloor n t \rfloor+2m}^{(s)} }{ k_{\lfloor n t \rfloor}^{(s)} } \delta_{\lfloor n t \rfloor}^{(n)} \delta_{\lfloor n t \rfloor+1}^{(n)} } \cdot n}_{ = : \, T_n (t) } \quad dt .\end{align*} By Propositions \ref{C} and \ref{A}, we have the uniform convergence for $t \in [\varepsilon, 1]$: \begin{align*} & \lim_{n \to \infty} T_n(t) = \frac{(z_1z_2)^{- \frac{s}{2}}}{2 \big(C_I^{(s, w)} (t) \big)^2 } A_I^{(s, w)}(t, z_1) A_I^{(s, w)}(t, z_2) t ,\end{align*} hence as $n \to \infty$, $II_n(\varepsilon)$ tends to $$II_\infty(\varepsilon) = \int_\varepsilon^1 \frac{(z_1z_2)^{- \frac{s}{2}}}{2 \big(C_I^{(s, w)} (t) \big)^2 } A_I^{(s, w)}(t, z_1) A_I^{(s, w)}(t, z_2) t dt . $$
For the first term $I_n(\varepsilon)$, we use the Christoffel-Darboux formula to write it as \begin{align*} \frac{(2n^2)^{2m-s}}{ \frac{h_{\lfloor n \varepsilon \rfloor-1}^{(s)} k_{\lfloor n \varepsilon \rfloor+2m}^{(s)} }{ k_{\lfloor n \varepsilon \rfloor-1}^{(s)} } } \frac{ D_{\lfloor n \varepsilon \rfloor}^{(n)}(x_1^{(n)} ) \cdot D_{\lfloor n \varepsilon \rfloor-1}^{(n)} ( x_2^{(n)} ) - D_{\lfloor n \varepsilon \rfloor}^{(n)}(x_2^{(n)} ) \cdot D_{\lfloor n \varepsilon \rfloor-1}^{(n)} ( x_1^{(n)} ) }{ \big[ \delta_{\lfloor n \varepsilon \rfloor}^{(n)} \big]^2 (z_2 - z_1) }. \end{align*} By arguments similar to those in the proof of Theorem \ref{thm-case1}, $$\lim_{n\to \infty} I_n(\varepsilon) = I_\infty(\varepsilon),$$ where $I_\infty(\varepsilon)$ is given by the formula \begin{align*} & I_\infty(\varepsilon) \\ = & \frac{(z_1z_2)^{-\frac{s}{2}}}{2} \cdot \frac{A_I^{(s,w)}(\varepsilon, z_1) B_I^{(s,w)}(\varepsilon, z_2) - A_I^{(s,w)}(\varepsilon, z_2) B_I^{(s,w)}(\varepsilon, z_1) }{ \big[C_I^{(s, w)}(\varepsilon) \big]^2 (z_1-z_2)} \cdot \varepsilon.\end{align*} Hence for any $\varepsilon > 0$, we have $$ \lim_{n \to \infty} S_n(z_1, z_2) = I_\infty(\varepsilon) + II_\infty(\varepsilon) .$$ The theorem is proved once we establish $\lim_{ \varepsilon \to 0} I_\infty(\varepsilon) = 0. $ This is the content of the following lemma.
\end{proof}
\begin{lem}\label{lem-case1}
For any $z_1, z_2 >0$, we have
$$\lim_{\varepsilon \to 0 + } \frac{A_I^{(s,w)}(\varepsilon, z_1) B_I^{(s,w)}(\varepsilon, z_2) - A_I^{(s,w)}(\varepsilon, z_2) B_I^{(s,w)}(\varepsilon, z_1) }{ \big[C_I^{(s, w)}(\varepsilon) \big]^2} \cdot \varepsilon = 0.$$
\end{lem}
\begin{proof}
To simplify notation, let us denote $F_i = J_{s, w_i}$ and $G_i = \widetilde{J}_{s+1, w_i}$. We have $$C_I^{(s,w)}(\varepsilon) = W(F_1, \cdots, F_m, G_1, \cdots, G_m) (\varepsilon).$$ By \eqref{differential-relation-J}, we have $$G_i(\varepsilon) = - \varepsilon F_i'(\varepsilon) + s F_i(\varepsilon). $$ If we denote $H_i(\varepsilon) = - \varepsilon F_i'(\varepsilon)$, then $$C_I^{(s, w)}(\varepsilon) = W(F_1,\cdots, F_m, H_1, \cdots, H_m)(\varepsilon).$$ We can write $F_i(\varepsilon) = \sum_{\nu =0}^\infty \frac{a_\nu^{(i)}}{\nu !} \varepsilon^{2\nu + s}$ and $H_i(\varepsilon)= \sum_{\nu=0}^\infty \frac{b_\nu^{(i)}}{\nu ! } \varepsilon^{2 \nu + s}$,
with $$ a_\nu^{(i) } = \frac{(-1)^{\nu} (\sqrt{w_i})^{2\nu + s} }{2^{2\nu + s} \Gamma(\nu + s+1) } , \quad b_\nu^{(i)} = - (2\nu +s) a_\nu^{(i)}. $$ Define entire functions: $$f_i(x) =\sum_{\nu = 0}^\infty \frac{a_\nu^{(i)}}{\nu !} x^{\nu}, \quad h_i(x) = \sum_{\nu = 0}^\infty \frac{b_\nu^{(i)} }{\nu !}x^\nu.$$ Then $F_i(\varepsilon) = \varepsilon^s f_i(\varepsilon^2 )$ and $H_i(\varepsilon) = \varepsilon^s h_i(\varepsilon^2)$. Using the identity $$W(g f_1, \cdots, g f_n)(x) = g(x)^n \cdot W(f_1, \cdots, f_n)(x),$$ we obtain that $$C_I^{(s, w)} (\varepsilon) =\varepsilon^{2ms} \cdot W\Big( f_1(x^2), \cdots, f_m(x^2), h_1(x^2), \cdots, h_m(x^2) \Big)(\varepsilon). $$ An application of the following identity $$ \frac{d^{n}}{dx^n} \Big[ f(x^2) \Big] = n ! \sum_{k = 0}^{\lfloor n/2\rfloor} \frac{(2x)^{n-2k}}{ k ! (n - 2k)!} f^{(n-k)} (x^2)$$ yields $$C_I^{(s, w)}(\varepsilon) = 2^{m(2m-1)} \varepsilon^{2ms + m(2m-1)} W(f_1, \cdots, f_m, h_1, \cdots, h_m)(\varepsilon^2). $$
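As a quick check of the last differentiation identity, the cases $n = 1$ and $n = 2$ read
$$ \frac{d}{dx} \Big[ f(x^2) \Big] = 2x \, f'(x^2), \qquad \frac{d^2}{dx^2} \Big[ f(x^2) \Big] = 4x^2 f''(x^2) + 2 f'(x^2), $$
in agreement with the chain rule.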
We state the following simple auxiliary lemma.
\begin{lem}\label{sublem}
$W(f_1, \cdots, f_m, h_1, \cdots, h_m)(z)$ is an entire function and does not vanish at $z= 0$.
\end{lem}
Before proving Lemma \ref{sublem}, we derive from it Lemma \ref{lem-case1}. Indeed, from Lemma \ref{sublem} we have $$C_I^{(s, w)}(\varepsilon) \asymp \varepsilon^{2ms + m(2m-1)} \text{ as } \varepsilon \to 0.$$ Similarly, $$ A_I^{(s, w)}(\varepsilon, z_i ) \asymp \varepsilon^{(2m + 1)s + m(2m + 1)} \text{ as } \varepsilon \to 0. $$ By Remark \ref{A-B-relation}, we also have $$B_I^{(s, w)}(\varepsilon, z_i ) \asymp \varepsilon^{(2m + 1)s + m(2m + 1) - 1 } \text{ as } \varepsilon \to 0. $$ Hence as $ \varepsilon \to 0, $ we have \begin{align*} \left |\frac{A_I^{(s,w)}(\varepsilon, z_1) B_I^{(s,w)}(\varepsilon, z_2) - A_I^{(s,w)}(\varepsilon, z_2) B_I^{(s,w)}(\varepsilon, z_1) }{ \Big[C_I^{(s, w)}(\varepsilon) \Big]^2} \cdot \varepsilon \right | \lesssim \varepsilon^{4m+2s }. \end{align*} Since we always have $4m + 2s > 0$, Lemma \ref{lem-case1} is proved.
\bigskip
Now we turn to the proof of Lemma \ref{sublem}. By definition, we know that $f_i, h_i$ are entire functions, hence $W(f_1, \cdots, f_m, h_1, \cdots, h_m)$ is also entire. It is easy to see that $W(f_1, \cdots, f_m, h_1, \cdots, h_m) (0)$ equals $$\left|\begin{array}{ccccc} a^{(1)}_0 & a^{(1)}_1 & a^{(1)}_2 & \cdots & a^{(1)}_{2m-1} \\ \vdots & \vdots & \vdots & &\vdots \\ a^{(m)}_0 & a^{(m)}_1 & a^{(m)}_2 & \cdots & a^{(m)}_{2m-1} \vspace{3mm} \\ b^{(1)}_0 & b^{(1)}_1 & b^{(1)}_2 & \cdots & b^{(1)}_{2m-1} \\ \vdots & \vdots & \vdots & &\vdots \\ b^{(m)}_0 & b^{(m)}_1 & b^{(m)}_2 & \cdots & b^{(m)}_{2m-1} \end{array} \right|, $$ which is in turn a non-zero multiple of $ \det \mathscr{W}, $ where $$ \mathscr{W} = \left( \begin{array}{ccccc} 1 & w_1 & w_1^2 & \cdots & w_1^{2m-1} \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & w_m & w_m^2 & \cdots & w_m^{2m-1} \vspace{3mm} \\ 0 & 1 & 2 w_1 & \cdots & (2m-1) w_1^{2m-2} \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & 1 & 2 w_m & \cdots & (2m-1) w_m^{2m-2} \end{array}\right). $$ We claim that $\det \mathscr{W} \ne 0$. Indeed, let $\theta = (\theta_0, \theta_1, \cdots, \theta_{2m-1})^{T} $ be such that $\mathscr{W} \theta = 0$. In other words, we have \begin{align*} \sum_{k=0}^{2m-1} \theta_k w_i^k = 0, \quad \sum_{k =0}^{2m-1} k \theta_kw_i^{k - 1} = 0, \text{ for } 1 \le i \le m .\end{align*} Let $\Theta$ be the polynomial given by $\Theta(x) = \sum_{k =0}^{2m-1} \theta_k x^k$; then the above equations imply that $w_1, \cdots, w_m$ are distinct roots of $\Theta$, each of multiplicity at least $2$. Since $\deg \Theta \le 2m-1$, we must have $\Theta \equiv 0$ and hence $\theta=0$. This shows that $\mathscr{W}$ is invertible, hence has a non-zero determinant.
\end{proof}
\subsection{Explicit Kernels for Scaling Limit: Case II} Consider a sequence of $m$-tuples of distinct positive real numbers $r^{(n)} = (r_1^{(n)}, \cdots, r_m^{(n)}) $ and the modified weights $w_s^{(r^{(n)})}$ given as follows: $$w_s^{(r^{(n)})} (t) =\frac{w_s(t)}{\prod_{ i = 1}^m ( 1 + r_i^{(n)} - t)} = \frac{(1 - t)^s}{ \prod_{i = 1}^m ( 1 + r^{(n)}_i - t) }. $$ The $n$-th Christoffel-Darboux kernel associated with $w_s^{(r^{(n)})}$ is denoted by $ \Pi_n^{(n)} (x_1, x_2)$.
We will investigate the scaling limit of $\Pi_n^{(n)} (x_1^{(n)}, x_2^{(n)})$ in the regime: \begin{align}\label{case-II} \begin{split} x_i^{(n)} = 1 - \frac{z_i }{2n^2}, \quad & z_i > 0, i = 1, 2. \\ r_i^{(n)} = \frac{v_i}{2n^2}, 1 \le i \le m, & \text{ and $v_i> 0$ are all distinct. } \end{split} \end{align}
\subsubsection{Explicit formulae for orthogonal polynomials and Christoffel-Darboux kernels} The Christoffel-Uvarov formula implies that the following polynomials $q_j^{(n)}$, $j \ge m$, are orthogonal with respect to $ w_s^{( r^{(n)} )}$: $$q_j^{(n)} (t) = \left| \begin{array}{ccc} Q_{j-m}^{(s)} ( 1 + r_1^{(n)} ) & \cdots & Q_{j}^{(s)}(1 + r_1^{(n)}) \\ \vdots & & \vdots \\Q_{j-m}^{(s)} ( 1 + r_m^{(n)}) & \cdots & Q_{j}^{(s)}(1 + r_m^{(n)}) \\ P_{j-m}^{(s)}(t) & \cdots & P_j^{(s)}(t) \end{array} \right|. $$ For $0 \le j < m$, we also denote by $q_j^{(n)}$ the $j$-th monic orthogonal polynomial; its explicit formula will not be needed.
Denote $$ d_j^{(n)} = \left| \begin{array}{ccc} Q_{j-m}^{(s)} ( 1 + r_1^{(n)}) & \cdots & Q_{j-1}^{(s)}(1 + r_1^{(n)}) \\ \vdots & & \vdots \\Q_{j-m}^{(s)} ( 1 + r_m^{(n)}) & \cdots & Q_{j-1}^{(s)}(1 + r_m^{(n)}) \end{array} \right|.$$ Denote by $k_j^{(s, r^{(n)})}$ the leading coefficient of $q_j^{(n)}$. When $j \ge m$, it is given by $k_j^{(s, r^{(n)})} = d_j^{(n)} k_j^{(s)} $.
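For example, when $m = 1$ the above formulae reduce to
$$ q_j^{(n)}(t) = Q_{j-1}^{(s)}(1 + r_1^{(n)}) \, P_j^{(s)}(t) - Q_{j}^{(s)}(1 + r_1^{(n)}) \, P_{j-1}^{(s)}(t), \qquad d_j^{(n)} = Q_{j-1}^{(s)}(1 + r_1^{(n)}), $$
so that $k_j^{(s, r^{(n)})} = Q_{j-1}^{(s)}(1 + r_1^{(n)}) \, k_j^{(s)}$.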
\begin{defn}
Define $h_j^{(s, r^{(n)})} = \int_{-1}^1 \Big\{ q_j^{(n)}(t) \Big\}^2 w_s^{(r^{(n)})} (t) dt. $
\end{defn}
\begin{prop}
For any $j \ge m$, we have $$h_j^{(s, r^{(n)} ) } = \frac{d_{j}^{(n)} d_{j+1}^{(n)} k_j^{(s)}h_{j-m}^{(s)} } {k_{j-m}^{(s)}}. $$
\end{prop}
\begin{proof}
Let $j \ge m$. By orthogonality, we have \begin{align*} & h_{j}^{(s,r^{(n)})} \\ = & \int q_j^{(n)}(t) d_{j}^{(n)} P_j^{(s)}(t) w_s^{(r^{(n)})}(t) dt = d_{j}^{(n)} k_j^{(s)} \int q_j^{(n)}(t) t^j w_s^{(r^{(n)})}(t) dt \\ = & d_{j}^{(n)} k_j^{(s)} \int q_j^{(n)}(t) (-1)^m \prod_{i = 1}^m (1+r_i^{(n)} - t) t^{j-m} w_s^{(r^{(n)})}(t) dt \\ = & (-1)^m d_{j}^{(n)} k_j^{(s)} \int q_j^{(n)}(t) t^{j-m} w_s(t) dt \\ = & (-1)^m d_{j}^{(n)} k_j^{(s)} \int (-1)^{m + 2} d_{j+1}^{(n)} P_{j-m}^{(s)} (t) t^{j-m} w_s(t) dt \\ = & \frac{d_{j}^{(n)} d_{j+1}^{(n)} k_j^{(s)} } {k_{j-m}^{(s)}} \int \Big\{ P_{j-m}^{(s)}(t) \Big\}^2 w_s(t) dt \\ = & \frac{d_{j}^{(n)} d_{j+1}^{(n)} k_j^{(s)}h_{j-m}^{(s)} } {k_{j-m}^{(s)}} .\end{align*}
\end{proof}
After the change of variables $x_i^{(n)} = 1 - \frac{z_i}{2n^2} $, and with $r^{(n)}$ as in the regime \eqref{case-II}, the Christoffel-Darboux kernels are given by the formula \begin{align}\label{CD-2} \begin{split} & \widetilde{\Pi}_n^{(n)} ( z_1, z_2) = \frac{1}{2n^2} \Pi_n^{(n)} \Big( 1 - \frac{z_1}{2n^2}, 1 - \frac{z_2}{2n^2} \Big) = \frac{(z_1 z_2 )^{\frac{s}{2}} }{ \prod_{ i = 1}^m ( v_i+ z_1 )^{\frac{1}{2}} (v_i + z_2)^{\frac{1}{2}}} \Sigma_n (z_1, z_2 ), \end{split} \end{align} where \begin{align}\label{sigma-1} \Sigma_n(z_1, z_2) = (2n^2)^{m-s-1} \sum_{j = 0}^{n-1} \frac{q_j^{(n)} ( 1 - \frac{z_1}{2n^2} ) q_j^{(n)} (1 - \frac{z_2 }{2n^2} ) }{ \frac{d_{j}^{(n)} d_{j+1}^{(n)} k_j^{(s)}h_{j-m}^{(s)} } {k_{j-m}^{(s)}} }, \end{align} or equivalently \begin{align} \begin{split} & \Sigma_n(z_1, z_2) \\ = & \label{sigma-2} \frac{(2n^2)^{m-s}}{ \big[d_{n}^{(n)}\big]^2 \frac{h_{n-1 -m }^{(s)} k_{n}^{(s)} }{ k_{n-1-m}^{(s)}} } \cdot \frac{
q_{n}^{(n)}(x_1^{(n)} ) q_{n-1}^{(n)}(x_2^{(n)} ) - q_{n}^{(n)}(x_2^{(n)}) q_{n-1}^{(n)}(x_1^{(n)} )
}{
z_2 - z_1} . \end{split}\end{align}
\subsubsection{Scaling limits} We now investigate the scaling limits of the kernels \eqref{CD-2}.
\begin{prop}\label{AC-matrix}
In the regime \eqref{case-II} we have $$\lim_{n \to \infty} n^{\frac{m(m-1)}{2} - ms } d_{\kappa_n}^{(n)} = 2^{ms} (v_1 \cdots v_m)^{- \frac{s}{2} } C_{II}^{(s, v)}(\kappa), $$
$$\lim_{n \to \infty} n^{\frac{m(m+1)}{2} - (m+1)s } q_{\kappa_n}^{(n)} (x_i^{(n)}) = 2^{(m+1)s} (v_1 \cdots v_m)^{- \frac{s}{2} } z_i^{-\frac{s}{2}} A_{II}^{(s, v)}(\kappa, z_i), $$ where $$C_{II}^{(s, v)} (\kappa) = W\Big(K_{s, v_1}, \cdots, K_{s, v_m} \Big)(\kappa),$$ $$A_{II}^{(s,v)}(\kappa, z) =W\Big(K_{s, v_1}, \cdots, K_{s, v_m}, J_{s, z}\Big ) (\kappa), $$ and $K_{s, v_i} (\kappa) = K_s(\kappa \sqrt{v_i})$, $J_{s, z}(\kappa) = J_s(\kappa \sqrt{z})$.
\end{prop}
\begin{proof}
For $\ell \ge 1$, we have $$\Delta_{Q, n}^{(s, \ell)} = Q_{n + \ell}^{(s)} + (-1)^{\ell} Q_n^{(s)} + \text{linear combination of $Q_{n+1}^{(s)}, \cdots, Q_{n+\ell-1 }^{(s)}$}.$$ The same is true for $\Delta_{P, n}^{(s, \ell)}$, with the same coefficients. Hence for $\kappa_n \ge m$, we have \begin{align*} d_{\kappa_n}^{(n)} = \left| \begin{array}{ccc} \Delta_{Q, \kappa_n-m}^{(s, 0)} ( 1 + r_1^{(n)}) & \cdots & \Delta_{Q, \kappa_n-m}^{(s, m-1)} ( 1 + r_1^{(n)}) \\ \vdots & & \vdots \\ \Delta_{Q, \kappa_n-m}^{(s, 0)} ( 1 + r_m^{(n)}) & \cdots & \Delta_{Q, \kappa_n-m}^{(s, m-1)} ( 1 + r_m^{(n)}) \end{array} \right|; \end{align*}
\begin{align*} q_{\kappa_n}^{(n)} (x_i^{(n)} ) = \left| \begin{array}{ccc} \Delta_{Q, \kappa_n-m}^{(s, 0)} ( 1 + r_1^{(n)}) & \cdots & \Delta_{Q, \kappa_n-m}^{(s, m)} ( 1 + r_1^{(n)}) \\ \vdots & & \vdots \\ \Delta_{Q, \kappa_n-m}^{(s, 0)} ( 1 + r_m^{(n)}) & \cdots & \Delta_{Q, \kappa_n-m}^{(s, m)} ( 1 + r_m^{(n)}) \vspace{3mm} \\ \Delta_{P, \kappa_n-m}^{(s, 0)} ( x_i^{(n)}) & \cdots & \Delta_{P, \kappa_n-m}^{(s, m)} ( x_i^{(n)}) \end{array} \right| . \end{align*}
The proposition follows by the same arguments as in the proof of Proposition \ref{C}, combined with Propositions \ref{jacobi-asymp}, \ref{der-asymp}, \ref{asymp-Q} and \ref{asymp-diff-Q}.
\end{proof}
\begin{prop}\label{B-matrix}
In the regime \eqref{case-II}, we have \begin{align*} & \lim_{n \to \infty} n^{\frac{m(m+1)}{2} - (m + 1)s + 1 } \Big[ q_{\kappa_n}^{(n )} (x_i^{(n)}) - q_{\kappa_n-1}^{(n)} (x_i^{(n)}) \Big] \\ & = 2^{(m +1)s} (v_1 \cdots v_m)^{-\frac{s}{2}} z_i^{-\frac{s}{2}} B_{II}^{(s, v)}(\kappa, z_i), \end{align*} where \begin{align*} & B_{II}^{(s, v)}(\kappa, z) = \frac{\partial}{\partial \kappa} A_{II}^{(s, v)}(\kappa, z) \\ = & \left| \boldsymbol{\phi}_{s, z}(\kappa) , \boldsymbol{\phi}_{s, z}'(\kappa), \cdots, \boldsymbol{\phi}_{s, z}^{(m-1)}(\kappa), \boldsymbol{\phi}_{s, z}^{(m+1)}(\kappa) \right|, \end{align*} and $\boldsymbol{\phi}_{s, z}(\kappa)$ is the column vector $\Big(K_s( \kappa \sqrt{v_1}), \cdots, K_s( \kappa \sqrt{v_m}), J_s( \kappa \sqrt{z})\Big)^T$.
\end{prop}
\begin{proof}
To simplify notation, we prove the proposition in the case $\kappa_n = n$; the proof in the general case is similar. Define the column vector $$ \beta_j^{(n)}(t) = \Big( Q_j^{(s)}(1 + r_1^{(n)}), \cdots, Q_j^{(s)}(1 + r_m^{(n)}), P_j^{(s)}(t) \Big)^T.$$ Then for $ i = 1, 2$, \begin{align*} q_n^{(n)} (x_i^{(n)}) = & \left| \begin{array}{cccc} \beta_{n-m}^{(n)}(x_i^{(n)}) &\cdots & \beta_{n-1}^{(n)}(x_i^{(n)}) & \beta_{n}^{(n)}(x_i^{(n)}) \end{array} \right|; \\ q_{n-1}^{(n)} (x_i^{(n)}) =& \left| \begin{array}{cccc} \beta_{n-1-m}^{(n)}(x_i^{(n)}) & \cdots & \beta_{n-2}^{(n)}(x_i^{(n)}) & \beta_{n-1}^{(n)}(x_i^{(n)}) \end{array} \right| \\ = & (-1)^m \left| \begin{array}{cccc} \beta_{n-m}^{(n)}(x_i^{(n)}) & \cdots & \beta_{n-1}^{(n)}(x_i^{(n)}) & \beta_{n-1-m}^{(n)}(x_i^{(n)}) \end{array} \right|. \end{align*} Hence \begin{align*} & q_n^{(n)} (x_i^{(n)}) - q_{n-1}^{(n)} (x_i^{(n)}) \\ =& \left| \begin{array}{cccc} \beta_{n-m}^{(n)}(x_i^{(n)}) &\cdots & \beta_{n-1}^{(n)}(x_i^{(n)}) & \beta_{n}^{(n)}(x_i^{(n)}) + (-1)^{m+1} \beta_{n-1-m}^{(n)}(x_i^{(n)}) \end{array} \right| \\ = & \left| \begin{array}{cccc}\Delta_{Q,n-m}^{(s, 0)} ( 1 + r_1^{(n)}) & \cdots & \Delta_{Q,n-m}^{(s, m-1)} ( 1 + r_1^{(n)}) & \Delta_{Q,n-1-m}^{(s, m+1)} ( 1 + r_1^{(n)}) \\\vdots & & \vdots & \vdots \\\Delta_{Q,n-m}^{(s, 0)} ( 1 + r_m^{(n)}) & \cdots & \Delta_{Q,n-m}^{(s, m-1)} ( 1 + r_m^{(n)}) & \Delta_{Q,n-1-m}^{(s, m+1)} ( 1 + r_m^{(n)}) \vspace{3mm}\\ \Delta_{P,n-m}^{(s, 0)} ( x_i^{(n)}) & \cdots & \Delta_{P,n-m}^{(s, m-1)} ( x_i^{(n)}) & \Delta_{P,n-1-m}^{(s, m+1)} ( x_i^{(n)})\end{array} \right| .\end{align*}
We finish the proof by using Propositions \ref{jacobi-asymp}, \ref{der-asymp}, \ref{asymp-Q} and \ref{asymp-diff-Q}.
\end{proof}
Combining Propositions \ref{AC-matrix} and \ref{B-matrix}, we obtain
\begin{cor}\label{corollary}
In the regime \eqref{case-II}, we have \begin{align*} & \lim_{n \to \infty} n^{m(m+1) - 2(m+1)s +1} \Big[ q_{\kappa_n}^{(n)}(x_1^{(n)}) q_{\kappa_n-1}^{(n)}(x_2^{(n)}) - q_{\kappa_n}^{(n)}(x_2^{(n)}) q_{\kappa_n-1}^{(n)}(x_1^{(n)}) \Big] \\ & = 2^{2(m+1)s} (v_1 \cdots v_m)^{-s} z_1^{-\frac{s}{2}}z_2^{-\frac{s}{2}} \left| \begin{array}{cc} A_{II}^{(s, v)}(\kappa, z_1) & - B_{II}^{(s,v)}(\kappa, z_1)\vspace{2mm}\\ A_{II}^{(s, v)}(\kappa, z_2) & - B_{II}^{(s,v)}(\kappa, z_2) \end{array} \right|.\end{align*}
\end{cor}
\begin{proof} We first write $q_{\kappa_n}^{(n)}(x_1^{(n)}) q_{\kappa_n-1}^{(n)}(x_2^{(n)}) - q_{\kappa_n}^{(n)}(x_2^{(n)}) q_{\kappa_n-1}^{(n)}(x_1^{(n)})$ as \begin{align*} \left| \begin{array}{cc} q_{\kappa_n}^{(n)}(x_1^{(n)})& q_{\kappa_n-1}^{(n)}(x_1^{(n)}) \vspace{3mm} \\ q_{\kappa_n}^{(n)}(x_2^{(n)})& q_{\kappa_n-1}^{(n)}(x_2^{(n)}) \end{array} \right| = \left| \begin{array}{cc} q_{\kappa_n}^{(n)}(x_1^{(n)})& q_{\kappa_n-1}^{(n)}(x_1^{(n)}) - q_{\kappa_n}^{(n)}(x_1^{(n)}) \vspace{3mm} \\ q_{\kappa_n}^{(n)}(x_2^{(n)})& q_{\kappa_n-1}^{(n)}(x_2^{(n)}) - q_{\kappa_n}^{(n)}(x_2^{(n)}) \end{array} \right|. \end{align*} The corollary now follows from Propositions \ref{AC-matrix} and \ref{B-matrix}.
\end{proof}
\begin{thm}\label{thm-case2-1}
In the regime \eqref{case-II}, we obtain the scaling limit
\begin{align*} & \Pi_\infty^{(s, v)} (z_1, z_2) : = \lim_{n \to \infty} \widetilde{\Pi}_n^{(n)} (z_1, z_2) \\ =& \frac{ A_{II}^{(s,v)}(1, z_1) B_{II}^{(s,v)}(1, z_2) - A_{II}^{(s,v)}(1, z_2) B_{II}^{(s,v)}(1, z_1)}{ 2 \prod_{i=1}^m \sqrt{(v_i + z_1) ( v_i + z_2)} \cdot \big[ C_{II}^{(s,v)}(1)\big]^2 \cdot (z_1- z_2)}. \end{align*}
\end{thm}
\begin{proof}
By \eqref{sigma-2}, Proposition \ref{AC-matrix} and Corollary \ref{corollary}, we have \begin{align*} & \lim_{n \to \infty} \Sigma_n( z_1, z_2) \\ = & \frac{(z_1z_2)^{-\frac{s}{2}} \Big\{A_{II}^{(s,v)}(1, z_1) B_{II}^{(s,v)}(1, z_2) - A_{II}^{(s,v)}(1, z_2) B_{II}^{(s,v)}(1, z_1)\Big\} }{2 \big[C_{II}^{(s,v)} (1)\big]^2 ( z_1 - z_2) }.\end{align*} Combining this with \eqref{CD-2}, we get the desired result.
\end{proof}
\begin{prop}
Let $s > m - 1, s \notin \N$. The kernel $\Pi_\infty^{(s, v)}(z_1, z_2) $ has the following integral form: \begin{align*} & \Pi_\infty^{(s, v)}(z_1, z_2) \\ = & \frac{1}{2 \prod_{i = 1}^m \sqrt{(v_i + z_1) (v_i + z_2)}} \int_{0}^1 \frac{ A_{II}^{(s,v)}(\kappa, z_1) \cdot A_{II}^{(s,v)}(\kappa, z_2) }{\big[ C_{II}^{(s,v)}(\kappa)\big]^2} \kappa d \kappa.\end{align*}
\end{prop}
\begin{proof}
The proof is similar to that of Theorem \ref{integral-form-1}; the only difference is that, instead of Lemma \ref{lem-case1}, we use Lemma \ref{lem-case2} below.
\end{proof}
\begin{lem}\label{lem-case2}
Let $s > m - 1, s \notin \N$. For any $z_1, z_2 > 0$, we have $$\lim_{ \varepsilon \to 0^{+}} \frac{ A_{II}^{(s,v)}(\varepsilon, z_1) B_{II}^{(s,v)}(\varepsilon, z_2) - A_{II}^{(s,v)}(\varepsilon, z_2) B_{II}^{(s,v)}(\varepsilon, z_1)}{ \big[ C_{II}^{(s,v)}(\varepsilon)\big]^2 } \cdot \varepsilon = 0.$$
\end{lem}
\begin{proof}
Recall that $C_{II}^{(s, v)} (\varepsilon) = W\Big(K_{s, v_1}, \cdots, K_{s, v_m} \Big)(\varepsilon). $ By the definition of the Macdonald function $K_s(z)$, namely $$K_s(z) = \frac{\pi}{2} \frac{I_{-s} (z) - I_s(z)}{\sin s \pi}, \quad I_s(z) = \sum_{\nu = 0}^\infty \frac{(\frac{1}{2} z)^{2\nu + s }}{\nu ! \Gamma(\nu + s + 1)}, $$ we can write $$K_{s, v_i} (\varepsilon) = K_s(\varepsilon \sqrt{v_i} ) = \frac{\pi}{2 \sin( s \pi)}\varepsilon^{- s} \Big( \underbrace{ \sum_{ \nu = 0}^\infty \frac{\alpha^{(i) }_\nu}{\nu! } \varepsilon^{2\nu} - \varepsilon^{2 s}\sum_{\nu = 0}^\infty \frac{\beta_{\nu}^{(i)}}{\nu ! } \varepsilon^{2 \nu }}_{: = \mathscr{A}_i (\varepsilon^2) } \Big), $$ where $$ \alpha_\nu^{(i)} = \frac{ (\frac{1}{2} \sqrt{v_i})^{2\nu - s}}{\Gamma(\nu - s +1)}, \quad \beta_\nu^{(i)} = \frac{ (\frac{1}{2} \sqrt{v_i})^{2\nu + s}}{\Gamma(\nu + s +1)}. $$ Thus \begin{align*} C_{II}^{(s,v)}(\varepsilon) & = \text{Const}_s \times \varepsilon^{- m s} W\Big(\mathscr{A}_1(x^2), \cdots, \mathscr{A}_m(x^2)\Big) (\varepsilon) \\ & = \text{Const}_s \times \varepsilon^{-ms + \frac{m(m-1)}{2} } W\Big(\mathscr{A}_1, \cdots, \mathscr{A}_m \Big) (\varepsilon^2). \end{align*} By the assumption $s > m -1$, the functions $\mathscr{A}_i$ are all differentiable up to order at least $m-1$ in a neighbourhood of $0$, hence $ W(\mathscr{A}_1, \cdots, \mathscr{A}_m ) (\varepsilon)$ is continuous, and $$ W\Big(\mathscr{A}_1, \cdots, \mathscr{A}_m \Big) (0) = \text{non-zero term} \times \left| \begin{array}{cccc} 1 & v_1 & \cdots & v_1^{m-1} \\ \vdots & \vdots & & \vdots \\ 1 & v_m & \cdots & v_m^{m-1} \end{array} \right| \ne 0 . $$ Hence $$ C_{II}^{(s, v)} (\varepsilon) \asymp \varepsilon^{-ms + \frac{m (m-1)}{2} }, \text{ as } \varepsilon \to 0. $$ In a similar way, we can show that $$A_{II}^{(s,v)} (\varepsilon, z_i) \asymp \varepsilon^{- (1+m) s + \frac{m(m+1)}{2} + 2[s - (m-1)]} , \text{ as } \varepsilon \to 0. 
$$ $$B_{II}^{(s,v)} (\varepsilon, z_i) \asymp \varepsilon^{- (1+m) s + \frac{m(m+1)}{2} + 2(s - m)} , \text{ as } \varepsilon \to 0.$$ Hence $$\left| \frac{ A_{II}^{(s,v)}(\varepsilon, z_1) B_{II}^{(s,v)}(\varepsilon, z_2) - A_{II}^{(s,v)}(\varepsilon, z_2) B_{II}^{(s,v)}(\varepsilon, z_1)}{ \big[ C_{II}^{(s,v)}(\varepsilon)\big]^2 } \varepsilon \right| \lesssim \varepsilon^{ 2s -2m +3}. $$ Since $2s - 2m + 3 > 0$, the lemma is proved.
\end{proof}
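As an aside for the reader, the Macdonald connection formula used at the start of the proof admits a quick numerical sanity check (a standalone verification sketch using SciPy's built-in Bessel functions; it is not part of the argument, and the sample orders and arguments are arbitrary choices):

```python
import numpy as np
from scipy.special import iv, kv

# Macdonald's connection formula K_s(z) = (pi/2) (I_{-s}(z) - I_s(z)) / sin(s pi),
# valid for non-integer order s.
def K_via_I(s, z):
    return 0.5 * np.pi * (iv(-s, z) - iv(s, z)) / np.sin(s * np.pi)

# Compare against SciPy's direct evaluation of K_s for several non-integer orders.
for s in (-0.3, 0.7, 1.4):
    for z in (0.5, 1.0, 2.5):
        assert abs(K_via_I(s, z) - kv(s, z)) < 1e-9
```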
{\flushleft \bf Remark.} Let us consider the case where $-1 < s < 0$ and $m = 1$. Let us denote \begin{align} \label{I-formula} \begin{split} & \mathscr{I}^{(s,v)} (\kappa, z_1, z_2) \\ & = \frac{ \left| \begin{array}{cc} K_s(\kappa \sqrt{v}) & \sqrt{v} K_s'(\kappa \sqrt{v}) \\ J_s(\kappa \sqrt{z_1}) & \sqrt{z_1} J_s'(\kappa \sqrt{z_1}) \end{array}\right| \left| \begin{array}{cc} K_s(\kappa \sqrt{v}) & \sqrt{v} K_s'(\kappa \sqrt{v}) \\ J_s(\kappa \sqrt{z_2}) & \sqrt{z_2} J_s'(\kappa \sqrt{z_2}) \end{array}\right| }{ 2 \sqrt{(v + z_1) (v + z_2)} \cdot \big[ K_s(\kappa \sqrt{v})\big]^2} \cdot \kappa . \end{split} \end{align} For any $\varepsilon$, we divide the following sum into two parts: \begin{align*} & \Pi_n^{(n)} (z_1, z_2) = \frac{1}{2n^2} \sum_{j = 0}^{n-1} \frac{ q_j(x_1^{(n)}) q_j(x_2^{(n)})}{h_j^{(s, r^{(n)} ) }} \sqrt{w_s^{(r^{(n)})} (x_1^{(n)}) w_s^{(r^{(n)})} (x_2^{(n)})} \\ & = \underbrace{ \frac{1}{2n^2} \sum_{j = 0}^{\lfloor n \varepsilon \rfloor - 1} \cdots}_{: = \mathscr{S}_n^{(1)} (\varepsilon, z_1, z_2) } + \underbrace{\frac{1}{2n^2} \sum_{j = \lfloor n \varepsilon \rfloor }^{n-1} \cdots}_{: = \mathscr{S}_n^{(2)} (\varepsilon, z_1, z_2) } .\end{align*} From the previous propositions and Theorem \ref{thm-case2-1}, we know that the following limits all exist $$\lim_{n \to \infty} \mathscr{S}_n^{(1)} (\varepsilon, z_1, z_2), \quad \lim_{n \to \infty} \mathscr{S}_n^{(2)} (\varepsilon, z_1, z_2), \quad \lim_{n \to \infty} \Pi_n^{(n)} (z_1, z_2) . $$ By denoting $$ \mathscr{S}^{(1)}_\infty (\varepsilon, z_1, z_2) = \lim_{n \to \infty} \mathscr{S}_n^{(1)} (\varepsilon, z_1, z_2) , \mathscr{S}^{(2)}_\infty (\varepsilon, z_1, z_2) = \lim_{n \to \infty} \mathscr{S}_n^{(2)} (\varepsilon, z_1, z_2) ,$$ we have for any $\varepsilon > 0$, \begin{align}\label{decompose} \Pi_\infty^{(s, v)} (z_1, z_2) = \mathscr{S}^{(1)}_\infty (\varepsilon, z_1, z_2) + \mathscr{S}^{(2)}_\infty (\varepsilon, z_1, z_2 ) . 
\end{align} If $z_1 = z_2$, then every term is positive, hence \begin{align*} & \Pi_\infty^{(s, v)} (z_1, z_1)\ge \mathscr{S}^{(2)}_\infty (\varepsilon, z_1, z_1) = \int_\varepsilon^{1} \mathscr{I}^{(s, v)} (\kappa, z_1, z_1) d\kappa. \end{align*} By Cauchy-Schwarz inequality, we can show that $$ | \mathscr{I}^{(s, v)} (\kappa, z_1, z_2)|^2 \le \mathscr{I}^{(s, v)} (\kappa, z_1, z_1) \cdot \mathscr{I}^{(s, v)} (\kappa, z_2, z_2). $$ Again by Cauchy-Schwarz inequality, we see that $\kappa \to \mathscr{I}^{(s, v)} (\kappa, z_1, z_2)$ is integrable on $(0, 1)$. Combining this fact with \eqref{decompose}, we see that the limit $ \lim_{\varepsilon \to 0+} \mathscr{S}^{(1)}_\infty (\varepsilon, z_1, z_2) $ always exists. Let us denote this limit by $ \mathscr{S}_\infty^{(1)} (0, z_1, z_2)$.
Now we show that $\mathscr{S}_\infty^{(1)} (0, z_1, z_2) $ is not identically zero. Let $z_1 = z_2$, then for any $\varepsilon > 0$, we have \begin{align*} & \mathscr{S}^{(1)}_n (\varepsilon, z_1, z_1) \ge \frac{1}{2n^2} \frac{\big\{ q_0(x_1^{(n)}) \big\} ^2}{h_0^{(s, r^{(n)} ) }} w_s^{(r^{(n)})} (x_1^{(n)}) \\ = & \frac{1}{2n^2} \frac{1}{\int_{-1}^{1} w_s^{(r^{(n)})} (t) dt } \frac{ ( 1 - x_1^{(n)} )^s }{ 1 + r^{(n)} - x_1^{(n)}} \\ = & \frac{ z_1^s}{v + z_1 } \frac{1}{ (2n^2)^s \int_{-1}^1 w_s^{(r^{(n)}) } (x) dx }. \end{align*} We have \begin{align*} & (2n^2)^s \int_{-1}^1 w_s^{(r^{(n)}) } (x) dx = \int_0^{4n^2} \frac{t^s}{ v + t} d t \\ & \xrightarrow{n \to \infty} \int_0^\infty \frac{t^s}{ v + t} d t = v^s \Gamma(-s) \Gamma(s + 1). \end{align*} Hence $$ \mathscr{S}_\infty^{(1)}(0, z_1, z_1) \ge \frac{1}{ v^s \Gamma(-s) \Gamma(s +1) }\cdot \frac{z_1^s}{ v + z_1} \ne 0. $$
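The closed-form evaluation $\int_0^\infty \frac{t^s}{v+t}\, dt = v^s \Gamma(-s)\Gamma(s+1)$ used in the limit above can be confirmed numerically (an illustrative sketch with SciPy; the particular values of $s$ and $v$ are arbitrary choices with $-1 < s < 0$ and $v > 0$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

s, v = -0.5, 2.0  # arbitrary choices with -1 < s < 0 and v > 0

f = lambda t: t**s / (v + t)
# Split at t = 1: integrable singularity at 0, slowly decaying tail at infinity.
numeric = quad(f, 0.0, 1.0)[0] + quad(f, 1.0, np.inf)[0]
closed_form = v**s * gamma(-s) * gamma(s + 1.0)

assert abs(numeric - closed_form) < 1e-7
```

For $s = -1/2$, $v = 2$ the common value is $\pi/\sqrt{2}$, which gives an independent cross-check.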
\begin{defn} For $-1< s < 0$, define a positive function on $\R^*_{+}$: $$\mathscr{N}^{(s, v)} (z) : = \frac{1}{ (v^s \Gamma(-s) \Gamma(s +1) )^{1/2}}\sqrt{\frac{z^s}{ v + z}}. $$
\end{defn}
\begin{prop}\label{one-rank-per}
For $m = 1$ and $ -1 < s < 0$, we have $$\Pi_\infty^{(s, v)} (z_1, z_2) = \mathscr{N}^{(s,v)}(z_1) \mathscr{N}^{(s,v)}(z_2) + \int_0^1 \mathscr{I}^{(s, v)} (\kappa, z_1, z_2) d \kappa .$$\end{prop}
\begin{proof}
By \eqref{decompose}, it suffices to show that $$\mathscr{S}_\infty^{(1)} (0, z_1, z_2) = \mathscr{N}^{(s,v)}(z_1) \mathscr{N}^{(s,v)}(z_2). $$ By similar arguments in the proof of Theorem \ref{thm-case2-1}, $ \mathscr{S}_\infty^{(1)} (\varepsilon, z_1, z_2) $ is given by the formula \begin{align}\label{partial-sum} \mathscr{S}_\infty^{(1)} (\varepsilon, z_1, z_2) = \varepsilon \cdot \frac{A_{II}^{(s,v)} (\varepsilon, z_1) B_{II}^{(s,v)} (\varepsilon, z_2) - A_{II}^{(s,v)} (\varepsilon, z_2) B_{II}^{(s,v)} (\varepsilon, z_1) }{ 2\sqrt{(v + z_1) (v+z_2)} \big[C_{II}^{(s, v)} (\varepsilon) \big]^2 (z_1- z_2) } . \end{align} For $m = 1$, we have $$A_{II}^{(s,v)} (\varepsilon, z_i) = \left |\begin{array}{cc} K_s(\varepsilon \sqrt{v}) & \sqrt{v} K_s'( \varepsilon \sqrt{v}) \vspace{3mm} \\ J_s(\varepsilon \sqrt{z_i}) & \sqrt{z_i} J_s' ( \varepsilon \sqrt{z_i}) \end{array} \right|,$$ and $B_{II}^{(s, v)} (\varepsilon, z_i) = \frac{\partial}{\partial \varepsilon} A_{II}^{(s,v)} (\varepsilon, z_i)$, $C_{II}^{(s,v)}(\varepsilon) = K_s(\varepsilon \sqrt{v})$. By the differential formula $(\frac{f}{g})' = \frac{f' g - fg' }{g^2} = \frac{1}{g^2} \left| \begin{array}{cc} g & g' \\ f & f' \end{array} \right|$, we have \begin{align*} A_{II}^{(s,v)}(\varepsilon, z_i ) = [K_s(\varepsilon \sqrt{v})]^2 \frac{\partial }{ \partial \varepsilon } \left( \frac{J_s(\varepsilon \sqrt{z_i} )}{ K_s(\varepsilon \sqrt{v}) }\right), \end{align*} and \begin{align*} & A_{II}^{(s,v)} (\varepsilon, z_1) B_{II}^{(s,v)} (\varepsilon, z_2) - A_{II}^{(s,v)} (\varepsilon, z_2) B_{II}^{(s,v)} (\varepsilon, z_1) \\ = & [A_{II}^{(s,v)} (\varepsilon, z_1)]^2 \frac{\partial }{\partial \varepsilon} \left(\frac{A_{II}^{(s,v)}(\varepsilon, z_2)}{A_{II}^{(s,v)} (\varepsilon, z_1) }\right). 
\end{align*} Hence \begin{align}\label{iterate-diff} \begin{split} & \frac{A_{II}^{(s,v)} (\varepsilon, z_1) B_{II}^{(s,v)} (\varepsilon, z_2) - A_{II}^{(s,v)} (\varepsilon, z_2) B_{II}^{(s,v)} (\varepsilon, z_1) }{ \big[K_s (\varepsilon \sqrt{v}) \big]^2 } \\ & = \left(K_s(\varepsilon \sqrt{v}) \right)^2 \left\{ \frac{\partial }{ \partial \varepsilon } \left( \frac{J_s(\varepsilon \sqrt{z_1} )}{ K_s(\varepsilon \sqrt{v}) }\right) \right\}^2 \frac{\partial }{ \partial \varepsilon } \left\{ \frac{ \frac{\partial }{ \partial \varepsilon } \left( \frac{J_s(\varepsilon \sqrt{z_2} )}{ K_s(\varepsilon \sqrt{v}) }\right)}{ \frac{\partial }{ \partial \varepsilon } \left( \frac{J_s(\varepsilon \sqrt{z_1} )}{ K_s(\varepsilon \sqrt{v}) }\right) } \right\}. \end{split} \end{align} As $\varepsilon\to 0+$, we have \begin{align} \label{iterate1}\left(K_s(\varepsilon \sqrt{v}) \right)^2 \sim \left(\frac{\pi}{2 \sin (s\pi) } \frac{ (\frac{\sqrt{v}}{2})^s }{\Gamma(s + 1)}\right)^2 \varepsilon^{2s}, \end{align} \begin{align} \label{iterate2} \left\{ \frac{\partial }{ \partial \varepsilon } \left( \frac{J_s(\varepsilon \sqrt{z_1} )}{ K_s(\varepsilon \sqrt{v}) }\right) \right\}^2 \sim \left(\frac{2 \Gamma(s+1) (\frac{\sqrt{z_1}}{2})^s}{\frac{\pi}{ 2 \sin (s \pi) } \Gamma(-s) (\frac{\sqrt{v}}{2})^{3s}}\right)^2 \varepsilon^{-4s -2} , \end{align} \begin{align} \label{iterate3} \frac{\partial}{\partial \varepsilon} \left\{ \frac{ \frac{\partial }{ \partial \varepsilon } \left( \frac{J_s(\varepsilon \sqrt{z_2} )}{ K_s(\varepsilon \sqrt{v}) }\right)}{ \frac{\partial }{ \partial \varepsilon } \left( \frac{J_s(\varepsilon \sqrt{z_1} )}{ K_s(\varepsilon \sqrt{v}) }\right) } \right\} \sim \left(\frac{\sqrt{z_2}}{\sqrt{z_1}}\right)^{s} \frac{\Gamma(-s)}{\Gamma(s+1)} \left(\frac{\sqrt{v}}{2}\right)^{2s} \frac{z_1- z_2}{2} \varepsilon^{2s+1}. \end{align}
For example, let us check the asymptotic formula \eqref{iterate3}. We have $$ \frac{J_s(\varepsilon \sqrt{z_i} )}{ K_s(\varepsilon \sqrt{v}) } = \frac{2 \sin (s\pi) }{\pi} \left(\frac{\sqrt{z_i}}{2}\right)^s \frac{\mathscr{F}(\varepsilon^2 z_i)}{\mathscr{G}(\varepsilon^2) - \varepsilon^{-2s} \mathscr{H}(\varepsilon^2) }, $$ where $\mathscr{F}, \mathscr{G}, \mathscr{H}$ are entire functions given by $$\mathscr{F}(z)= \sum_{\nu=0}^\infty \frac{(-1)^{\nu} (\frac{z}{4})^\nu}{\nu ! \Gamma(\nu + s + 1)}, \quad \mathscr{G}(z)= \left(\frac{\sqrt{v}}{2}\right)^s\sum_{\nu=0}^\infty \frac{ (\frac{z}{4})^\nu}{\nu ! \Gamma(\nu + s + 1)} , $$ $$ \mathscr{H}(z)= \left(\frac{\sqrt{v}}{2}\right)^{-s}\sum_{\nu=0}^\infty \frac{ (\frac{z}{4})^\nu}{\nu ! \Gamma(\nu - s + 1)} . $$ It follows that \begin{align*} \frac{\partial}{\partial \varepsilon}\left\{ \frac{ \frac{\partial }{ \partial \varepsilon } \left( \frac{J_s(\varepsilon \sqrt{z_2} )}{ K_s(\varepsilon \sqrt{v}) }\right)}{ \frac{\partial }{ \partial \varepsilon } \left( \frac{J_s(\varepsilon \sqrt{z_1} )}{ K_s(\varepsilon \sqrt{v}) }\right) }\right\} = \left(\frac{\sqrt{z_2}}{\sqrt{z_1}}\right)^{s} 2\varepsilon \cdot \underbrace{ \frac{\partial }{\partial x } \left[ \frac{ \frac{\partial }{\partial x} \left( \frac{\mathscr{F}(xz_2)}{ \mathscr{G} (x) - x^{-s} \mathscr{H}(x)}\right) }{ \frac{\partial }{\partial x} \left( \frac{\mathscr{F}(xz_1)}{ \mathscr{G} (x) - x^{-s} \mathscr{H}(x)}\right) }\right](\varepsilon^2)}_{= : \, Q(\varepsilon^2)} . \end{align*} For $i = 1, 2$, let us denote \begin{align*} Q_i(x) = & z_i\mathscr{F}'(xz_i) \Big(x^{s+1}\mathscr{G}(x) - x \mathscr{H}(x)\Big) \\ & - \mathscr{F}(xz_i) \Big( x^{s+1} \mathscr{G}'(x) + s \mathscr{H}(x) - x \mathscr{H}'(x) \Big) , \end{align*} then $Q(x) = \frac{\partial}{ \partial x } \left[ \frac{ Q_2(x) }{Q_1(x) }\right].$ Note that $Q_1(0) = Q_2(0) = -s \mathscr{F}(0) \mathscr{H}(0)$. 
Now we obtain that, as $x \to 0 + $, \begin{align*} Q (x) \sim & \frac{(s+1) x^s}{Q_1(0)^2} \cdot \Big\{ Q_2(0) \big( z_2 \mathscr{F}'(0) \mathscr{G}(0) - \mathscr{F}(0) \mathscr{G}'(0)\big) \\ & - Q_1(0) \big( z_1 \mathscr{F}'(0) \mathscr{G}(0) - \mathscr{F}(0) \mathscr{G}'(0)\big) \Big\} \\ \sim & \frac{(s+1) x^s}{-s \mathscr{F}(0) \mathscr{H}(0)} \mathscr{F}'(0) \mathscr{G}(0) (z_2-z_1) \\ \sim & \frac{\Gamma(-s)}{\Gamma(s+1)} \left(\frac{\sqrt{v}}{2}\right)^{2s} \frac{z_1- z_2}{4} x^s.\end{align*} Combining the above asymptotics, we get \eqref{iterate3}.
Substituting \eqref{iterate1}, \eqref{iterate2} and \eqref{iterate3} into \eqref{iterate-diff}, we have \begin{align*} & \frac{A_{II}^{(s,v)} (\varepsilon, z_1) B_{II}^{(s,v)} (\varepsilon, z_2) - A_{II}^{(s,v)} (\varepsilon, z_2) B_{II}^{(s,v)} (\varepsilon, z_1) }{ \big[K_s (\varepsilon \sqrt{v}) \big]^2 } \\ \sim & \frac{2(z_1 -z_2)}{\Gamma(-s) \Gamma(s+1)} \left(\frac{\sqrt{z_1z_2}}{ v}\right)^s \varepsilon^{-1}, \text{ as } \varepsilon \to 0+. \end{align*} Finally, by \eqref{partial-sum}, we get the formula for $ \mathscr{S}_\infty^{(1)} (0, z_1, z_2) $: \begin{align*} \frac{1}{\Gamma(-s) \Gamma(s + 1)} \frac{1}{ \sqrt{(v+z_1) (v+z_2)}} \left(\frac{\sqrt{z_1z_2}}{v}\right)^s = \mathscr{N}^{(s,v)}(z_1) \mathscr{N}^{(s,v)}(z_2). \end{align*}
\end{proof}
For $\alpha > -1$, we denote by $\widetilde{J}^{(\alpha)} (x, y)$ the Bessel kernel, i.e., $$\widetilde{J}^{(\alpha)}(x, y) = \frac{J_\alpha(\sqrt{x}) \sqrt{y} J_\alpha'(\sqrt{y}) - J_\alpha(\sqrt{y}) \sqrt{x} J_\alpha'(\sqrt{x})}{2(x-y)} .$$ It is well-known (cf. e.g. \cite{Tracy-Widom94}) that the Bessel kernel has the following integral representation: $$\widetilde{J}^{(\alpha)} (x, y) =\frac{1}{4} \int_0^1 J_\alpha(\sqrt{t x}) J_\alpha(\sqrt{ty}) dt .$$
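The equality between the Christoffel--Darboux form of the Bessel kernel and its integral representation can be verified numerically (a standalone sketch using SciPy's `jv` and its derivative `jvp`; the test points are arbitrary):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv, jvp

def bessel_kernel(alpha, x, y):
    # Christoffel-Darboux form of the Bessel kernel J~^(alpha)(x, y).
    sx, sy = np.sqrt(x), np.sqrt(y)
    num = jv(alpha, sx) * sy * jvp(alpha, sy) - jv(alpha, sy) * sx * jvp(alpha, sx)
    return num / (2.0 * (x - y))

def bessel_kernel_int(alpha, x, y):
    # Integral representation: (1/4) * int_0^1 J_a(sqrt(t x)) J_a(sqrt(t y)) dt.
    g = lambda t: jv(alpha, np.sqrt(t * x)) * jv(alpha, np.sqrt(t * y))
    return 0.25 * quad(g, 0.0, 1.0)[0]

# The two expressions agree for several orders alpha > -1 and x != y.
for alpha in (0.5, 1.0, 2.3):
    assert abs(bessel_kernel(alpha, 1.0, 2.0) - bessel_kernel_int(alpha, 1.0, 2.0)) < 1e-8
```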
\begin{prop}
Let $m = 1$ and $ - 1 < s < 0$. Then $$\lim_{ v \to 0+} \Pi_\infty^{(s,v)} (z_1, z_2) =\widetilde{J}^{(s +1)} (z_1, z_2).$$ Moreover, the convergence is uniform as long as $z_1, z_2$ range over compact subsets of $(0, \infty)$.
\end{prop}
\begin{proof}
Fix $-1< s < 0$. It is easy to see that $$\lim_{v \to 0+} \mathscr{N}^{(s,v)} (z) = 0, $$ and the convergence is uniform for $ z$ in compact subsets of $ (0, \infty)$.
By \eqref{differential-relation-J} and \eqref{differential-relation-K}, we have $$ A_{II}^{(s,v)}(\kappa, z_i) = \left |\begin{array}{cc} K_s(\kappa \sqrt{v}) & - \sqrt{v} K_{s+1}( \kappa \sqrt{v}) \vspace{3mm} \\ J_s(\kappa \sqrt{z_i}) & - \sqrt{z_i}J_{s+1} ( \kappa \sqrt{z_i}) \end{array} \right|. $$ Applying the asymptotics of $K_s$ and $K_{s +1}$ near $0$, we get \begin{align*} & \lim_{v \to 0+} \frac{A_{II}^{(s,v)} (\kappa, z_i)}{K_s(\kappa \sqrt{v})} = - \sqrt{z_i} J_{s + 1} ( \kappa \sqrt{z_i}).\end{align*} It follows that $$\lim_{v \to 0 + } \mathscr{I}^{(s,v)} (\kappa, z_1, z_2) = \frac{J_{s + 1}(\kappa \sqrt{z_1}) J_{ s + 1} ( \kappa \sqrt{z_2})}{2 } \cdot \kappa.$$ For any $0 < \varepsilon < 1$, the convergence is uniform as long as $\kappa\in [\varepsilon, 1]$ and $z_1, z_2$ range over compact subsets of $(0, \infty)$. Hence $$\lim_{v \to 0+ } \int_\varepsilon^1 \mathscr{I}^{(s,v)} (\kappa, z_1, z_2) d \kappa = \int_{\varepsilon}^1 \frac{J_{s + 1}(\kappa \sqrt{z_1}) J_{ s + 1} ( \kappa \sqrt{z_2})}{2} \cdot \kappa d \kappa. $$ As $\varepsilon \to 0$, the above term tends to $$\int_0^1 \frac{J_{s + 1}(\kappa \sqrt{z_1}) J_{ s + 1} ( \kappa \sqrt{z_2})}{2} \cdot \kappa d \kappa = \frac{1}{4 } \int_0^1 J_{s + 1}(\sqrt{t z_1}) J_{ s + 1} (\sqrt{tz_2} ) dt $$ uniformly for $z_1, z_2$ in compact subsets of $(0, \infty)$. It is easy to see that $$\sup_{0< v< R,\, r < |z_1|, |z_2| < R }\left| \int_0^\varepsilon \mathscr{I}^{(s,v)} (\kappa, z_1, z_2) d\kappa \right| \lesssim \varepsilon^{s +1}.$$ Hence $$\lim_{ v \to 0+} \Pi_\infty^{(s,v)} (z_1, z_2) =\widetilde{J}^{(s +1)} (z_1, z_2),$$ with uniform convergence as long as $z_1, z_2$ range over compact subsets of $(0, \infty)$.
\end{proof}
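The $v \to 0+$ limit of $A_{II}^{(s,v)}(\kappa, z_i)/K_s(\kappa\sqrt{v})$ computed in the proof above can also be checked numerically (a standalone sketch with SciPy; the values of $s$, $\kappa$, $z$ and the small $v$ are arbitrary choices, and the tolerance reflects the slow $v^{-s}$ rate of convergence):

```python
import numpy as np
from scipy.special import jv, kv

s, kappa, z = -0.3, 1.0, 2.0  # arbitrary choices with -1 < s < 0

def ratio(v):
    # A_II(kappa, z) / K_s(kappa sqrt(v)), with A_II written via K_{s+1} and J_{s+1}.
    a = (kv(s, kappa * np.sqrt(v)) * (-np.sqrt(z)) * jv(s + 1, kappa * np.sqrt(z))
         - (-np.sqrt(v)) * kv(s + 1, kappa * np.sqrt(v)) * jv(s, kappa * np.sqrt(z)))
    return a / kv(s, kappa * np.sqrt(v))

# For small v the ratio approaches -sqrt(z) J_{s+1}(kappa sqrt(z)).
limit = -np.sqrt(z) * jv(s + 1, kappa * np.sqrt(z))
assert abs(ratio(1e-12) - limit) < 1e-3
```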
\begin{rem}
When $m \ge 2$ and $ -1< s < m-1$, the situation is similar, but the formulae and the proofs become slightly more tedious.
\end{rem}
\subsection{Explicit Kernels for Scaling Limit: Case III} Let $s> -1$. We consider in this section a sequence of positive real numbers $\gamma^{(n)}$ and modify the Jacobi weights given by $$\widehat{w}_{s}^{(n)} (t) = \frac{w_{s}(t)}{( 1 + \gamma^{(n)} - t)^2} = \frac{(1 - t)^{s}}{(1 + \gamma^{(n)}- t)^2}. $$ The $n$-th Christoffel-Darboux kernel associated with $\widehat{w}_{ s}^{(n)}$ is denoted by $\Phi_n^{(n)} (x_1, x_2)$. We will investigate the scaling limit of $\Phi_n^{(n)} ( x_1^{(n)}, x_2^{(n)})$ in the regime: \begin{align}\label{case3} \begin{split} x_i^{(n)} = 1 - \frac{z_i}{2n^2} \, & \text{ with } z_i > 0 , i = 1, 2. \\ \gamma^{(n)} = \frac{u}{2n^2} \, & \text{ with } u > 0. \end{split} \end{align}
\subsubsection{Explicit formulae for orthogonal polynomials and Christoffel-Darboux kernels}
For $j \ge 2$, we set $$p_j^{(n)} (t): = \left|\begin{array}{ccc}Q_{j-2}^{(s)} (1 + \gamma^{(n)}) & Q_{j-1}^{(s)} (1 + \gamma^{(n)}) & Q_{j}^{(s)} (1 + \gamma^{(n)}) \vspace{3mm} \\R_{j-2}^{(s)} (1 + \gamma^{(n)}) & R_{j-1}^{(s)} (1 + \gamma^{(n)}) & R_{j}^{(s)} (1 + \gamma^{(n)}) \vspace{3mm}\\P_{j-2}^{(s)} (t) & P_{j-1}^{(s)} (t) & P_{j}^{(s)} (t)\end{array}\right|; $$ $$ e_{j}^{(n)} : = \left|\begin{array}{cc}Q_{j-2}^{(s)} (1 + \gamma^{(n)}) & Q_{j-1}^{(s)} (1 + \gamma^{(n)}) \vspace{3mm} \\R_{j-2}^{(s)} (1 + \gamma^{(n)}) & R_{j-1}^{(s)} (1 + \gamma^{(n)})\end{array}\right|.$$ The leading coefficient of $p_j^{(n)}$ is $\widehat{k}_j^{(n)} = e_j^{(n)} k_j^{(s)}.$
\begin{prop}
For $j \ge 2$, the polynomial $p_j^{(n)}$ is the $j$-th orthogonal polynomial with respect to the weight $\widehat{w}^{(n)}_s$ on $ [-1, 1]$.
\end{prop}
\begin{proof}
By the Uvarov formula, we know that for $ j \ge 1$, $$\widehat{p}_j^{(n)} (t) = \left| \begin{array}{cc} Q_{j-1}^{(s)} (1 + \gamma^{(n)} ) & Q_{j}^{(s)} (1 + \gamma^{(n)} ) \vspace{3mm} \\ P_{j-1}^{(s)} (t ) & P_{j}^{(s)} (t ) \end{array} \right|$$ is the $j$-th orthogonal polynomial with respect to the weight $\frac{w_s(t) }{ 1 + \gamma^{(n)} - t }$. Applying the Uvarov formula again, we know that for $j \ge 2$, \begin{align}\label{iterated-uvarov} \left| \begin{array}{cc} \mathscr{C}_{ j - 1}(1 + \gamma^{(n)}) & \mathscr{C}_j(1 + \gamma^{(n)}) \vspace{2mm} \\ \widehat{p}_{j-1}^{(n)} (t ) & \widehat{p}_j^{(n)} (t) \end{array}\right| \end{align} is the $j$-th orthogonal polynomial with respect to the weight $\frac{w_s(t) }{( 1 + \gamma^{(n)} - t)^2}$, where $$\mathscr{C}_j(x) =\int_{-1}^1 \frac{\widehat{p}_j^{(n)} (t) }{x - t} \cdot \frac{w_s(t)}{ 1 + \gamma^{(n)} - t}dt . $$ We can easily verify that the polynomial \eqref{iterated-uvarov} is a multiple of $p_j^{(n)}$. \end{proof}
\begin{defn}
For $j \ge 2 $, denote $$ \widehat{h}_{ j}^{(n) } = \int_{-1}^{1} \big[ p_j^{(n) } (t) \big]^2 \frac{(1 - t)^{s}}{(1 + \gamma^{(n)} - t)^2} dt. $$
\end{defn}
\begin{prop}
For $ j \ge 2$, we have the identity \begin{eqnarray*} \widehat{h}_{j}^{(n) } = \frac{e_{ j}^{(n)} e_{ j + 1}^{(n)} k_{j }^{(s)} h_{j-2}^{(s )}}{k_{j-2}^{(s)}} .\end{eqnarray*}
\end{prop}
\begin{proof}
By the orthogonality property, we have \begin{align*} \widehat{h}_j^{(n) } & = \int_{-1}^1 p_j^{(n)}(t) e_{ j}^{(n)} P_{j }^{(s)} (t) \frac{(1 - t)^s}{(1 + \gamma^{(n)} - t)^2} dt \\ & = e_{ j}^{(n)} k_{j }^{(s)} \int_{-1}^1 p_j^{(n)}(t) \cdot t^j \cdot \frac{(1 - t)^{s}}{(1 + \gamma^{(n)} - t)^2} dt \\ & = e_{j}^{(n)} k_{j }^{(s)} \int_{-1}^1 p_j^{(n)}(t) \cdot t^{j-2} \cdot ( 1 + \gamma^{(n)} - t)^2 \cdot \frac{(1 - t)^{s}}{(1 + \gamma^{(n)} - t)^2} dt \\ & = e_{ j}^{(n)} k_{j }^{(s)} \int_{-1}^1 p_j^{(n)}(t) \cdot t^{j-2} \cdot w_{s} (t) dt \\ & = e_{j}^{(n)} k_{j }^{(s )} \int_{-1}^1 e_{j + 1}^{(n)} P_{j-2}^{(s)}(t) \cdot t^{j-2} \cdot w_{s } (t) dt \\ & = \frac{e_{j}^{(n)} e_{j + 1}^{(n)} k_{j }^{(s )} }{k_{j-2}^{(s )}} \int_{-1}^1 \Big[ P_{j-2}^{(s )}(t) \Big]^2 w_{s } (t) dt \\ & = \frac{e_{j}^{(n)} e_{ j + 1}^{(n)} k_{j }^{(s)} }{k_{j-2}^{(s)}} h_{j-2}^{(s )}. \end{align*}
\end{proof}
The Christoffel-Darboux kernels $\Phi_n^{(n)}$ in the $(x_1^{(n)}, x_2^{(n)})$-coordinates are given by \begin{align*} & \Phi_n^{(n)} ( x_1^{(n)}, x_2^{(n)}) = \sqrt{ \widehat{w}_{s}^{(n)} (x_1^{(n)}) \widehat{w}_{s}^{(n)} (x_2^{(n)}) } \sum_{\ell=0}^{n-1} \frac{p_\ell^{(n)} (x_1^{(n)}) p_\ell^{(n)} (x_2^{(n)}) }{\widehat{h}_\ell^{(n)}} \\ = & \frac{ \sqrt{ \widehat{w}_{s}^{(n)} (x_1^{(n)}) \widehat{w}_{s}^{(n)} (x_2^{(n)}) } }{ \frac{\widehat{h}_{n-1}^{(n)} \widehat{k}_n^{(n)}}{ \widehat{k}_{n - 1}^{(n)}} } \cdot \frac{p_n^{(n)} (x_1^{(n)}) p_{n-1}^{(n)} (x_2^{(n)}) - p_n^{(n)} (x_2^{(n)}) p_{n - 1}^{(n)} (x_1^{(n)}) }{ x_1^{(n)} - x_2^{(n)} }. \end{align*} These kernels in the $(z_1, z_2)$-coordinates are given by $$ \widetilde{\Phi}_n^{(n)}( z_1, z_2) = \frac{1}{2n^2}\Phi_n^{(n)} \Big( 1 - \frac{z_1}{2n^2}, 1 - \frac{z_2}{2n^2}\Big).$$
\subsubsection{Scaling limits}
\begin{prop}\label{matrix-AC-3}
In the regime \eqref{case3}, we have $$\lim_{n \to \infty} n^{-1-2s} e_{\kappa_n}^{(n)} = 2^{2s+\frac{3}{2}} u^{-1-s} C_{ III}^{(s ,u )}(\kappa); $$ $$\lim_{n \to\infty} n^{1-3s} p_{\kappa_n}^{(n )} ( x_i^{(n)}) = 2^{3s+\frac{3}{2}} u^{-1-s} z_i^{-\frac{s}{2}} A_{ III}^{(s , u)}(\kappa, z_i),$$ where $$C_{ III}^{(s , u)}(\kappa) = \left| \begin{array}{cc}K_{s} (\kappa \sqrt{u}) & u^{\frac{1}{2}}K_{s}'(\kappa \sqrt{u}) \vspace{3mm} \\ L_{s}(\kappa \sqrt{u}) & u^{\frac{1}{2}} L_{s}'(\kappa \sqrt{u})\end{array}\right| $$ and $$ A_{ III}^{(s, u)}(\kappa, z) = \left| \begin{array}{ccc}K_{s}(\kappa \sqrt{u}) & u^{\frac{1}{2}}K_{s}'(\kappa \sqrt{u}) & u K_{s}''(\kappa \sqrt{u}) \vspace{3mm} \\ L_{s}(\kappa \sqrt{u}) & u^{\frac{1}{2}} L_{s}'(\kappa \sqrt{u}) & u L_{s}''(\kappa \sqrt{u}) \vspace{3mm} \\J_{s}(\kappa \sqrt{z}) & z^{\frac{1}{2}}J_{s}'(\kappa \sqrt{z}) & z J_{s}''(\kappa \sqrt{z})\end{array}\right|. $$ Moreover, for any $\varepsilon > 0$, the convergences are uniform as long as $ \kappa \in [\varepsilon, 1 ]$ and each $z_i$ ranges over a compact simply connected subset of $\C\setminus \{0\}$.
\end{prop}
\begin{proof}
The proof is similar to that of Proposition \ref{C}.
\end{proof}
\begin{prop}
In the regime \eqref{case3}, we have $$\lim_{n \to \infty} n^{2-3s} \Big\{p_{\kappa_n}^{(n)} (x_i^{(n)}) - p_{\kappa_n-1}^{(n)} (x_i^{(n)}) \Big\} = 2^{3s + \frac{3}{2}} u^{-1-s} z_i^{-\frac{s}{2}} B_{ III}^{(s, u)} (\kappa, z_i) $$ and \begin{align*} & \lim_{n \to \infty} n^{3- 6s } \Big\{ p_{\kappa_n}^{(n)} (x_1^{(n)}) p_{\kappa_n-1}^{(n)} (x_2^{(n)}) - p_{\kappa_n}^{(n)} (x_2^{(n)}) p_{\kappa_n - 1}^{(n)} (x_1^{(n)}) \Big\} \\ = & 2^{6s + 3} u^{-2 - 2s} (z_1z_2)^{-\frac{ s }{2}} \Big\{ B_{ III}^{(s, u)} ( \kappa, z_1) A_{ III}^{(s , u )} ( \kappa, z_2) - B_{ III}^{(s, u )} ( \kappa, z_2) A_{ III}^{(s, u )} ( \kappa, z_1) \Big\} , \end{align*} where $B_{ III}^{(s, u )} (\kappa , z) = \frac{\partial}{\partial \kappa} A_{ III}^{(s, u)}(\kappa, z)$, i.e., $$ B_{ III}^{(s, u )} (\kappa , z) = \left | \begin{array}{ccc}K_{s}(\kappa \sqrt{u}) & u^{\frac{1}{2}}K_{s}'( \kappa \sqrt{u}) & u^{\frac{3}{2} } K_{s}^{(3)}(\kappa \sqrt{u}) \vspace{3mm} \\ L_{s}( \kappa \sqrt{u}) & u^{\frac{1}{2}} L_{s}'( \kappa \sqrt{u}) & u^{\frac{3}{2}} L_{s}^{(3)}(\kappa \sqrt{u}) \vspace{3mm} \\J_{s} ( \kappa \sqrt{z}) & z^{\frac{1}{2}}J_{s}'( \kappa \sqrt{z}) & z^{\frac{3}{2}} J_{s}^{(3)} (\kappa \sqrt{z})\end{array}\right | .$$ Moreover, for any $\varepsilon > 0$, the convergences are uniform as long as $ \kappa \in [\varepsilon, 1 ]$ and each $z_i$ ranges over a compact simply connected subset of $\C\setminus \{0\}$.
\end{prop}
\begin{proof}
The proof is similar to that of Proposition \ref{B}.
\end{proof}
Now we obtain the following theorem.
\begin{thm}
In the regime \eqref{case3}, we obtain the scaling limit \begin{align*} & \Phi_\infty^{(s, u)} (z_1, z_2) : = \lim_{n \to \infty} \widetilde{\Phi}_n^{(n)} ( z_1, z_2) \\ = & \frac{ A_{ III}^{(s, u)} ( 1, z_1) B_{ III}^{(s , u )} ( 1, z_2) - A_{ III}^{(s, u )} ( 1, z_2) B_{ III}^{(s, u )} ( 1, z_1) }{2 ( u + z_1)(u+z_2) \cdot \big[ C_{III}^{(s,u)} (1) \big]^2 \cdot (z_1- z_2) } . \end{align*}
\end{thm}
To investigate the integral form of the scaling limit $\Phi_\infty^{(s,u)}$, let us put $$p_0^{(n)} (t) \equiv 1 \text{ and } \, p_1^{(n)} (t) =1- t - \frac{1}{\widehat{h}_0^{(n)}} \int_{-1}^1(1 -\tau) \widehat{w}_s^{(n)}(\tau) d \tau .$$ The contribution of $p_0^{(n)}$ to the kernel is \begin{align*} & \frac{\sqrt{\widehat{w}_s^{(n)}(x_1^{(n)}) \widehat{w}_s^{(n)}(x_2^{(n)}) }}{2n^2} \cdot \frac{p_0^{(n)} (x_1^{(n)}) p_0^{(n)} (x_2^{(n)}) }{\widehat{h}_0^{(n)}} = \frac{ (z_1z_2)^{\frac{s}{2}}}{(z_1 + u)(z_2+ u)} \cdot \frac{(2n^2)^{1-s}}{\int_{-1}^1 \frac{ (1-t)^s }{ ( 1 + \frac{u}{2n^2} - t)^2} dt }.\end{align*} We note that for $ -1< s < 1$, we have \begin{align*} & \frac{\int_{-1}^1 \frac{ (1-t)^s }{ ( 1 + \frac{u}{2n^2} - t)^2} dt }{ (2n^2)^{1-s}} = \int_0^{4n^2} \frac{t^s}{ (t + u)^2} d t\\ \xrightarrow{n \to \infty} & \int_{0}^{\infty} \frac{t^s}{(t + u)^2} dt = u^{s-1} B(1 + s, 1-s) = u^{s-1} \Gamma(1 + s) \Gamma(1-s).\end{align*} The contribution of $p_1^{(n)}$ to the kernel is \begin{align*} & \frac{\sqrt{\widehat{w}_s^{(n)}(x_1^{(n)}) \widehat{w}_s^{(n)}(x_2^{(n)}) }}{2n^2} \cdot \frac{p_1^{(n)} (x_1^{(n)}) p_1^{(n)} (x_2^{(n)}) }{\widehat{h}_1^{(n)}} \\ = & \frac{ (z_1z_2)^{\frac{s}{2}}}{(z_1 + u)(z_2+ u)} \cdot \frac{(2n^2)^{1-s} p_1^{(n)} (x_1^{(n)}) p_1^{(n)} (x_2^{(n)}) }{\widehat{h}_1^{(n)}}. \end{align*} For $ - 1 < s < 0$, we have \begin{align*} \frac{1}{\widehat{h}_0^{(n)}} \int_{-1}^1(1 -t) \widehat{w}_s^{(n)}(t) d t = \frac{ \int_{0}^{4n^2} \frac{ t^{s + 1}}{ ( t + u)^2} dt}{2n^2 \int_{0}^{4n^2} \frac{ t^{s }}{ ( t + u)^2} dt }. \end{align*} Hence \begin{align*} p_1^{(n)} ( x_i^{(n)} ) = \frac{1}{2n^2} \left( z_i - \frac{ \int_{0}^{4n^2} \frac{ t^{s + 1}}{ ( t + u)^2} dt}{ \int_{0}^{4n^2} \frac{ t^{s }}{ ( t + u)^2} dt } \right),\end{align*}
and \begin{align*} \widehat{h}_1^{(n)} = (2n^2)^{-1-s} \int_{0}^{4n^2} \left( y - \frac{ \int_{0}^{4n^2} \frac{ t^{s + 1}}{ ( t+ u)^2} dt}{ \int_{0}^{4n^2} \frac{ t^{s }}{ ( t + u)^2} dt }\right)^2 \frac{y^s}{(y+u)^2} \, dy .\end{align*} It follows that \begin{align*} & \lim_{n \to \infty} \frac{(2n^2)^{1-s} p_1^{(n)} (x_1^{(n)}) p_1^{(n)} (x_2^{(n)}) }{\widehat{h}_1^{(n)}} = \frac{ \left( z_1- \frac{ \int_{0}^{\infty} \frac{ t^{s + 1}}{ ( t + u)^2} dt}{ \int_{0}^{\infty} \frac{ t^{s }}{ ( t + u)^2} dt } \right) \left( z_2 - \frac{ \int_{0}^{\infty} \frac{ t^{s + 1}}{ ( t + u)^2} dt}{ \int_{0}^{\infty} \frac{ t^{s }}{ ( t + u)^2} dt } \right)}{ \int_{0}^{\infty} \left( y - \frac{ \int_{0}^{\infty} \frac{ t^{s + 1}}{ ( t+ u)^2} dt}{ \int_{0}^{\infty} \frac{ t^{s }}{ ( t + u)^2} dt }\right)^2 \frac{y^s}{(y + u)^2} dy } \\ & = \frac{ \left( z_1 + \frac{1+s}{s} u \right) \left( z_2 + \frac{1+s}{s} u \right)}{ \int_{0}^{\infty} \left( y + \frac{1+s}{s} u \right)^2 \frac{y^s}{(y + u)^2} dy } .\end{align*}
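The two beta-integral evaluations entering the last display, namely $\int_0^\infty \frac{t^s}{(t+u)^2}\,dt = u^{s-1}\Gamma(1+s)\Gamma(1-s)$ and the mean value $\big(\int_0^\infty \frac{t^{s+1}}{(t+u)^2}dt\big)\big/\big(\int_0^\infty \frac{t^{s}}{(t+u)^2}dt\big) = -\frac{1+s}{s}u$, can be confirmed numerically (a standalone sketch with SciPy; the values of $s$ and $u$ are arbitrary choices with $-1 < s < 0$ and $u > 0$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

s, u = -0.4, 1.5  # arbitrary choices with -1 < s < 0 and u > 0

f0 = lambda t: t**s / (t + u)**2
f1 = lambda t: t**(s + 1.0) / (t + u)**2

# Split at t = 1 to separate the singularity at 0 from the tail at infinity.
I0 = quad(f0, 0.0, 1.0)[0] + quad(f0, 1.0, np.inf)[0]
I1 = quad(f1, 0.0, 1.0)[0] + quad(f1, 1.0, np.inf)[0]

# Normalisation of the p_0 contribution.
assert abs(I0 - u**(s - 1.0) * gamma(1.0 + s) * gamma(1.0 - s)) < 1e-7

# Mean appearing in the limit of p_1: I1/I0 = -(1+s) u / s.
assert abs(I1 / I0 - (-(1.0 + s) * u / s)) < 1e-6
```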
\begin{defn}
For $ -1< s < 1$, define a positive function on $\R^{*}_{+}$ : $$\mathscr{M}^{(s,u)}_0 (z) : = \frac{1}{\sqrt{u^{s-1} \Gamma( 1 + s) \Gamma( 1 - s ) }} \cdot \frac{z^{\frac{s}{2}}}{z + u }.$$ For $ -1 < s< 0$, define $$ \mathscr{M}_1^{(s,u) } (z) : = \frac{ 1 }{ \left[ \int_{0}^{\infty} \left( y + \frac{1+s}{s} u \right)^2 \frac{y^s}{(y + u)^2} dy \right]^{1/2}} \cdot \left(z + \frac{1+s}{s} u \right) \cdot\frac{z^{\frac{s}{2}}}{ z + u }. $$ We extend the definition of $ \mathscr{M}_1^{(s,u) }$ to $ 0 \le s < 1$ by setting $ \mathscr{M}_1^{(s,u) } \equiv 0$.
\end{defn}
The detailed proof of the following proposition is long but routine, similar to that of Proposition \ref{one-rank-per}, so we omit it here.
\begin{prop}
For $ -1< s < 1$, we have the following representation of $\Phi_\infty^{(s, u)} ( z_1, z_2)$: \begin{align*} \Phi_\infty^{(s, u)} (z_1, z_2) = & \mathscr{M}_0^{(s,u)} (z_1) \mathscr{M}_0^{(s,u)} (z_2) + \mathscr{M}_1^{(s,u)} (z_1) \mathscr{M}_1^{(s,u)} (z_2) \\ & + \frac{1}{2 ( z_1 + u )( z_2 + u )}\int_0^1 \frac{A_{ III}^{(s,u)} (\kappa, z_1) A_{ III}^{(s,u)} (\kappa, z_2) }{ \big[ C_{ III}^{(s,u) } (\kappa) \big]^2} \kappa d \kappa. \end{align*}
\end{prop}
\def\cprime{$'$} | 0.002762 |
\section{Introduction}
Deep neural networks have achieved great success in many applications like image processing \citep{krizhevsky2012imagenet}, speech recognition \citep{hinton2012deep} and Go games \citep{silver2016mastering}. However, the reason why deep networks work well in these fields has remained a mystery for a long time. Different lines of research try to understand the mechanism of deep neural networks from different aspects. For example, a series of works tries to understand how the expressive power of deep neural networks is related to their architecture, including the width of each layer and the depth of the network \citep{telgarsky2015representation, telgarsky2016benefits, lu2017expressive, liang2016deep, yarotsky2017error, yarotsky2018optimal, hanin2017universal, hanin2017approximating}.
These works show that multi-layer networks with wide layers can approximate arbitrary continuous functions.
In this paper, we mainly focus on the optimization perspective of deep neural networks. It is well known that without any additional assumption, even training a shallow neural network is an NP-hard problem \citep{blum1989training}. Researchers have made various assumptions to get a better theoretical understanding of training neural networks, such as the Gaussian input assumption \citep{brutzkus2017sgd, du2017convolutional, zhong2017learning} and the independent activation assumption \citep{choromanska2015loss, kawaguchi2016deep}. A recent line of work tries to understand the optimization process of training deep neural networks from two aspects: over-parameterization and random weight initialization. It has been observed that over-parameterization and proper random initialization can help the optimization in training neural networks, and various theoretical results have been established \citep{safran2017spurious, du2018power, arora2018convergence, allen2018rnn, du2018gradient, li2018learning}. More specifically, \citet{safran2017spurious} showed that over-parameterization can help reduce the spurious local minima in one-hidden-layer neural networks with Rectified Linear Unit (ReLU) activation function. \citet{du2018power} showed that with over-parameterization, all local minima in one-hidden-layer networks with quadratic activation function are global minima. \citet{arora2018optimization} showed that over-parameterization introduced by depth can accelerate the training process using gradient descent (GD). \citet{allen2018rnn} showed that with over-parameterization and random weight initialization, both gradient descent and stochastic gradient descent (SGD) can find the global minima of recurrent neural networks.
The works most related to ours are \citet{li2018learning} and \citet{du2018gradient}. \citet{li2018learning} showed that for a one-hidden-layer network with ReLU activation function using over-parameterization and random initialization, GD and SGD can find near-global-optimal solutions in polynomial time with respect to the accuracy parameter $\epsilon$, the training sample size $n$ and the data separation parameter $\delta$\footnote{More precisely, \citet{li2018learning} assumed that each data point is generated from distributions $\{\cD_{l}\}$, and $\delta$ is defined as $\delta : = \min_{i,j \in [l]}\{\text{dist}(\text{supp}(\cD_i),\text{supp}(\cD_j) )\}$.}. \citet{du2018gradient} showed that under the assumption that the population Gram matrix is not degenerate\footnote{More precisely, \citet{du2018gradient} assumed that the minimal singular value of $\Hb^\infty$ is greater than a constant, where $\Hb^\infty$ is defined as $\Hb^\infty_{i,j}: = \EE_{\wb \sim N(0,\Ib)} [\xb_i^\top\xb_j \ind\{\wb^\top\xb_i \geq 0, \wb^\top \xb_j \geq 0\}]$ and $\{\xb_i\}$ are data points.}, randomly initialized GD converges to a globally optimal solution of a one-hidden-layer network with ReLU activation function
and quadratic loss function. However, both \citet{li2018learning} and \citet{du2018gradient} only characterized the behavior of gradient-based methods on one-hidden-layer shallow neural networks rather than on the deep neural networks that are widely used in practice.
In this paper, we aim to advance this line of research by studying the optimization properties of gradient-based methods for deep ReLU neural networks.
Specifically, we consider an $L$-hidden-layer fully-connected neural network with ReLU activation function. Similar to the one-hidden-layer case studied in \citet{li2018learning} and \citet{du2018gradient}, we study the binary classification problem and show that both GD and SGD can achieve global minima of the training loss for any $L \geq 1$, with the aid of over-parameterization and random initialization. The core of our analysis is to show that Gaussian random initialization followed by (stochastic) gradient descent generates a sequence of iterates within a small perturbation region centered around the initial weights. In addition, we will show that the empirical loss function of deep ReLU networks has very good local curvature properties inside the perturbation region, which guarantees the global convergence of (stochastic) gradient descent. More specifically, our main contributions are summarized as follows:
\begin{itemize}
\item We show that with Gaussian random initialization on each layer, when the number of hidden nodes per layer is at least $\tilde \Omega\big(\text{poly}(n,\phi^{-1},L)\big)$, GD can achieve zero training error within $\tilde O\big(\text{poly}(n,\phi^{-1}, L)\big)$ iterations, where $\phi$ is the data separation distance, $n$ is the number of training examples, and $L$ is the number of hidden layers. Our result can be applied to a broad family of loss functions, as opposed to cross entropy loss studied in \citet{li2018learning} and quadratic loss considered in \citet{du2018gradient}.
\item We also prove a similar convergence result for SGD. We show that with Gaussian random initialization on each layer, when the number of hidden nodes per layer is at least $\tilde \Omega\big(\text{poly}(n,\phi^{-1}, L)\big)$, SGD can also achieve zero training error within $\tilde O\big(\text{poly}(n,\phi^{-1}, L)\big)$ iterations.
\item
In terms of data distribution, we only make the so-called data separation assumption, which is more realistic than the assumption on the gram matrix made in \cite{du2018gradient}. The data separation assumption in this work is similar, but slightly milder, than that in \citet{li2017convergence} in the sense that
it holds as long as the data are sampled from a distribution with a constant margin separating different classes.
\end{itemize}
When we were preparing this manuscript, we were informed that two concurrent work \citep{allen2018convergence,du2018gradientdeep} has appeared on-line very recently.
Our work bears a similarity to \citet{allen2018convergence} in the high-level proof idea, which is to extend the results for two-layer ReLU networks in \citet{li2018learning} to deep ReLU networks. However, while \citet{allen2018convergence} mainly focuses on the regression problems with least square loss, we study the classification problems for a broad class of loss functions based on a milder data distribution assumption.
\citet{du2018gradientdeep} also studies the regression problem.
Compared to their work, our work is based on a different assumption on the training data and is able to deal with the nonsmooth ReLU activation function.
The remainder of this paper is organized as follows: In Section \ref{sec:related work}, we discuss the literature that are most related to our work. In Section \ref{sec:preliminaries}, we introduce the problem setup and preliminaries of our work. In Sections \ref{sec:main theory} and \ref{sec:proof of main theory}, we present our main theoretical results and their proofs respectively. We conclude our work and discuss some future work in Section \ref{sec:conclusions}.
\section{Related Work}\label{sec:related work}
Due to the huge amount of literature on deep learning theory, we are not able to include all papers in this big vein here. Instead, we review the following three major lines of research, which are most related to our work.
\noindent\textbf{One-hidden-layer neural networks with ground truth parameters}
Recently a series of work \citep{tian2017analytical,brutzkus2017globally, li2017convergence, du2017convolutional, du2017gradient, zhang2018learning} study a specific class of shallow two-layer (one-hidden-layer) neural networks, whose training data are generated by a ground truth network called ``teacher network". This series of work aim to provide recovery guarantee for gradient-based methods to learn the teacher networks based on either the population or empirical loss functions. More specifically, \citet{tian2017analytical} proved that for two-layer ReLU networks with only one hidden neuron, GD with arbitrary initialization on the population loss is able to recover the hidden teacher network. \citet{brutzkus2017globally} proved that GD can learn the true parameters of a two-layer network with a convolution filter. \citet{li2017convergence} proved that SGD can recover the underlying parameters of a two-layer residual network in polynomial time. Moreover, \citet{du2017convolutional,du2017gradient} proved that both GD and SGD can recover the teacher network of a two-layer CNN with ReLU activation function. \citet{zhang2018learning} showed that GD on the empirical loss function can recover the ground truth parameters of one-hidden-layer ReLU networks at a linear rate.
\noindent\textbf{Deep linear networks}
Beyond shallow one-hidden-layer neural networks, a series of recent work \citep{hardt2016identity,kawaguchi2016deep,bartlett2018gradient, arora2018convergence, arora2018optimization} focus on the optimization landscape of deep linear networks. More specifically, \citet{hardt2016identity} showed that deep linear residual networks have no spurious local minima. \citet{kawaguchi2016deep} proved that all local minima are global minima in deep linear networks. \citet{arora2018optimization} showed that depth can accelerate the optimization of deep linear networks. \citet{bartlett2018gradient} proved that with identity initialization and proper regularizer, GD can converge to the least square solution on a residual linear network with quadratic loss function, while \citet{arora2018convergence} proved the same properties for general deep linear networks.
\noindent\textbf{Generalization bounds for deep neural networks}
The phenomenon that deep neural networks generalize better than shallow neural networks have been observed in practice for a long time \citep{langford2002not}. Besides classical VC-dimension based results \citep{vapnik2013nature, anthony2009neural}, a vast literature have recently studied the connection between the generalization performance of deep neural networks and their architectures \citep{neyshabur2015norm, neyshabur2017exploring, neyshabur2017pac, bartlett2017spectrally, golowich2017size, arora2018stronger, allen2018generalization}. More specifically, \citet{neyshabur2015norm} derived Rademacher complexity for a class of norm-constrained feed-forward neural networks with ReLU activation function. \citet{bartlett2017spectrally} derived margin bounds for deep ReLU networks based on Rademacher complexity and covering number. \citet{neyshabur2017exploring, neyshabur2017pac} also derived similar spectrally-normalized margin bounds for deep neural networks with ReLU activation function using PAC-Bayes approach. \citet{golowich2017size} studied size-independent sample complexity of deep neural networks and showed that the sample complexity can be independent of both depth and width under additional assumptions. \citet{arora2018stronger} proved generalization bounds via compression-based framework. \citet{allen2018generalization} showed that an over-parameterized one-hidden-layer neural network can learn a one-hidden-layer neural network with fewer parameters using SGD up to a small generalization error, while similar results also hold for over-parameterized two-hidden-layer neural networks. | 0.061848 |
\begin{document}
\title[Partition lattices and authentication]
{Four-generated direct powers of partition lattices and authentication}
\author[G.\ Cz\'edli]{G\'abor Cz\'edli}
\address{University of Szeged, Bolyai Institute, Szeged,
Aradi v\'ertan\'uk tere 1, Hungary 6720}
\email{czedli@math.u-szeged.hu}
\urladdr{http://www.math.u-szeged.hu/~czedli/}
\date{\hfill {\tiny{\magenta{(\tbf{Always} check the author's website for possible updates!) }}}\ \red{\datum}}
\thanks{This research is supported by NFSR of Hungary (OTKA), grant number K 134851}
\subjclass{06C10}
\keywords{Partition lattice, equivalence lattice, four-generated lattice, Stirling number of the second kind, Bell number, secret key, authentication scheme, cryptography, crypto-system, commitment, semimodular lattice}
\dedicatory{Dedicated to Professor L\'aszl\'o Z\'adori on his sixtieth birthday}
\begin{abstract} For an integer $n\geq 5$, H.\ Strietz (1975) and L.\ Z\'adori (1986) proved that
the lattice $\Part n$ of all partitions of $\set{1,2,\dots,n}$ is four-generated. Developing L.\ Z\'adori's particularly elegant construction further, we prove that even the $k$-th direct power $\Part n^k$ of $\Part n$ is four-generated for many but only finitely many exponents $k$. E.g., $\Part{100}^k$ is four-generated for every $k\leq 3\cdot 10^{89}$, and it has a four-element generating set that is not an antichain for every $k\leq 1.4\cdot 10^{34}$.
In connection with these results, we outline a protocol for using these lattices in authentication and secret key cryptography.
\end{abstract}
\maketitle
\section{Introduction}
This paper is dedicated to L\'aszl\'o Z\'adori not only because of his birthday, but also because a nice construction from his very first mathematical paper is heavily used here.
Our starting point is that Strietz~\cite{strietz1,strietz2} proved in 1975 that
\begin{equation}\left.
\parbox{7.5cm}{the lattice $\Part n$ of all partitions of the (finite) set $\set{1,2,\dots,n}$ is a four-generated lattice.}
\,\,\right\}
\label{eqpbxstRszlT}
\end{equation}
A decade later, Z\'adori~\cite{zadori} gave a very elegant proof of this result (and proved even more, which is not used in the present paper). Z\'adori's construction has opened up many perspectives; this is witnessed by Chajda and Cz\'edli~\cite{chcz},
Cz\'edli~\cite{czedlismallgen,czedlifourgen,czedlioneonetwo,czgfourgeneqatoms},
Cz\'edli and Kulin~\cite{czgkulin}, Kulin~\cite{kulin}, and Tak\'ach~\cite{takach}.
Our goal is to generalize \eqref{eqpbxstRszlT} from partition lattices to their direct powers; see Theorems~\ref{thmmain} and \ref{thmoot} later.
Passing from $\Part n$ to $\Part n^k$ is worthwhile for four reasons, which will be given in more detail later;
here we only mention these reasons briefly.
First, even the direct square of a four-generated lattice need not be four-generated. Second, if some direct power of a lattice is four-generated, then so are the original lattice and all of its other direct powers with smaller exponents; see Corollaries \ref{coroLprd} and \ref{corodjTd}.
Third, for each non-singleton finite lattice $L$, there is a (large) positive integer $k_0=k_0(L)$ such that for every $k\geq k_0$, the direct power $L^k$ is \emph{not} four-generated; this explains why the exponent cannot be arbitrary in our theorems. We admit that we could not
determine the set $\set{k: \Part n^k\text{ is four-generated}}$, that is, we could not find
the least $k_0$; this task will probably remain unsolved for a long time. Fourth, a whole section of this paper is devoted to the applicability of complicated lattices with few generators in Information Theory.
Although this paper has some links to Information Theory, it is primarily a \emph{lattice theoretical} paper.
Note that only some elementary facts, regularly taught in graduate (and often in undergraduate) algebra, are needed about lattices. For those who know how to compute the join of two equivalence relations, the paper is probably self-contained. If not, then a small part of each of the monographs Burris and Sankappanavar \cite{burrsankapp},
Gr\"atzer~\cite{ggeneral,ggglt}, and Nation~\cite{nationbook}
can be recommended; note that \cite{burrsankapp} and \cite{nationbook} are freely downloadable at the time of writing.
\subsection*{Outline}
The rest of the paper is structured as follows. Section~\ref{sectztrms}
gives the rudiments of partition lattices and recalls Z\'adori's construction in detail; these details will be used in the subsequent two sections. Section~\ref{sectprod} formulates and proves our first result, Theorem~\ref{thmmain}, which
asserts that $\Part n^k$ is four-generated for certain values of $k$. In Section~\ref{sectootwo}, we formulate and prove Theorem~\ref{thmoot} about the existence of a four-element generating set of order type $1+1+2$ in $\Part n^k$.
Finally, Section~\ref{sectauth} offers a protocol for authentication based on partition lattices and their direct powers; this protocol can also be used in secret key cryptography.
\section{Rudiments and Z\'adori's construction}\label{sectztrms}
Below, we are going to give some details in a few lines for the sake of those not familiar with partition lattices and, in addition, we are going to fix the corresponding notation.
For a set $A$, a set of pairwise disjoint nonempty subsets of $A$ is a \emph{partition} of $A$ if the union of these subsets, called \emph{blocks}, is $A$. For example,
\begin{equation}
U=\set{\set{1,3},\set{2,4},\set{5}}
\label{eqUpPrrsB}
\end{equation}
is a partition of $A=\set{1,2,3,4,5}$. For pairwise distinct elements $a_1,\dots,a_k$ of $A$, the partition of $A$ with block $\set{a_1,\dots, a_k}$ such that all the other blocks are singletons will be denoted by
$\kequ{a_1,\dots, a_k}$. Then, in our notation, $U$ from \eqref{eqUpPrrsB} is the same as
\begin{equation}
\equ13 + \equ24.
\label{eqczrggtbxmdGXP}
\end{equation}
For partitions $U$ and $V$ of $A$, we say that $U\leq V$ if and only if every block of $U$ is a subset of a (unique) block of $V$. With this ordering, the set of all partitions of $A$ turns into a lattice, which we denote by $\Part A$.
For brevity,
\begin{equation}
\text{$\Part n$ will stand for $\Part{\set{1,2,\dots,n}}$,}
\label{pbxPartnmM}
\end{equation}
and also for $\Part A$ when $A$ is a given set consisting of $n$ elements.
Associated with a partition $U$ of $A$, we define an \emph{equivalence relation} $\pi_U$ of $A$ as the collection of all pairs $(x,y)\in A^2$ such that $x$ and $y$ belong to the same block of $U$. As is well known, the equivalence relations and the partitions of $A$ mutually determine each other, and $\pi_U\leq \pi_V$ (which is our notation for $\pi_U\subseteq \pi_V$) if and only if $U\leq V$. Hence, the \emph{lattice $\Equ A$ of all equivalence relations} of $A$ (in short, the \emph{equivalence lattice} of $A$) is isomorphic to $\Part A$. In what follows, we do not make a sharp distinction between a partition and the corresponding equivalence relation; no matter which of them is given, we can use the other one without warning. For example, \eqref{eqczrggtbxmdGXP} also denotes an equivalence relation associated with the partition given in \eqref{eqUpPrrsB}, provided the base set $\set{1,2,\dots,5}$ is understood. So we define and denote equivalences as the partitions above, but we prefer to work in $\Equ A$ and $\Equ{n}=\Equ{\set{1,\dots,n}}$, because the lattice operations are easier to handle in $\Equ A$. For $\kappa,\lambda\in \Equ A$, the \emph{meet} and the \emph{join} of $\kappa$ and $\lambda$, denoted by $\kappa \lambda$ (or $\kappa\cdot\lambda$) and $\kappa+\lambda$, are the intersection and the transitive hull of the union of $\kappa$ and $\lambda$, respectively. The advantage of this notation is that the usual precedence rule allows us to write, say, $xy+xz$ instead of $(x\wedge y)\vee (x\wedge z)$.
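Readers who wish to experiment with these operations can compute them directly. The following Python sketch is our own illustration (not part of the paper's formal apparatus): a partition is represented as a set of blocks, the meet is the common refinement, and the join merges intersecting blocks until a fixed point is reached, which is exactly the transitive hull of the union of the two equivalences.

```python
def meet(U, V):
    """Meet of two partitions: the common refinement, whose blocks are
    the nonempty intersections of a block of U with a block of V."""
    return {frozenset(B & C) for B in U for C in V if B & C}

def join(U, V):
    """Join of two partitions: the transitive hull of the union of the
    corresponding equivalences, obtained by merging intersecting blocks."""
    blocks = [set(B) for B in U] + [set(C) for C in V]
    changed = True
    while changed:
        changed = False
        for i in range(len(blocks)):
            for j in range(i + 1, len(blocks)):
                if blocks[i] & blocks[j]:
                    blocks[i] |= blocks.pop(j)
                    changed = True
                    break
            if changed:
                break
    return {frozenset(B) for B in blocks}

# On A = {1,...,5}: equ(1,3) + equ(3,5) merges into the single block {1,3,5}.
e13 = {frozenset({1, 3}), frozenset({2}), frozenset({4}), frozenset({5})}
e35 = {frozenset({3, 5}), frozenset({1}), frozenset({2}), frozenset({4})}
assert join(e13, e35) == {frozenset({1, 3, 5}), frozenset({2}), frozenset({4})}
assert meet(e13, e35) == {frozenset({x}) for x in range(1, 6)}
```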
\emph{Lattice terms} are composed from variables and join and meet operation signs in the usual way; for example, $f(x_1,x_2,x_3,x_4)=(x_1+x_2)(x_3+x_4)+(x_1+x_3)(x_2+x_4)$ is a quaternary lattice term.
Given a lattice $L$ and $a_1,\dots, a_k\in L$, the \emph{sublattice generated} by $\set{a_1,\dots,a_k}$ is denoted and defined by
\begin{equation}
\sublat{a_1,\dots,a_k}:=\set{f(a_1,\dots,a_k): f\text{ is a $k$-ary lattice term}}.
\end{equation}
If there are pairwise distinct elements $a_1,\dots,a_k\in L$ such that $\sublat{a_1,\dots,a_k}=L$ then $L$ is said to be a \emph{$k$-generated lattice}.
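The closure that defines $\sublat{a_1,\dots,a_k}$ can be computed by brute force for small lattices: close the generating set under the two binary operations until a fixed point. The following self-contained Python sketch (our own illustration; all names are ad hoc) shows that the two atoms equ(1,2) and equ(2,3) of the five-element lattice $\Equ{3}$ generate only a four-element sublattice, so they do not generate $\Equ{3}$.

```python
def meet(U, V):
    """Common refinement of two partitions (sets of frozenset blocks)."""
    return frozenset(B & C for B in U for C in V if B & C)

def join(U, V):
    """Transitive hull of the union: merge intersecting blocks."""
    blocks = [set(B) for B in U] + [set(C) for C in V]
    changed = True
    while changed:
        changed = False
        for i in range(len(blocks)):
            for j in range(i + 1, len(blocks)):
                if blocks[i] & blocks[j]:
                    blocks[i] |= blocks.pop(j)
                    changed = True
                    break
            if changed:
                break
    return frozenset(frozenset(B) for B in blocks)

def generated_sublattice(gens):
    """Close a set of partitions under meet and join (fixed-point iteration)."""
    S = set(gens)
    while True:
        new = {op(U, V) for U in S for V in S for op in (meet, join)} - S
        if not new:
            return S
        S |= new

e12 = frozenset({frozenset({1, 2}), frozenset({3})})
e23 = frozenset({frozenset({2, 3}), frozenset({1})})
S = generated_sublattice({e12, e23})
assert len(S) == 4  # the two atoms, their meet (the bottom), their join (the top)
```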
Almost exclusively, we are going to define our equivalence relations by (undirected simple, edge-colored) graphs. Every horizontal thin straight edge is $\alpha$-colored but its color, $\alpha$, is not always indicated in the figures. The thin straight edges of slope 1, that is, the southwest--northeast edges, are $\beta$-colored, while the thin straight edges with slope $-1$, that is, the southeast--northwest edges, are $\gamma$-colored.
Finally, the thin \emph{solid} curved edges are $\delta$-colored.
(We should disregard the \emph{dashed} ovals at this moment. Note that except for Figure~\ref{figd4}, every edge is thin.)
Figure~\ref{figd1} helps to keep this convention in mind.
On the vertex set $A$, this figure and the other figures in the paper define an \emph{equivalence} (relation) $\alpha\in \Equ A$ in the following way: deleting all edges but the $\alpha$-colored ones, the components of the remaining graph are the blocks of the partition associated with $\alpha$. In other words, $\pair x y\in\alpha$ if and only if there is an $\alpha$-colored path from vertex $x$ to vertex $y$ in the graph, that is, a path (of possibly zero length) all of whose edges are $\alpha$-colored. The equivalences $\beta$, $\gamma$, and $\delta$ are defined analogously. The success of Z\'adori's construction, to be discussed soon, lies in this kind of visualization. Note that, to make our figures less crowded, the labels $\alpha,\dots,\delta$ are not always indicated but
\begin{equation}
\text{our convention, shown in Figure \ref{figd1}, defines the colour of the edges}
\label{eqtxtsdhClrsznzlD}
\end{equation}
even in this case.
\begin{figure}[htb]
\centerline
{\includegraphics[scale=1.0]{czgauthfig1}}
\caption{Standard notation for this paper
\label{figd1}}
\end{figure}
Let us agree upon the following notation:
\begin{equation}\left.
\begin{aligned}
\sum_{\text{for all meaningful x}}&\equ{u_x}{v_x} \text{ will be denoted by }\cr
&\kern 2em\faequ{u_x}{v_x} \text{ or } \quad \faequ{u_y}{v_y};
\end{aligned}\,\right\}
\end{equation}
that is, each of $x$ and $y$ in subscript or superscript position will mean that a join is formed for all meaningful values of these subscripts or superscripts. If only a part of the meaningful subscripts or superscripts is needed in a join, then the following notational convention will be in effect:
\begin{equation}
\xequ{u^{(i)}}{v^{(i)}}{i\in I}\quad\text{ stands for }\quad
\sum_{i\in I}\equ{u^{(i)}}{v^{(i)}}.
\end{equation}
For an integer $k\geq 2$ and the $(2k+1)$-element set
\[Z=Z(2k+1):=\set{a_0,a_1,\dots,a_k, b_0,b_1,\dots, b_{k-1}},
\]
we define
\begin{equation}
\begin{aligned}
&\alpha:=\kequ{a_0,a_1,\dots, a_k} +\kequ{b_0,b_1,\dots, b_{k-1}}=
\equ{a_x}{a_{x+1}}+\equ{b_y}{b_{y+1}}\cr
&\beta:=\faequ{a_x}{b_x}=\xequ{a_i}{b_i}{0\leq i\leq k-1},\cr
&\gamma:=\faequ{a_{x+1}}{b_x} = \xequ{a_{i+1}}{b_i}{0\leq i\leq k-1},\cr
&\delta:=\equ{a_0}{b_0}+\equ{a_k}{b_{k-1}};
\end{aligned}
\label{eqsBzTrGhxQ}
\end{equation}
see Figure~\ref{figd2}.
Then the system $\tuple{Z(2k+1);\alpha,\beta,\gamma,\delta}$ is called a $(2k+1)$-element \emph{Z\'adori configuration}. Its importance is revealed by the following lemma.
\begin{lemma}[Z\'adori~\cite{zadori}]\label{lemmazadori} For $k\geq 2$,
$\sublat{\alpha,\beta,\gamma,\delta}=\Equ{Z(2k+1)}$, that is, the four partitions in \eqref{eqsBzTrGhxQ} of the Z\'adori configuration generate the lattice of all equivalences of $Z(2k+1)$. Consequently,
\begin{equation}
\sublat{\alpha,\beta,\gamma, \equ{a_0}{b_0}, \equ{a_k}{b_{k-1}}}=\Equ{Z(2k+1)}.
\label{eqzLhetzB}
\end{equation}
\end{lemma}
\begin{figure}[htb]
\centerline
{\includegraphics[scale=1.0]{czgauthfig2}}
\caption{The Z\'adori configuration of odd size $2k+1$ with $k=6$
\label{figd2}}
\end{figure}
We shall soon outline the proof of this lemma since we are going to use its details in the paper. But first, we formulate another lemma from Z\'adori \cite{zadori}, which has also been used in Cz\'edli \cite{czedlismallgen,czedlifourgen,czedlioneonetwo}
and in other papers like Kulin~\cite{kulin}. We are going to recall its proof only for later reference.
\begin{lemma}[``Circle Principle'']\label{lemmaHamilt}
If $d_0,d_1,\dots,d_{n-1}$ are pairwise distinct elements of a set $A$ and $0\leq u<v\leq n-1$, then
\begin{equation}\left.
\begin{aligned}
\equ {d_u}{d_v}=\bigl(\equ{d_{u}}{d_{u+1}} + \equ{d_{u+1}}{d_{u+2}} + \dots + \equ{d_{v-1}}{d_{v}} \bigr) \cdot
\bigl( \equ{d_{v}}{d_{v+1}}
\cr
+ \dots + \equ{d_{n-2}}{d_{n-1}} +
\equ{d_{n-1}}{d_{0}}
+ \equ{d_{0}}{d_{1}}+ \dots + \equ{d_{u-1}}{d_{u}} \bigr)
\end{aligned}\,\right\}
\label{eqGbVrTslcdNssm}
\end{equation}
holds in $\Equ A$.
If, in addition, $A=\set{d_0,d_1,\dots,d_{n-1}}$, then
$\Equ A$ is generated by
\[\set{\equ{d_{n-1}}{d_0} }\cup\bigcup_{0\leq i\leq n-2} \set{\equ{d_{i}}{d_{i+1}} }.
\]
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{lemmaHamilt}] \eqref{eqGbVrTslcdNssm} is trivial. The second half of the lemma follows from the fact that for a finite $A$, the lattice $\Equ A$ is \emph{atomistic}, that is, each of its elements is the join of some atoms.
\end{proof}
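Equation \eqref{eqGbVrTslcdNssm} can also be sanity-checked numerically. The following self-contained Python sketch (our own illustration, with ad hoc names) verifies the Circle Principle on a five-element circle $d_0,\dots,d_4$ with $u=1$ and $v=3$: the meet of the two arc joins is exactly the atom $\equ{d_1}{d_3}$.

```python
def equ(x, y, A):
    """The atom equ(x,y) in Equ(A): one block {x,y}, all others singletons."""
    return frozenset({frozenset({x, y})} |
                     {frozenset({z}) for z in A if z not in (x, y)})

def meet(U, V):
    """Common refinement: nonempty pairwise intersections of blocks."""
    return frozenset(B & C for B in U for C in V if B & C)

def join(U, V):
    """Transitive hull of the union: merge intersecting blocks."""
    blocks = [set(B) for B in U] + [set(C) for C in V]
    changed = True
    while changed:
        changed = False
        for i in range(len(blocks)):
            for j in range(i + 1, len(blocks)):
                if blocks[i] & blocks[j]:
                    blocks[i] |= blocks.pop(j)
                    changed = True
                    break
            if changed:
                break
    return frozenset(frozenset(B) for B in blocks)

A = range(5)  # the circle d_0, ..., d_4; take u = 1, v = 3
upper = join(equ(1, 2, A), equ(2, 3, A))                      # arc d_u ... d_v
lower = join(join(equ(3, 4, A), equ(4, 0, A)), equ(0, 1, A))  # arc d_v ... d_u
assert meet(upper, lower) == equ(1, 3, A)
```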
\begin{proof}[Proof of Lemma~\ref{lemmazadori}]
On the set $\set{\oal,\obe,\oga,\ode}$ of variables, we are going to define several quaternary terms recursively. But first of all, we define the quadruple
\begin{equation}
\obmu:=\tuple{\oal,\obe,\oga,\ode}
\label{eqMnPkBhJsZfPmF}
\end{equation}
of four variables with the purpose of abbreviating our quaternary terms $t(\oal,\obe,\oga,\ode)$ by $t(\obmu)$.
We let
\begin{equation}\left.
\begin{aligned}
g_0(\obmu)&:= \obe\,\ode\text{ (i.e.,}=\obe\wedge\ode), \cr
h_{i+1}(\obmu)&:=((g_i(\obmu)+\oga)\oal+g_i(\obmu))\oga\text{ for }i\geq 0,\cr
g_{i+1}(\obmu)&:=((h_{i+1}(\obmu)+\obe)\oal+h_{i+1}(\obmu))\obe\text{ for }i\geq 0,\cr
H_0(\obmu)&:=\oga\ode,\cr
G_{i+1}(\obmu)&:=((H_i(\obmu)+\obe)\oal + H_i(\obmu))\obe \text{ for }i\geq 0,\cr
H_{i+1}(\obmu)&:=((G_{i+1}(\obmu)+\oga)\oal+G_{i+1}(\obmu))\oga \text{ for }i\geq 0.
\end{aligned}
\,\,\right\}
\label{eqZhgRsMks}
\end{equation}
For later reference, let us point out that
\begin{equation}\left.
\parbox{5.4cm}{in \eqref{eqZhgRsMks}, $\delta$ is used only twice: to define $g_0(\obmu)$ and to define $H_0(\obmu)$.}
\,\,\right\}
\label{eqpbxTwCzWn}
\end{equation}
Next, in harmony with \eqref{eqsBzTrGhxQ} and Figure~\ref{figd2}, we let
\begin{equation}
\bmu:=\tuple{\alpha,\beta,\gamma,\delta}.
\label{eqMdhzBfTnQslwvP}
\end{equation}
Clearly,
\begin{equation}
\beta\delta=\equ{a_0}{b_0}\,\,\text{ and }\,\,\gamma\delta=\equ{a_k}{b_{k-1}}.
\label{eqNTjgMzdD}
\end{equation}
An easy induction shows that
\begin{equation}\left.
\begin{aligned}
g_i(\bmu)&=\xequ{a_j}{b_j}{0\leq j\leq i} \text{ for }0\leq i\leq k-1, \cr
h_{i}(\bmu)&=\xequ{a_j}{b_{j-1}}{1\leq j\leq i}\text{ for }1\leq i\leq k,\cr
H_i(\bmu)&=\xequ{a_{k-j}}{b_{k-1-j}}{0\leq j\leq i} \text{ for }0\leq i\leq k-1, \cr
G_{i}(\bmu)&=\xequ{a_{k-j}}{b_{k-j}}{1\leq j\leq i}\text{ for }1\leq i\leq k.\cr
\end{aligned}
\,\,\right\}
\label{eqmlZrbQshPrk}
\end{equation}
Next, for certain edges $\pair u v$ of the graph given in Figure~\ref{figd2}, we define a corresponding lattice term $\eterm u v(\obmu)$ as follows.
\begin{equation}\left.
\begin{aligned}
\eterm{a_i}{b_i}(\obmu)&:=g_i(\obmu)\cdot G_{k-i}(\obmu),\quad
\text{for }0\leq i\leq k-1,\cr
\eterm{a_i}{b_{i-1}}(\obmu)&:=h_i(\obmu)\cdot H_{k-i}(\obmu),\quad \text{for }1\leq i\leq k,\cr
\eterm{a_i}{a_{i+1}}(\obmu)&:=\oal\cdot (\eterm{a_i}{b_i}(\obmu) + \eterm{a_{i+1}}{b_{i}}(\obmu)),\quad \text{for }0\leq i\leq k-1,\cr
\eterm{b_i}{b_{i+1}}(\obmu)&:=\oal\cdot (\eterm{a_{i+1}}{b_i}(\obmu) + \eterm{a_{i+1}}{b_{i+1}}(\obmu)),\quad 0\leq i\leq k-2.
\end{aligned}
\,\right\}
\label{eqmlZsknNgRvTk}
\end{equation}
The first two equalities below follow from \eqref{eqmlZrbQshPrk}, while
the third and the fourth follow from the first two.
\begin{equation}\left.
\begin{aligned}
\eterm{a_i}{b_i}(\bmu)&=\equ{a_i}{b_i},\quad
\text{for }0\leq i\leq k-1,\cr
\eterm{a_i}{b_{i-1}}(\bmu)&=\equ{a_i}{b_{i-1}}\quad \text{for }1\leq i\leq k,\cr
\eterm{a_i}{a_{i+1}}(\bmu)&=
\equ{a_i}{a_{i+1}}\quad \text{for }0\leq i\leq k-1,\cr
\eterm{b_i}{b_{i+1}}(\bmu)&=
\equ{b_i}{b_{i+1}}, \quad 0\leq i\leq k-2.
\end{aligned}
\,\right\}
\label{eqmlspnSzdRkMblk}
\end{equation}
Finally, let
\begin{equation}\tuple{d_0,d_1,\dots,d_{n-1}}:=\tuple{a_0,a_1,\dots,a_k,b_{k-1},b_{k-2},\dots, b_0}.
\label{eqdsrzLmJnszpkJvStT}
\end{equation}
In harmony with \eqref{eqGbVrTslcdNssm}, we define the following term
\begin{equation}\left.
\begin{aligned}
\eterm{d_u}{d_v}(\obmu):=\bigl(\eterm{d_{u}}{d_{u+1}}(\obmu) + \eterm{d_{u+1}}{d_{u+2}}(\obmu) + \dots + \eterm{d_{v-1}}{d_{v}}(\obmu) \bigr) \cdot
\bigl( \eterm{d_{v}}{d_{v+1}}(\obmu)
\cr
+ \dots + \eterm{d_{n-2}}{d_{n-1}}(\obmu) +
\eterm{d_{n-1}}{d_{0}}(\obmu)
+ \eterm{d_{0}}{d_{1}}(\obmu)+ \dots + \eterm{d_{u-1}}{d_{u}}(\obmu) \bigr)
\end{aligned}\,\right\}
\label{eqnbVrtzstVcX}
\end{equation}
for $0\leq u< v\leq n-1= 2k$. Combining \eqref{eqGbVrTslcdNssm}, \eqref{eqmlspnSzdRkMblk}, and \eqref{eqnbVrtzstVcX}, we obtain that
\begin{equation}
\eterm{d_u}{d_v}(\bmu)= \equ{d_u}{d_v}.
\label{eqczhhndPMtlvRdwdb}
\end{equation}
Based on \eqref{eqpbxTwCzWn}, note at this point that in \eqref{eqZhgRsMks}, \eqref{eqmlZsknNgRvTk}, and \eqref{eqnbVrtzstVcX}, $\delta$ is used only twice: to define $g_0(\obmu)$ and to define $H_0(\obmu)$. Consequently, taking \eqref{eqNTjgMzdD} also into account, we conclude that
\begin{equation}\left.
\parbox{9.0cm}{equality \eqref{eqczhhndPMtlvRdwdb} remains valid if $\delta$, the fourth component of $\bmu$, is replaced by any other partition whose meet with $\beta$ and that with $\gamma$ are $\equ {a_0}{b_0}$ and $\equ {a_k}{b_{k-1}}$, respectively.} \,\,\right\}
\label{eqpbxZbntGhSwD}
\end{equation}
Since every atom of $\Equ{Z(2k+1)}$ is of the form \eqref{eqczhhndPMtlvRdwdb} and $\Equ{Z(2k+1)}$ is an atomistic lattice, $\sublat{\alpha,\beta,\gamma,\delta}=\Equ{Z(2k+1)}$.
By virtue of \eqref{eqpbxZbntGhSwD} and since $\ode$ has been used only twice, \eqref{eqzLhetzB} also holds, completing the proof of Lemma~\ref{lemmazadori}.
\end{proof}
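For small $k$, Lemma~\ref{lemmazadori} can also be verified by brute force. The following self-contained Python sketch (our own verification aid; all names are ad hoc) builds the partitions \eqref{eqsBzTrGhxQ} of the Z\'adori configuration $Z(5)$, that is, $k=2$, and closes $\set{\alpha,\beta,\gamma,\delta}$ under meet and join; by the lemma, the closure consists of all $\Bell 5=52$ partitions of the five-element set.

```python
def merge(blocks):
    """Merge intersecting blocks until a fixed point; returns a partition."""
    blocks = [set(B) for B in blocks]
    changed = True
    while changed:
        changed = False
        for i in range(len(blocks)):
            for j in range(i + 1, len(blocks)):
                if blocks[i] & blocks[j]:
                    blocks[i] |= blocks.pop(j)
                    changed = True
                    break
            if changed:
                break
    return frozenset(frozenset(B) for B in blocks)

def from_edges(A, edges):
    """The partition of A whose blocks are the components of the edge set."""
    return merge([{x} for x in A] + [set(e) for e in edges])

def meet(U, V):
    return frozenset(B & C for B in U for C in V if B & C)

def join(U, V):
    return merge(list(U) + list(V))

A = ['a0', 'a1', 'a2', 'b0', 'b1']        # Z(5), i.e., k = 2
alpha = from_edges(A, [('a0', 'a1'), ('a1', 'a2'), ('b0', 'b1')])
beta  = from_edges(A, [('a0', 'b0'), ('a1', 'b1')])
gamma = from_edges(A, [('a1', 'b0'), ('a2', 'b1')])
delta = from_edges(A, [('a0', 'b0'), ('a2', 'b1')])

S = {alpha, beta, gamma, delta}
while True:                                # close under meet and join
    new = {op(U, V) for U in S for V in S for op in (meet, join)} - S
    if not new:
        break
    S |= new
assert len(S) == 52  # Bell(5): the four partitions generate Equ(Z(5))
```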
\begin{figure}[htb]
\centerline
{\includegraphics[scale=1.0]{czgauthfig3}}
\caption{A configuration for even size $2k+2$ with $k=3$
\label{figd3}}
\end{figure}
Next, for $k\geq 2$, we add a new vertex $c$, a $\beta$-colored edge $\pair{b_0}{c}$, and a $\gamma$-colored edge $\pair{b_{1}}{c}$
to $Z(2k+1)$ to obtain $Z(2k+2)$, see Figure~\ref{figd3}. This configuration is different from what Z\'adori~\cite{zadori} used for the even case; our approach by Figure~\ref{figd3} is simpler and fits our purposes better. Again, the dashed curved edges of Figure~\ref{figd3} should be disregarded until stated otherwise.
\begin{lemma}\label{lemmazeven}
For $n=2k+2\geq 6$, we have that $\Equ{Z(n)}=\sublat{\alpha,\beta,\gamma, \delta}$.
\end{lemma}
\begin{proof} With the short terms $\oal^\ast{}=\oal$, $\obe^\ast{}:=\obe(\oal+\ode)$, $\oga^\ast{}:=\oga(\oal+\ode)$, and $\ode^\ast{}:=\ode$, we define
$\obmu^\ast{}:=\tuple{\oal^\ast{},\obe^\ast{},\oga^\ast{},\ode^\ast{}}$. For each term $t$ defined in \eqref{eqZhgRsMks} and \eqref{eqmlZsknNgRvTk}, we define a term $t^\ast{}$ as
$t^\ast{}(\obmu):=t(\obmu^\ast{})$.
We also need the corresponding partitions $\alpha^\ast{}:=\alpha$, $\beta^\ast{}:=\beta(\alpha+\delta)$, $\gamma^\ast{}:=\gamma(\alpha+\delta)$,
$\delta^\ast{}:=\delta$, and the quadruple $\bmu^\ast{}:=\tuple{\alpha^\ast{},\beta^\ast{},\gamma^\ast{},\delta^\ast{}}$.
Apart from the singleton block $\set c$,
they are the same as the partitions considered in Lemma~\ref{lemmazadori} for $Z(2k+1)$. Hence, it follows that
\eqref{eqmlZrbQshPrk}, \eqref{eqmlspnSzdRkMblk}, and \eqref{eqczhhndPMtlvRdwdb} hold with $\bmu^\ast{}$ instead of $\bmu$.
In other words, they hold with $\bmu$ if the terms $t$ are replaced by the corresponding terms $t^\ast$.
In particular, \eqref{eqczhhndPMtlvRdwdb} is reworded as follows:
\begin{equation}
\eterm x y^\ast(\bmu)=\equ x y\quad\text{ for all }x,y\in Z(n)\setminus\set c.
\label{eqmsSzvKlJjJrlcsPsm}
\end{equation}
So if we define (without defining their ``asterisk-free versions'' $\eterm{a_0}{c}$ and $\eterm{a_2}{c}$) the terms
\begin{equation}
\eterm{a_0}{c}^\ast(\obmu):=\obe\cdot\bigl(\oga+\eterm{a_0}{a_{2}}(\obmu^\ast{}) \bigr)\text{ and }
\eterm{a_{2}}{c}^\ast(\obmu):=\oga\cdot\bigl(\obe+\eterm{a_0}{a_{2}}(\obmu^\ast{}) \bigr),
\label{eqdzltGmrkszTnldpfGl}
\end{equation}
then it follows easily that
\begin{equation}
\eterm{a_0}{c}^\ast(\bmu)= \equ{a_0}{c}\,\,\text{ and }\,\,
\eterm{a_2}{c}^\ast(\bmu)= \equ{a_2}{c};
\label{eqhfnWhtjCxT}
\end{equation}
remark that in addition to \eqref{eqmsSzvKlJjJrlcsPsm}, \eqref{eqhfnWhtjCxT} also belongs to the scope of \eqref{eqpbxZbntGhSwD}.
Let
\begin{equation}
\tuple{d_0,d_1,\dots,d_{n-1}}
:=\tuple{a_0,c,a_2,a_3,\dots,a_{k},b_{k-1},b_{k-2},\dots,b_1,a_1,b_0}.
\label{eqchtFkJknFsgsN}
\end{equation}
Similarly to \eqref{eqnbVrtzstVcX} but now
based on \eqref{eqchtFkJknFsgsN} rather than \eqref{eqdsrzLmJnszpkJvStT}, we define the following term (without defining its ``non-asterisked'' $\fterm{d_u}{d_v}$ version)
\begin{equation}\left.
\begin{aligned}
\fterm{d_u}{d_v}^\ast (\obmu):=\bigl(\eterm{d_{u}}{d_{u+1}}^\ast (\obmu) + \eterm{d_{u+1}}{d_{u+2}}^\ast (\obmu) + \dots + \eterm{d_{v-1}}{d_{v}}^\ast (\obmu) \bigr) \cdot
\bigl( \eterm{d_{v}}{d_{v+1}}^\ast (\obmu)
\cr
+ \dots + \eterm{d_{n-2}}{d_{n-1}}^\ast (\obmu) +
\eterm{d_{n-1}}{d_{0}}^\ast (\obmu)
+ \eterm{d_{0}}{d_{1}}^\ast (\obmu)+ \dots + \eterm{d_{u-1}}{d_{u}}^\ast (\obmu) \bigr)
\end{aligned}\,\right\}
\label{eqnvlqLY}
\end{equation}
for $0\leq u<v < n$. By Lemma~\ref{lemmaHamilt}, \eqref{eqmsSzvKlJjJrlcsPsm}, \eqref{eqhfnWhtjCxT}, and \eqref{eqnvlqLY}, we obtain that
\begin{equation}
\fterm x y^\ast(\bmu)=\equ x y\quad\text{ for all }x\neq y\in Z(n).
\label{eqnmsdsulTVcX}
\end{equation}
The remark right after \eqref{eqhfnWhtjCxT} allows us to note that
\begin{equation}
\text
{\eqref{eqnmsdsulTVcX} also belongs to the scope of \eqref{eqpbxZbntGhSwD}.}
\label{eqnZhgjdlSrkhllRw}
\end{equation}
Finally, \eqref{eqnmsdsulTVcX} implies Lemma~\ref{lemmazeven} since $\Equ{Z(n)}$ is atomistic.
\end{proof}
\section{Generating direct powers of partition lattices}\label{sectprod}
Before formulating the main result of the paper, we recall some notation and concepts. The lower integer part of a real number $x$ will be denoted by $\lfloor x\rfloor$; for example,
$\lfloor \sqrt 2\rfloor=1$ and $\lfloor 2\rfloor=2$. The set of positive integers will be denoted by $\NN$. For $n\in\NN$, the number of partitions of the $n$-element set $\set{1,2,\dots,n}$, that is, the size of $\Part n\cong \Equ n$, is the so-called $n$-th \emph{Bell number}; it will be denoted by $\Bell n$.
The number of partitions of $n$ objects with exactly $r$ blocks is denoted by $S(n,r)$; it is the \emph{Stirling number of the second kind} with parameters $n$ and $r$. Note that $S(n,r)\geq 1$ if and only if $1\leq r\leq n$; otherwise $S(n,r)$ is zero. Clearly, $\Bell n=S(n,1)+S(n,2)+\dots +S(n,n)$.
Let
\begin{equation}
\text{$\maxs(n)$ denote the maximal element of the set $\set{S(n,r): r\in\NN}$.}
\label{eqtxtmMxSdfLm}
\end{equation}
We know from
Rennie and Dobson~\cite[page 121]{renniedobson} that
\begin{equation}
\log \maxs(n)= n \log n - n\log \log n - n + O\left( n\cdot {\frac{\log\log n}{\log n}} \right).
\end{equation}
Hence, $\maxs(n)$ is quite large; see Tables~\eqref{tablerdDbsa}--\eqref{tablerdDbsc} and \eqref{tablerdDbsg} for some of its values; note that those given in exponential form are only rounded values.
Some rows occurring in these tables, computed by Maple V. Release 5 (1997) under Windows 10, will be explained later.
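The values of $\Bell n$ and $\maxs(n)$ in the tables can be recomputed independently of Maple. The following Python sketch (our own; it relies only on the standard recurrence $S(n,r)=rS(n-1,r)+S(n-1,r-1)$) reproduces several table entries.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, r):
    """Stirling number of the second kind S(n, r) via the recurrence
    S(n, r) = r * S(n-1, r) + S(n-1, r-1)."""
    if n == r:
        return 1
    if r < 1 or r > n:
        return 0
    return r * stirling2(n - 1, r) + stirling2(n - 1, r - 1)

def bell(n):
    """Bell number B_n = S(n,1) + S(n,2) + ... + S(n,n)."""
    return sum(stirling2(n, r) for r in range(1, n + 1))

def maxs(n):
    """The maximal Stirling number max_r S(n, r)."""
    return max(stirling2(n, r) for r in range(1, n + 1))

assert bell(5) == 52
assert maxs(7) == 350
assert maxs(12) == 1379400
assert maxs(18) == 197462483400
```

Note that $\maxs(n)$ is strictly increasing for $n\geq 2$, which gives a quick consistency check of the tabulated values.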
\allowdisplaybreaks{
\begin{align}
&
\lower 0.8 cm
\vbox{\tabskip=0pt\offinterlineskip
\halign{\strut#&\vrule#\tabskip=1pt plus 2pt&
#\hfill& \vrule\vrule\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule\tabskip=0.1pt#&
#\hfill\vrule\vrule\cr
\vonal\vonal\vonal\vonal
&&\hfill$n$&&$\,1$&&$\,2$&&$\,3$&&$\,4$&&$5$&&$6$&&$7$&&$8$&&$9$&&$10$&&$11$&&$12$&
\cr\vonal\vonal
&&$\maxs(n)$&&$1$&&$1$&&$3$&&$7$&&$25$&&$90$&&$350$&&$1\,701$&&$7\,770$&&$42\,525$&&$246\,730$&&$1\,379\,400$&
\cr\vonal
&&\hfill$m(n)$&&$\phantom a$&&$\phantom b$&&$\phantom c$&&$\phantom d$&&$1$&&$1$&&$3$&&$3$&&$21$&&$21$&&$175$&&$175$&\cr
\vonal
&&\hfill$\csm(n)$&&$\phantom a$&&$\phantom b$&&$\phantom c$&&$\phantom d$&&$\phantom d$&&$\phantom e$&&$1$&&$1$&&$1$&&$1$&&$2$&&$2$&\cr
\vonal\vonal\vonal\vonal
}}
\label{tablerdDbsa}
\\
&
\lower 0.8 cm
\vbox{\tabskip=0pt\offinterlineskip
\halign{\strut#&\vrule#\tabskip=1pt plus 2pt&
#\hfill& \vrule\vrule\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule\tabskip=0.1pt#&
#\hfill\vrule\vrule\cr
\vonal\vonal\vonal\vonal
&&\hfill$n$&&$13$&&$14$&&$15$&&$16$&&$17$&
\cr\vonal\vonal
&&$\maxs(n)$&&$9\,321\,312$&&$63\,436\,373$&&$420\,693\,273$&&$3\,281\,882\,604$&&$25\,708\,104\,786$&
\cr
\vonal
&&\hfill $m(n)$&&$2\,250$&&$2\,250$&&$31\,500$&&$31\,500$&&$595\,350$&
\cr
\vonal
&&\hfill $\csm(n)$&&$2$&&$2$&&$9$&&$9$&&$9$&
\cr
\vonal\vonal\vonal\vonal
}}
\label{tablerdDbsb}
\\
&
\lower 0.8 cm
\vbox{\tabskip=0pt\offinterlineskip
\halign{\strut#&\vrule#\tabskip=1pt plus 2pt&
#\hfill& \vrule\vrule\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule\tabskip=0.1pt#&
#\hfill\vrule\vrule\cr
\vonal\vonal\vonal\vonal
&&\hfill$n$&&$18$&&$19$&&$20$&
\cr\vonal\vonal
&&$\maxs(n)$&&$1\,974\,624\,834\,000$&&$1\,709\,751\,003\,480$&&$15\,170\,932\,662\,679$&
\cr\vonal
&&\hfill$m(n)$&&$595\,350$&&$13\,216\,770$&&$13\,216\,770$&
\cr\vonal
&&\hfill$\csm(n)$&&$9$&&$49$&&$49$&
\cr\vonal\vonal\vonal\vonal
}}
\label{tablerdDbsc}
\\
&
\lower 0.8 cm
\vbox{\tabskip=0pt\offinterlineskip
\halign{\strut#&\vrule#\tabskip=1pt plus 2pt&
#\hfill& \vrule\vrule\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule\tabskip=0.1pt#&
#\hfill\vrule\vrule\cr
\vonal\vonal\vonal\vonal
&&\hfill$n$&&$21$&&$22$&&$23$&&$24$&&$25$&
\cr\vonal\vonal
&&\hfill$m(n)$&&$330\,419\,250$&&$330\,419\,250$&&$10\,492\,193\,250 $&&$10\,492\,193\,250 $&& $ 3.40\cdot 10^{11} $ &
\cr\vonal
&&\hfill$\csm(n)$&&$49$&&$49$&&$625$&&$625 $&& $625 $ &
\cr\vonal\vonal\vonal\vonal
}}
\label{tablerdDbsd}
\\
&
\lower 0.8 cm
\vbox{\tabskip=0pt\offinterlineskip
\halign{\strut#&\vrule#\tabskip=1pt plus 2pt&
#\hfill& \vrule\vrule\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule\tabskip=0.1pt#&
#\hfill\vrule\vrule\cr
\vonal\vonal\vonal\vonal
&&\hfill$n$&&$26$&&$27$&&$28$&&$29$&&$30$&&$31$&
\cr\vonal\vonal
&&\hfill$m(n)$&&$3.40\cdot 10^{11}$&&$1.29\cdot 10^{13}$&&$1.29\cdot 10^{13}$&&$5.91\cdot 10^{14}$&& $5.91\cdot 10^{14}$ && $2.67\cdot 10^{16}$ &
\cr\vonal
&&\hfill$\csm(n)$&&$625$&&$8100$&&$8100$&&$8100$&& $8100$ && $122500 $ &
\cr\vonal\vonal\vonal\vonal
}}
\label{tablerdDbse}
\\
&
\lower 0.8 cm
\vbox{\tabskip=0pt\offinterlineskip
\halign{\strut#&\vrule#\tabskip=1pt plus 2pt&
#\hfill& \vrule\vrule\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule\tabskip=0.1pt#&
#\hfill\vrule\vrule\cr
\vonal\vonal\vonal\vonal
&&\hfill$n$&&$32$&&$33$&&$34$&&$35$&&$36$&&$37$&
\cr\vonal\vonal
&&\hfill$m(n)$&&$2.67\cdot 10^{16}$&&$1.38\cdot 10^{18}$&&$1.38\cdot 10^{18}$&&$8.44\cdot 10^{19}$&& $8.44\cdot 10^{19}$ && $5.08\cdot 10^{21}$ &
\cr\vonal
&&\hfill$\csm(n)$&&$122500$&&$122500$&&$122500$&&$2893401$&& $2893401$ && $2893401$ &
\cr\vonal\vonal\vonal\vonal
}}
\label{tablerdDbsf}
\\
&
\lower 0.8 cm
\vbox{\tabskip=0pt\offinterlineskip
\halign{\strut#&\vrule#\tabskip=1pt plus 2pt&
#\hfill& \vrule\vrule\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule\tabskip=0.1pt#&
#\hfill\vrule\vrule
\cr\vonal\vonal\vonal\vonal
&&\hfill$n$&&$97$&&$98$&&$99$&&$100$&&$2020$&
\cr\vonal\vonal
&&\hfill$\maxs(n)$&&$3.22\cdot 10^{110}$&&$9.31\cdot 10^{111}$&&$2.69\cdot 10^{113}$&& $7.77\cdot 10^{114}$ &&$3.81\cdot10^{4398}$&
\cr\vonal
&&\hfill$m(n)$&&$1.08\cdot 10^{87}$&&$1.08\cdot 10^{87}$&&$3.09\cdot 10^{89}$&&$3.09\cdot 10^{89}$&&$5.52\cdot 10^{3893}$&
\cr\vonal
&&\hfill$\csm(n)$&&$1.52\cdot 10^{32}$&&$1.52\cdot 10^{32}$&&$1.45\cdot 10^{34}$&& $1.45\cdot 10^{34}$ &&$3.97\cdot 10^{1700}$&
\cr\vonal\vonal\vonal\vonal
}}
\label{tablerdDbsg}
\end{align}
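The values of $\maxs(n)$ in the tables above can be recomputed independently of Maple. The following sketch is an illustrative Python fragment, not the code behind the tables; it assumes only that $\maxs(n)$ is the largest Stirling number of the second kind in row $n$ and uses the standard recurrence $S(n,k)=k\,S(n-1,k)+S(n-1,k-1)$.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Stirling number of the second kind S(n, k): the number of
    partitions of an n-element set into exactly k blocks."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    # the n-th element either joins one of the k existing blocks
    # or forms a new singleton block
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def maxs(n):
    """The largest Stirling number in row n, that is, the size of a
    maximum-sized antichain of partitions with a fixed block count."""
    return max(stirling2(n, k) for k in range(1, n + 1))
```

For example, `maxs(12)` returns $1\,379\,400$, in agreement with the first table.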
The aim of this section is to prove the following theorem; \eqref{pbxPartnmM} and \eqref{eqtxtmMxSdfLm} are still in effect.
\begin{theorem}\label{thmmain} Let $n\geq 5$ be an integer, let
$k:=\lfloor (n-1)/2 \rfloor$, and let
\begin{equation}
m=m(n):=\maxs(k)\cdot \maxs(k-1).
\label{eqmSrpRdgB}
\end{equation}
Then $\Part n^m$ or, equivalently, $\Equ n^m$ is four-generated. In other words, the $m$-th direct power of the lattice of all partitions of the set $\set{1,2,\dots, n}$ is generated by a four-element subset.
\end{theorem}
Some values of $m(n)$ are given in Tables~\eqref{tablerdDbsa}--\eqref{tablerdDbsg}. Before proving this theorem, we formulate some remarks and corollaries and we make some comments.
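The exponent $m(n)$ of \eqref{eqmSrpRdgB} is easy to tabulate. The sketch below is an illustrative Python fragment, not the Maple code used for the tables; it reproduces, e.g., the values $m(11)=175$ and $m(17)=595\,350$ shown above.

```python
def stirling2(n, k):
    """Stirling number of the second kind, by an iterative table."""
    S = [[0] * (n + 1) for _ in range(n + 1)]
    S[0][0] = 1
    for i in range(1, n + 1):
        for j in range(1, i + 1):
            S[i][j] = j * S[i - 1][j] + S[i - 1][j - 1]
    return S[n][k]

def maxs(n):
    """Largest Stirling number in row n."""
    return max(stirling2(n, k) for k in range(1, n + 1))

def m(n):
    """The exponent of the theorem: m(n) = maxs(k) * maxs(k - 1)
    with k = floor((n - 1) / 2)."""
    k = (n - 1) // 2
    return maxs(k) * maxs(k - 1)
```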
\begin{corollary}\label{coroLprd}
Let $n$ and $m$ be as in Theorem \ref{thmmain}. Then for every integer $t$ with $1\leq t\leq m$, the direct power $\Part n^t$ is four-generated. In particular, $\Part n$ itself is four-generated.
\end{corollary}
The second half of Corollary~\ref{coroLprd} shows that Theorem~\ref{thmmain} is a stronger statement than the Strietz--Z\'adori result; see \eqref{eqpbxstRszlT} in the Introduction. This corollary follows easily from Theorem~\ref{thmmain}.
\begin{proof}[Proof of Corollary~\ref{coroLprd}]
Since the natural projection $\Part n^m \to \Part n^t$, defined by $\tuple{x_1,\dots, x_m}\mapsto \tuple{x_1,\dots, x_t}$, sends a 4-element generating set into an at most 4-element generating set, Theorem~\ref{thmmain} applies.
\end{proof}
\begin{remark} We cannot say that $m=m(n)$ in Theorem~\ref{thmmain} is the largest possible exponent. First, because the proof that we are going to present relies on a particular construction and we do not know whether there exist better constructions for this purpose.
Second, because we use Stirling numbers of the second kind to give a lower estimate of the size of a maximum-sized antichain in partition lattices, and we know from
Canfield~\cite{canfield} that this estimate is not sharp. However, this fact would not lead to a reasonably esthetic improvement of Theorem~\ref{thmmain}.
\end{remark}
\begin{remark}\label{remnVrbG} If $n$ and $t$ are positive integers such that $n\geq 4$ and
\begin{equation}
t > \Bell n \cdot \Bell{n-1}\cdot \Bell{n-2}\cdot \Bell{n-3} ,
\label{eqthmBsrhzTq}
\end{equation}
then $\Part n^t$ is not four-generated. Thus, the exponent in Theorem~\ref{thmmain} cannot be arbitrarily large.
\end{remark}
The product occurring in \eqref{eqthmBsrhzTq} is much larger than $m(n)$ in \eqref{eqmSrpRdgB}. Hence, there is a wide interval of integers $t$ such that we do not know whether $\Part n^t$ is four-generated or not.
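A quick computation, assuming nothing beyond the Bell triangle recurrence, makes this gap concrete already for $n=12$; the numerical values, not the code, come from the paper's tables.

```python
def bell(n):
    """Bell number B(n), computed via the Bell triangle."""
    row = [1]
    for _ in range(n):
        new = [row[-1]]          # each row starts with the previous row's last entry
        for x in row:
            new.append(new[-1] + x)
        row = new
    return row[0]

# the bound of (eqthmBsrhzTq) for n = 12 versus m(12) = 175 from the tables
p = bell(12) * bell(11) * bell(10) * bell(9)
m12 = 175
```

Here `p` is about $7\cdot 10^{21}$, so the interval of undecided exponents $t$ is enormous even for $n=12$.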
\begin{proof}[Proof of Remark~\ref{remnVrbG}]
Let $p$ denote the product in \eqref{eqthmBsrhzTq}.
For the sake of contradiction, suppose that $t>p$ but $\Part n^t$ is generated by some $\set{\alpha,\beta,\gamma,\delta}$. Here $\alpha=\tuple{\alpha_1,\alpha_2,\dots,\alpha_t}$
with all the $\alpha_i\in\Part n$, and similarly for $\beta$, $\gamma$, and $\delta$. By the easy argument proving Corollary~\ref{coroLprd}, we know that $\set{\alpha_i,\beta_i,\gamma_i,\delta_i}$ generates $\Part n$ for all $i\in\set{1,\dots, t}$.
Since $\Part n$ is not 3-generated by Z\'adori~\cite{zadori}, the quadruple $\tuple{\alpha_i,\beta_i,\gamma_i,\delta_i}$ consists of pairwise distinct components. But there are only $p$ such quadruples, whereby the pigeonhole principle yields two distinct subscripts $i, j\in\set{1,\dots, t}$
such that $\tuple{\alpha_i,\beta_i,\gamma_i,\delta_i}=\tuple{\alpha_j,\beta_j,\gamma_j,\delta_j}$. Hence, for every quaternary lattice term $f$, we have that
$f(\alpha_i,\beta_i,\gamma_i,\delta_i)=f(\alpha_j,\beta_j,\gamma_j,\delta_j)$. This implies that for every
$\eta=\tuple{\eta_1,\dots,\eta_t}\in \sublat{\alpha,\beta,\gamma,\delta}$, we have that $\eta_i=\eta_j$. Thus, $\sublat{\alpha,\beta,\gamma,\delta}\neq \Part n^t$, which is a contradiction proving Remark~\ref{remnVrbG}.
\end{proof}
\begin{remark}\label{remngyznnGynrGcrsT} For a four-generated finite lattice $L$, the direct square $L^2$ of $L$ need not be four-generated. For example, if $L$ is the distributive lattice generated freely by four elements, then there exists no $t\geq 2$ such that $L^t$ is four-generated.
\end{remark}
\begin{proof}
Let $t\geq 2$, and let $L$ be the free distributive lattice on four generators. Observe that $L^t$ is distributive. So if $L^t$ were four-generated, then it would be a homomorphic image of $L$, whence $|L|^t=|L^t|\leq |L|$, which is a contradiction.
\end{proof}
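The size claim behind this argument can be checked by brute force. The sketch below counts the antichains, under inclusion, among the subsets of a four-element set; by the standard correspondence between such antichains and elements of the free bounded distributive lattice, this count is the Dedekind number $M(4)=168$, and dropping the two bounds gives the $166$ elements of the free distributive lattice on four generators.

```python
from itertools import combinations

def count_antichains():
    """Count antichains (under inclusion) among the 16 subsets of a
    four-element set; each subset is encoded as a 4-bit mask."""
    count = 0
    for fam in range(1 << 16):                 # every family of subsets
        members = [s for s in range(16) if (fam >> s) & 1]
        # an antichain has no member contained in another member
        if all(a & b != a and a & b != b
               for a, b in combinations(members, 2)):
            count += 1
    return count
```

Since $166^t > 166$ for $t\geq 2$, the cardinality argument of the proof above goes through.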
\begin{proof}[Proof of Theorem~\ref{thmmain}]
Since the notation of the elements of the base set is irrelevant, it suffices to show that $\Equ{Z(n)}^m$ is four-generated. No matter if $n$ is odd or even, we
use the notation $k$, $a_i$ and $b_j$ as in Figures~\ref{figd2} and \ref{figd3}. We are going to define $\valpha=\tuple{\alpha_1,\dots, \alpha_m}$, $\vbeta=\tuple{\beta_1,\dots, \beta_m}$, $\vgamma=\tuple{\gamma_1,\dots, \gamma_m}$, and $\vshd=\tuple{\shd_1,\dots, \shd_m}$ so that $\set{\valpha,\vbeta,\vgamma,\vshd}$ generates $\Equ{Z(n)}^m$. For every $i\in\set{1,\dots,m}$, $\alpha_i$, $\beta_i$, and $\gamma_i$ are defined as in Figures~\ref{figd2} and \ref{figd3}, that is, as in the proofs of Lemmas~\ref{lemmazadori} and \ref{lemmazeven}.
However, the definition of the equivalences $\shd_i$ is going to be more tricky.
Let $\delta_i:=\equ{a_0}{b_0}+\equ{a_k}{b_{k-1}}$, as in Lemmas~\ref{lemmazadori} and \ref{lemmazeven}. Note that
\begin{equation}
\text{none of $\alpha_i:=\alpha$, $\beta_i:=\beta$, $\gamma_i:=\gamma$, and $\delta_i:=\delta$ depends on $i$.}
\label{eqtxtngkSmfmRstsRz}
\end{equation}
We know from Lemmas~\ref{lemmazadori} and \ref{lemmazeven} that $\set{\alpha_i,\beta_i,\gamma_i,\delta_i}$ generates $\Equ{Z(n)}$. Therefore, for any two distinct elements $u$ and $v$ of $Z(n)$, we can pick a quaternary lattice term $\fterm u v= \fterm u v(\oal,\obe,\oga,\ode)$ with variables $\obmu:=\tuple{\oal,\obe,\oga,\ode}$ such that, in virtue of \eqref{eqczhhndPMtlvRdwdb} and \eqref{eqnmsdsulTVcX},
\begin{equation}\left.
\parbox{8.7cm}{depending on the parity of $n$, $\fterm u v$ is $\eterm u v$ from the proof of Lemma~\ref{lemmazadori} or it is
$\fterm u v^\ast$ from that of Lemma~\ref{lemmazeven}, and
$\fterm u v(\alpha_i,\beta_i,\gamma_i,\delta_i)=\equ u v \in \Equ{Z(n)}$.}\,\,\right\}
\label{eqfTmrzGb}
\end{equation}
By defining $\fterm u u$ to be the meet of its four variables, the validity of \eqref{eqfTmrzGb} extends to the case $u=v$, where $\equ u u$ is understood as the least partition, that is, the partition with all of its blocks being singletons.
Next, let
\begin{equation}
U:=\set{a_1,a_2,\dots, a_{k-1}}\,\text{ and }\,W:=\set{b_0,b_1,\dots,b_{k-1}};
\label{eqwmghUV}
\end{equation}
these sets are indicated by dashed ovals in Figures~\ref{figd2}, \ref{figd3}, and \ref{figd4}.
By the definition of $\maxs(k-1)$, we can pick an integer
$r'\in\NN$ such that there are exactly $\maxs(k-1)$ equivalences of $U$ with exactly $r'$ blocks. (By a block of an equivalence we mean a block of the corresponding partition.) Let $\aGa$ denote the set of these ``$r'$-block equivalences'' of $U$. Clearly, $\aGa$ is an antichain in $\Equ U$ with size $|\aGa|=\maxs(k-1)$.
Similarly, $\maxs(k)$ is the number of $r''$-block equivalences of $W$ for some $r''\in\NN$, and the $r''$-block equivalences of $W$ form an antichain $\aHa\subseteq \Equ W$ such that $|\aHa|=\maxs(k)$.
Observe that, in the direct product $\Equ U\times\Equ W$,
\begin{equation}
\aGa\times \aHa \text{ is an antichain}.
\label{eqanTiChaIn}
\end{equation}
Since $|\aGa\times \aHa|=|\aGa|\cdot|\aHa|=\maxs(k-1)\cdot \maxs(k)=m$, see \eqref{eqmSrpRdgB},
we can enumerate $\aGa\times \aHa$ in a repetition-free list of length $m$ as follows:
\begin{equation}
\aGa\times \aHa=\set{\pair{\kappa_1}{\lambda_1}, \pair{\kappa_2}{\lambda_2}, \dots, \pair{\kappa_m}{\lambda_m} }.
\label{eqchzTnFshRp}
\end{equation}
For each $i\in\set{1,\dots,m}$, we define $\shd_i$ as follows:
\begin{equation}
\shd_i:= \text{the equivalence generated by }\delta_i \cup \kappa_i \cup \lambda_i;
\label{eqctznBkPrDTrMp}
\end{equation}
this makes sense since each of $\delta_i$, $\kappa_i$ and $\lambda_i$ is a subset of $Z(n)\times Z(n)$.
Clearly, for any $x\neq y\in Z(n)$, $\pair x y\in \alpha\shd_i$ if and only if $\pair x y\in\kappa_i\cup\lambda_i\subseteq U^2\cup W^2$. This fact together with $\kappa_i\cap\lambda_i\subseteq U^2\cap W^2=\emptyset$ and \eqref{eqanTiChaIn} yield that
for any $i,j\in\set{1,2,\dots,m}$,
\begin{equation}
\text{if $i\neq j$, then $\alpha \shd_i$ and $\alpha \shd_j$ are incomparable.}
\label{pbxTfhhsslqkntLXn}
\end{equation}
\begin{figure}[htb]
\centerline
{\includegraphics[scale=1.0]{czgauthfig4}}
\caption{``zigzagged circles''
\label{figd4}}
\end{figure}
Next, we define
\begin{equation}
\text{the ``\emph{zigzagged circle}'' }\,\,
\tuple{d_0,d_1,\dots, d_{n-1}}
\label{eqtxtZgZghsvTjs}
\end{equation}
as follows; see also the thick edges and curves in Figure~\ref{figd4}.
(Note that the earlier meaning of the notation $d_0, d_1,\dots$ is no longer valid.)
For $i\in\set{0,\dots,k-1}$, we let $d_{2i}:=a_i$ and $d_{2i+1}=b_i$. We let $d_{2k}=a_k$ and, if $n=2k+2$ is even, then we let $d_{n-1}=c$. Two consecutive vertices of the zigzagged circle
will always be denoted by $d_p$ and $d_{p+1}$ where $p, p+1\in\set{0,1,\dots,n-1}$ and the addition is understood modulo $n$. The zigzagged circle has one or two \emph{thick curved edges}; they are $\pair{a_0}{a_k}$ for $n=2k+1$ odd and they are $\pair{a_0}c$ and $\pair{c}{a_k}$ for $n=2k+2$ even; the rest of its edges are \emph{straight thick edges}. So the zigzagged circle consists of the thick (straight and curved) edges, whereby the adjective ``thick'' will often be dropped.
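As a sanity check on the definition just given, a few lines suffice to rebuild the vertex sequence of the zigzagged circle; the string labels below are an illustrative encoding of the vertices $a_i$, $b_i$, and $c$ of the figures.

```python
def zigzagged_circle(n):
    """Vertex sequence <d_0, ..., d_{n-1}> of the zigzagged circle:
    d_{2i} = a_i and d_{2i+1} = b_i for i < k, then d_{2k} = a_k,
    and finally d_{n-1} = c when n = 2k + 2 is even."""
    k = (n - 1) // 2
    d = []
    for i in range(k):
        d.extend([f"a{i}", f"b{i}"])
    d.append(f"a{k}")
    if n % 2 == 0:
        d.append("c")
    return d
```

For every $n\geq 5$, the sequence has exactly $n$ vertices, as required.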
Next, we define some lattice terms associated with the edges of the zigzagged circle. Namely,
for $j\in\set{1,\dots,m}$ and for
$p\in \set{0,1,\dots,2k-1}$, we define the quaternary term
\begin{equation}
\begin{aligned}
\gterm j {d_p} {d_{p+1}}(\obmu):=
\fterm {d_p} {d_{p+1}}(\obmu)
&\cdot \prod_{\pair {d_p}x \in \alpha\shd_j} \bigl( \oal\ode + \fterm x{d_{p+1}}(\obmu) \bigr)
\cr
&\cdot
\prod_{\pair y{d_{p+1}}\in \alpha\shd_j} \bigl(\fterm {d_{p}}y(\obmu) + \oal\ode \bigr).
\end{aligned}
\label{eqcBmTzXQfFt}
\end{equation}
The assumption on $p$ means that \eqref{eqcBmTzXQfFt} defines $\gterm j {d_p} {d_{p+1}}(\obmu)$ for each straight edge of the zigzagged circle.
We claim that for all $j\in\set{1,\dots,m}$ and $p\in \set{0,1,\dots,2k-1}$,
\begin{equation}
\gterm j {d_p} {d_{p+1}}(\alpha_j,\beta_j,\gamma_j,\shd_j)=
\equ {d_p} {d_{p+1}}.
\label{eqCshTnGbW}
\end{equation}
In order to show \eqref{eqCshTnGbW}, observe that
$\beta_i\shd_i=\beta_i\delta_i=\equ{a_0}{b_0}$ and
$\gamma_i\shd_i=\gamma_i\delta_i=\equ{a_k}{b_{k-1}}$. These equalities, \eqref{eqpbxZbntGhSwD}, \eqref{eqnZhgjdlSrkhllRw}, and \eqref{eqfTmrzGb} yield that for any $u,v\in Z(n)$, $i\in\set{1,\dots,m}$, and $p\in\set{0,1,\dots,2k-1}$,
\begin{align}
\fterm {u} {v}(\alpha_i,\beta_i,\gamma_i,\shd_i)&=\equ {u} {v}\,\,\text{ and, in particular,} \label{eqalignZrtbvRsrst}
\\
\fterm {d_p} {d_{p+1}}(\alpha_i,\beta_i,\gamma_i,\shd_i)&=\equ {d_p} {d_{p+1}}.
\label{eqcHtRbkvWp}
\end{align}
Combining \eqref{eqcBmTzXQfFt} and \eqref{eqcHtRbkvWp},
we obtain the ``$\leq$'' part of \eqref{eqCshTnGbW}. In order to turn this inequality to an equality, we have to show that the pair
$\pair{d_p}{d_{p+1}}$ belongs to $\alpha\shd_j+\fterm x{d_{p+1}} (\alpha_j,\beta_j,\gamma_j,\shd_j ) $ for every $\pair {d_p}x \in \alpha\shd_j$, and it also belongs to
$\fterm {d_{p}}y(\alpha_j,\beta_j,\gamma_j,\shd_j) + \alpha\shd_j$ for every $\pair y{d_{p+1}}\in \alpha\shd_j$.
But this is trivial since $\pair x{d_{p+1}}\in \fterm x{d_{p+1}} (\alpha_j,\beta_j,\gamma_j,\shd_j )$ in the first case by \eqref{eqalignZrtbvRsrst}, and similarly trivial in the second case.
We have shown \eqref{eqCshTnGbW}.
Next, we claim that for any $i,j\in\set{1,\dots,m}$,
\begin{equation}\left.
\parbox{8cm}{if $i\neq j$, then there exists a $p\in
\set{0,1,\dots, 2k-1}$ such that
$\gterm j {d_p} {d_{p+1}}(\alpha_i,\beta_i,\gamma_i,\shd_i)=\enul:=0_{\Equ{Z(n)}}$.
}\,\,\right\}
\label{eqpbxZhgRblKnGgvnD}
\end{equation}
In order to prove \eqref{eqpbxZhgRblKnGgvnD}, assume that $i\neq j$. For an equivalence $\epsilon\in \Equ{Z(n)}$ and $x\in Z(n)$, the \emph{$\epsilon$-block} $\set{y\in Z(n): \pair x y\in \epsilon}$ of $x$ will be denoted by $x/\epsilon$.
We know from \eqref{pbxTfhhsslqkntLXn} that $\alpha\shd_j\not\leq\alpha\shd_i$. Hence, there is an element $x\in Z(n)$ such that $x/(\alpha\shd_j)\not\subseteq x/(\alpha\shd_i)$. Since $c/(\alpha\shd_j)=\set c= c/(\alpha\shd_i)$ for $n$ even, $x$ is distinct from $c$. Hence, $x$ is one of the endpoints of a straight edge $\pair{d_p}{d_{p+1}}$ of the zigzagged circle. This is how we can select a $p\in\set{0,1,\dots, 2k-1}$, that is,
a straight edge $\pair{d_p}{d_{p+1}}$ of the zigzagged circle \eqref{eqtxtZgZghsvTjs} such that
\begin{equation}
d_p/(\alpha\shd_j)\not\subseteq d_p/(\alpha\shd_i) \quad \text{ or }\quad
d_{p+1}/(\alpha\shd_j) \not\subseteq d_{p+1}/(\alpha\shd_i).
\label{eqchSdjgThRgRTrnL}
\end{equation}
Now, we are going to show that this $p$ satisfies the requirement of \eqref{eqpbxZhgRblKnGgvnD}. We can assume that the first part of the disjunction given in \eqref{eqchSdjgThRgRTrnL} holds, because the treatment for the second half is very similar. Pick an element
\begin{equation}
z\in d_p/(\alpha\shd_j)\text{ such that }z\notin d_p/(\alpha\shd_i).
\label{eqChClmTnsz}
\end{equation}
Because of \eqref{eqcHtRbkvWp} and the first meetand in \eqref{eqcBmTzXQfFt},
\begin{equation}
\gterm j {d_p} {d_{p+1}}(\alpha_i,\beta_i,\gamma_i,\shd_i)\leq \equ {d_p} {d_{p+1}}.
\label{eqnMsTsnWRStbHws}
\end{equation}
We claim that
\begin{equation}
\pair{d_p} {d_{p+1}}\notin \gterm j {d_p} {d_{p+1}}(\alpha_i,\beta_i,\gamma_i,\shd_i).
\label{eqhzSzNtlNsrLlgz}
\end{equation}
Suppose the contrary. Then, using \eqref{eqcBmTzXQfFt} and that
$\pair {d_p}z\in\alpha\shd_j$ by \eqref{eqChClmTnsz}, we have that
\begin{equation}
\pair{d_p} {d_{p+1}} \in \alpha\shd_i + \fterm z{d_{p+1}}(\alpha_i,\beta_i,\gamma_i,\shd_i)
\eeqref{eqalignZrtbvRsrst}
\alpha\shd_i + \equ z{d_{p+1}}.
\label{eqghzrBsjTvnScsz}
\end{equation}
According to \eqref{eqghzrBsjTvnScsz}, there exists a \emph{shortest} sequence $u_0=d_{p+1}$, $u_1$, \dots, $u_{q-1}$, $u_q=d_p$ such that for every $\ell\in\set{0,1,\dots, q-1}$,
either $\pair{u_\ell}{u_{\ell+1}}\in \alpha\shd_i$, which is called a \emph{horizontal step}, or $\pair{u_\ell}{u_{\ell+1}}\in \equ z{d_{p+1}}$, which is a \emph{non-horizontal step}.
There is at least one non-horizontal step since $d_p$ and $d_{p+1}$ are in distinct $\alpha$-blocks.
A non-horizontal step means that $\set{u_\ell,u_{\ell+1}} =\set{z,d_{p+1}}$, so $\set{z,d_{p+1}}$ is the only ``passageway'' between the two nonsingleton $\alpha$-blocks. Hence, there exists exactly one non-horizontal step since our sequence is repetition-free. This step is the first step since we have taken a shortest sequence. Hence,
$u_1=z$ and all the subsequent steps are horizontal steps.
Hence, $\pair {z}{d_p}=\pair {u_1}{d_p}\in\alpha\shd_i$.
Thus, $z\in d_p/(\alpha\shd_i)$, contradicting the choice of $z$ in \eqref{eqChClmTnsz}. This contradiction yields \eqref{eqhzSzNtlNsrLlgz}. Finally, \eqref{eqhzSzNtlNsrLlgz} together with \eqref{eqnMsTsnWRStbHws} imply \eqref{eqpbxZhgRblKnGgvnD}.
Next, for $j\in\set{1,2,\dots,m}$ and $q\in\set{0,1,\dots, n-1}$, we define the following quaternary term
\begin{equation}\left.
\begin{aligned}
\hterm j {d_q}{d_{q+1}}(\obmu):=
\cr\fterm {d_q}{d_{q+1}}(\obmu)
&\cdot
\prod_{p=0}^{2k-1}\bigl(\fterm {d_q}{d_p}(\obmu) + \gterm j{d_p}{d_{p+1}}(\obmu) + \fterm {d_{p+1}}{d_{q+1}}(\obmu)\bigr)\cr
&\cdot
\prod_{p=0}^{2k-1}\bigl(\fterm {d_q}{d_{p+1}}(\obmu) + \gterm j{d_p}{d_{p+1}}(\obmu) + \fterm {d_{p}}{d_{q+1}}(\obmu)\bigr),
\end{aligned}
\,\,\,\right\}
\label{eqttzsnJsjPl}
\end{equation}
where $q+1$ in subscript position is understood modulo $n$.
We claim that, for $q\in\set{0,1,\dots, n-1}$ and $i,j\in\set{1,\dots, m}$,
\begin{equation}
\hterm j {d_q}{d_{q+1}}(\alpha_i,\beta_i,\gamma_i,\shd_i)=
\begin{cases}
\equ {d_q}{d_{q+1}},&\text{if }\,\, i=j,\cr
\enul=0_{\Equ{Z(n)}},&\text{if }\,\, i\neq j.
\end{cases}
\label{eqHgtqrmBnSkWrkK}
\end{equation}
In virtue of \eqref{eqCshTnGbW}, \eqref{eqalignZrtbvRsrst}, and \eqref{eqcHtRbkvWp}, the validity of \eqref{eqHgtqrmBnSkWrkK} is clear when $i=j$. So, to prove \eqref{eqHgtqrmBnSkWrkK}, we can assume that $i\neq j$.
Since
$\hterm j {d_q}{d_{q+1}}(\alpha_i,\beta_i,\gamma_i,\shd_i)\leq \equ {d_q}{d_{q+1}}$ by \eqref{eqalignZrtbvRsrst} and \eqref{eqcHtRbkvWp}, it suffices to show that
$\pair{d_q}{d_{q+1}} \notin \hterm j {d_q}{d_{q+1}}(\alpha_i,\beta_i,\gamma_i,\shd_i)$. Suppose the contrary. Then we obtain from \eqref{eqalignZrtbvRsrst} and \eqref{eqttzsnJsjPl} that for
all $p\in\set{0,\dots,2k-1}$,
\begin{align}
\pair{d_q}{d_{q+1}} &\in \equ{d_q}{d_p} + \gterm j{d_p}{d_{p+1}}(\alpha_i,\beta_i,\gamma_i,\shd_i)
+
\equ{d_{p+1}}{d_{q+1}}\text{ and}
\label{eqlgnbmTzskWrNztp}\\
\pair{d_q}{d_{q+1}} &\in \equ{d_q}{d_{p+1}} + \gterm j{d_p}{d_{p+1}}(\alpha_i,\beta_i,\gamma_i,\shd_i) + \equ {d_{p}}{d_{q+1}}.
\label{fzhNtbmTzrTxT}
\end{align}
Now we choose $p$ according to \eqref{eqpbxZhgRblKnGgvnD}; then $\gterm j{d_p}{d_{p+1}}(\alpha_i,\beta_i,\gamma_i,\shd_i)$ can be omitted from \eqref{eqlgnbmTzskWrNztp} and \eqref{fzhNtbmTzrTxT}. Therefore, if $p=q$, then \eqref{eqlgnbmTzskWrNztp} asserts that $\pair{d_q}{d_{q+1}}\in\enul$, a contradiction. Note that, due to $n\geq 5$, $p=q$ is equivalent to $|\set{d_p,d_{p+1},d_q,d_{q+1}}|=2$, whence $|\set{d_p,d_{p+1},d_q,d_{q+1}}|=2$ has just been excluded.
If $|\set{d_p,d_{p+1},d_q,d_{q+1}}|=4$, then each of \eqref{eqlgnbmTzskWrNztp} and \eqref{fzhNtbmTzrTxT} gives a contradiction again. If $|\set{d_p,d_{p+1},d_q,d_{q+1}}|=3$, then exactly one of \eqref{eqlgnbmTzskWrNztp} and \eqref{fzhNtbmTzrTxT} gives a contradiction. Hence, no matter how $p$ and $q$ are related, we obtain a contradiction. This proves the $i\neq j$ part of \eqref{eqHgtqrmBnSkWrkK}. Thus, \eqref{eqHgtqrmBnSkWrkK} has been proved.
Finally, let $K:=\sublat{\valpha,\vbeta,\vgamma,\vshd}$; it is a sublattice of $\Equ{Z(n)}^m$ and we are going to show that
$K=\Equ{Z(n)}^m$. Let $j\in\set{1,\dots,m}$.
It follows from \eqref{eqHgtqrmBnSkWrkK} that
\begin{equation}
\tuple{
\enul,\dots,\enul,
\underbrace{\equ {d_q} {d_{q+1}}}
_{ j\text{-th entry} }
,\enul,\dots,\enul }
\in K,\text{ for all }q\in\set{0,\dots,n-1}.
\label{eqmsTlgYkZsWsG}
\end{equation}
Since the sublattice
\begin{equation*}
S_j:=\set{\enul}\times\dots\times \set{\enul}\times\Equ{Z(n)}\times \set{\enul}\times\dots\times \set{\enul}
\end{equation*}
with the non-singleton factor at the $j$-th place
is isomorphic to $\Equ{Z(n)}$, it follows from \eqref{eqmsTlgYkZsWsG} and Lemma~\ref{lemmaHamilt} that $S_j\subseteq K$, for all $j\in\set{1,\dots,m}$. Therefore, since every element of $\Equ{Z(n)}^m$ is of the form $s^{(1)}+ s^{(2)} + \dots + s^{(m)}$ with $s^{(1)}\in S_1$, \dots, $s^{(m)}\in S_m$, we obtain that $\Equ{Z(n)}^m \subseteq K$. Consequently,
$\Equ{Z(n)}^m =K = \sublat{\valpha,\vbeta,\vgamma,\vshd}$ is a four-generated lattice, as required.
The proof of Theorem~\ref{thmmain} is complete.
\end{proof}
\section{$(1+1+2)$-generation}\label{sectootwo}
By a $(1+1+2)$-generating set or, in other words, \emph{a generating subset of order type $1+1+2$}
we mean a four element generating set such that exactly two of the four elements are comparable.
Lattices having such a generating set are called \emph{$(1+1+2)$-generated.}
In his paper, Z\'adori~\cite{zadori} proved that for every integer $n\geq 7$, the partition lattice $\Part n$ is $(1+1+2)$-generated. In this way, he improved the result proved by Strietz~\cite{strietz2} from $\set{n: n\geq 10}$ to $\set{n: n\geq 7}$. In this section, we generalize this result to direct powers by the following theorem;
\eqref{pbxPartnmM} and \eqref{eqtxtmMxSdfLm} are still in effect.
\begin{theorem}\label{thmoot} Let $n\geq 7$ be an integer, let
$k:=\lfloor (n-1)/2 \rfloor$, and let
\begin{equation}
\csm=\csm(n):=
\max\left(\,\, \lfloor (k-1)/2 \rfloor,\,\, \maxs(\lfloor (k-1)/2 \rfloor)^2 \,\, \right).
\label{eqmSrmd nmrpRd}
\end{equation}
Then $\Part n^\csm$ or, equivalently, $\Equ n^\csm$ is $(1+1+2)$-generated. In other words, the $\csm$-th direct power of the lattice of all partitions of the set $\set{1,2,\dots, n}$ has a generating subset of order type $1+1+2$.
\end{theorem}
Note that $\csm$ above is at least $1$, and $\csm \geq 2$ if and only if $n\geq 11$.
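The formula \eqref{eqmSrmd nmrpRd} is easy to evaluate; the sketch below is an illustrative Python fragment, assuming that $\maxs$ denotes the largest Stirling number of the second kind in its row, as in the preceding sections. It reproduces the values $\csm(7)=1$, $\csm(17)=9$, and $\csm(19)=49$ of the tables, as well as the threshold $n\geq 11$ for $\csm\geq 2$.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Stirling number of the second kind S(n, k)."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def maxs(n):
    """Largest Stirling number in row n."""
    return max(stirling2(n, r) for r in range(1, n + 1))

def csm(n):
    """csm(n) = max(r, maxs(r)^2) with k = floor((n - 1) / 2)
    and r = floor((k - 1) / 2)."""
    k = (n - 1) // 2
    r = (k - 1) // 2
    return max(r, maxs(r) ** 2)
```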
\begin{figure}[htb]
\centerline
{\includegraphics[scale=1.0]{czgauthfig5}}
\caption{$Z(2k+2)$ for $k=23$
\label{figd5}}
\end{figure}
\begin{proof}
With our earlier conventions, we define $\alpha$, $\beta$, and $\gamma$ as in Sections~\ref{sectztrms} and \ref{sectprod}, see also \eqref{eqtxtngkSmfmRstsRz}, but we let $\delta:=\equ{a_0}{a_k}+\equ{b_0}{b_{k-1}}\in\Equ{Z(n)}$. For $n=47$, this is illustrated by Figure~\ref{figd5} if we omit vertex $c$. For $n=48$, Figure~\ref{figd5} is a faithful illustration without omitting anything but taking \eqref{eqtxtsdhClrsznzlD} into account. Instead of working with
$U$ and $W$ from \eqref{eqwmghUV}, we define these two sets as follows.
\begin{align}
U&:=\set{a_i: 1\leq i\leq k-2\text{ and }i\text{ is odd}}\text{ and}\label{eqhrTdfzgu}
\\
W&:=\set{b_i: 1\leq i\leq k-2\text{ and }i\text{ is odd}}.
\label{eqhrTdfzgv}
\end{align}
In Figure~\ref{figd5}, $U\cup W$ is the set of black-filled elements. Let $r:=\lfloor (k-1)/2 \rfloor$; note that $r=|U|=|W|$. Since $n\geq 7$, we have that $k\geq 3$ and $r\geq 1$.
Let $\bdelta$ be the equivalence on $Z(n)$ generated by
$\delta\cup U^2\cup W^2$. In other words, $\bdelta$ is the equivalence with blocks $\set{a_0,a_k,b_0,b_{k-1}}$, $U$, and $W$ such that the rest of its blocks are singletons.
Let
\[t:=2\cdot\lfloor(k+1)/2\rfloor -3\,\,\text{ and }\,\,T:=\set{i: 1\leq i\leq t \text{ and }i\text{ is odd} }.
\]
Based on Figure~\ref{figd5}, we can think of $T$ as the set of subscripts of the black-filled elements.
The blocks of $\gamma+\bdelta$ are the following:
\begin{align*}
&\set{a_0,a_k,b_0,b_{k-1}}\cup \set{a_i: i\in T}\cup\set{b_{i-1}:i\in T},\cr
&\set{a_{i+1}:i\in T}\cup\set{b_{i}:i\in T},\text{ and, if }k\text{ is even, }\set{a_{k-1},b_{k-2}},
\end{align*}
and, for $n$ even, $\set{c}$.
Hence, we obtain that
\begin{equation}
\equ{a_0}{b_0} =\beta(\gamma+\bdelta).
\label{eqrzhGrnszGsFpL}
\end{equation}
Similarly, the blocks of $\beta+\bdelta$ are $\set{c}$, if $n$ is even, and the following:
\begin{align*}
&\set{a_{i}:i\in T}\cup\set{b_{i}:i\in T},\,\,\,\set{a_0,a_k,b_0,b_{k-1}, a_{k-1}},\cr
&\text{and, for }i\in\set{2,3,\dots,k-2}\setminus T,\,\,\,\,\set{a_i,b_i}.
\end{align*}
Hence, it follows that
\begin{equation}
\equ{a_k}{b_{k-1}} =\gamma(\beta+\bdelta).
\label{eqnhsptdkjrTSz}
\end{equation}
In the proof of Theorem~\ref{thmmain}, based on \eqref{eqwmghUV}, $\aGa$, $\aHa$, \eqref{eqanTiChaIn}, and \eqref{eqchzTnFshRp}, we defined the equivalences $\shd_1$,\dots,$\shd_m$ in \eqref{eqctznBkPrDTrMp}. Now we define $\shd_1$,\dots,$\shd_\csm$ exactly in the same way but we use \eqref{eqhrTdfzgu} and \eqref{eqhrTdfzgv} instead of \eqref{eqwmghUV}, and we take into account that $U$ and $W$ are now smaller and we obtain $\csm$ rather than $m$ from them. Observe that
\begin{equation}
\equ{a_0}{b_0} =\beta(\gamma+\delta)\quad\text{ and }\quad
\equ{a_k}{b_{k-1}} = \gamma(\beta+\delta).
\label{eqezrhBsmjRtMcLc}
\end{equation}
Since $\delta\leq \shd_j\leq \bdelta$ for $j\in\set{1,\dots,\csm}$, it follows from \eqref{eqrzhGrnszGsFpL}, \eqref{eqnhsptdkjrTSz}, and \eqref{eqezrhBsmjRtMcLc} that
\begin{equation}
\equ{a_0}{b_0} =\beta(\gamma+\shd_j)\quad\text{ and }\quad
\equ{a_k}{b_{k-1}} = \gamma(\beta+\shd_j)
\label{eqwzmbHhGRdffkT}
\end{equation}
for all $j\in\set{1,\dots,\csm}$.
Armed with \eqref{eqwzmbHhGRdffkT}
and all the previous preparations, the rest of the proof is the same as in the case of Theorem~\ref{thmmain} unless $r=2$; these details are not repeated here.
Observe that the only role of $m$ in the proof of Theorem~\ref{thmmain} is that we had to find an $m$-element antichain in
$\Equ U\times \Equ W$. Similarly, if $r\neq 2$, then all we have to do with $\csm$ is to find an $\csm$-element antichain in $\Equ U\times \Equ W$. If $r\neq 2$, then we obtain such an antichain as the Cartesian product of an antichain of $\Equ U$ and one of $\Equ W$. If $r=2$, then this method does not work since $\Equ U$ and $\Equ W$ are (two-element) chains but $\csm=2$. However, $\Equ U\times \Equ W$ has a two-element antichain even in this case. This completes the proof of Theorem~\ref{thmoot}.
\end{proof}
Obviously, Theorem~\ref{thmoot} implies the following counterpart of Corollary~\ref{coroLprd}.
\begin{corollary}\label{corodjTd}
Let $n$ and $\csm$ be as in Theorem \ref{thmoot}. Then for every integer $t$ with $1\leq t\leq \csm$, the direct power $\Part n^t$ is $(1+1+2)$-generated.
\end{corollary}
\section{Authentication and secret key cryptography with lattices}\label{sectauth}
While lattice theory is rich with involved constructs and proofs, it seems not to have many, if any, applications in information theory. The purpose of this section is to suggest a protocol primarily for \emph{authentication}; it is also good for \emph{secret key cryptography}, and it could be appropriate for a \emph{commitment protocol}.
Assume that during the authentication protocol that we are going to outline, \aph{}\footnote{\emph{\aph} is the Hungarian version of Andrew;
as a famous lattice theorist with this first name, I mention my scientific advisor, \aph{} P.\ Huhn (1947--1985).} intends to prove his identity to his Bank and conversely; online, of course. In order to do so, \aph{} and the Bank should find a lattice $L$ with the following properties:
\begin{itemize}
\item $|L|$ is large,
\item $L$ has a complicated structure,
\item the length of $L$ is small (that is, all maximal chains of $L$ are small),
\item every non-zero element of $L$ has lots of lower covers and dually,
\item $L$ can be given by and constructed easily from little data,
\item and $L$ is generated by few elements.
\end{itemize}
The first four properties are to make the Adversary's task difficult (and practically impossible) while the rest of these properties ensure that \aph{} and the Bank can handle $L$. It is not necessary that $L$ has anything to do with partitions, but partition lattices and their direct powers seem to be good choices. Partition lattices are quite complicated since every finite lattice can be embedded into a finite partition lattice by Pudl\'ak and T\r uma~\cite{pudlaktuma}. Also, they are large lattices described by very little data. For example, we can take
\begin{align}
L&=\Part{273},\text{ its size is }|\Part{273}|\approx 3.35\cdot 10^{404},\text{ or}\label{eqZhgRtBfktszg}
\\
L&=\Part{12}^{61},\text{ its size is }|\Part{12}^{61}|\approx 1.27\cdot 10^{404}.
\label{eqdmzmgrYjHBbsgnwsztRtnk}
\end{align}
Although these two lattices seem to be similar in several aspects, each of them has some advantage over the other.
As opposed to $\Part{273}$,
\begin{equation}
\parbox{7.7cm}{joins can easily be computed componentwise in $\Part{12}^{61}$ if parallel computation is allowed.}
\end{equation}
On the other hand, using that $\Part{273}$ is a semimodular lattice and so any two of its maximal chains have the same length, it is easy to see that the longest chain in $\Part{273}$
is only of length 272 (that is, this chain consists of 273 elements). Using semimodularity again, it follows easily that the longest chain in $\Part{12}^{61}$ is of length $61\cdot 11=671$, so $\Part{273}$ seems to be more advantageous in this aspect. Based on data obtained by computer, to be presented in tables \eqref{tableD8gens} and \eqref{tablenFpgFns}, we guess that $\Part{273}$ has more $p$-element generating sets of an \emph{unknown pattern} than $\Part{12}^{61}$. If so, then this can also be an advantage of $\Part{273}$
since a greater variety of $p$-element generating sets of unknown patterns makes the Adversary's task even more hopeless. It is probably too early to weigh all the pros and cons of \eqref{eqZhgRtBfktszg}, \eqref{eqdmzmgrYjHBbsgnwsztRtnk} and, say, $\Part{113}^{3}$ with size $|\Part{113}|^{3}\approx 1.51\cdot 10^{405}$.
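The sizes quoted in \eqref{eqZhgRtBfktszg} and \eqref{eqdmzmgrYjHBbsgnwsztRtnk} are Bell numbers and their powers, so they can be reproduced exactly with arbitrary-precision integers; the sketch below assumes nothing beyond the Bell triangle recurrence.

```python
def bell(n):
    """Bell number B(n) = |Part n|, computed via the Bell triangle."""
    row = [1]
    for _ in range(n):
        new = [row[-1]]          # each row starts with the previous row's last entry
        for x in row:
            new.append(new[-1] + x)
        row = new
    return row[0]

size_single = bell(273)          # |Part 273|, about 3.35e404
size_power = bell(12) ** 61      # |Part 12 ^ 61|, about 1.27e404
```

Both sizes have 405 decimal digits, confirming that the two lattices are of comparable magnitude.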
\aph{} and the Bank choose two small integer parameters $p,q\geq 4$, the suggested value is $p=q=8$ or larger; these numbers can be public.
Also, \aph{} and the Bank agree upon a $p$-tuple
\begin{equation}
\ves=\tuple{s_1,s_2,\dots,s_{p}} \in L^{p}.
\label{eqphndThbnhnSpLSr}
\end{equation}
This $\ves$ is the common authentication code for \aph{} and the Bank; only they know it and they keep it secret. So far, the role of $\ves$ is that of the PIN (personal identification number) of a bank card.
Every time \aph{} intends to send an authenticated message to the Bank, the Bank selects a vector $\vew=\tuple{w_1,w_2,\dots, w_q}$ of long and complicated $p$-ary lattice terms
randomly. (We are going to discuss after \eqref{eqmnTltjrm} how to select $\vew$.)
Then the Bank sends $\vew$ to \aph{}.
(If \aph{} thinks that $\vew$ is not complicated enough, then he is allowed to ask for a more complicated $\vew$ repeatedly until he is satisfied with $\vew$.) Then, to prove his identity, \aph{} sends
\begin{equation}
\vew(\ves):=\tuple{w_1(s_1,\dots,s_p),\dots, w_q(s_1,\dots, s_p) }
\label{eqszhnKhKypNxGqr}
\end{equation}
to the Bank. (Preferably, in the same message that instructs the Bank to do something like transferring money, etc.) The Bank also computes $\vew(\ves)$ and compares it with what \aph{} has sent; if they are equal then the Bank can be sure that he communicates with \aph{} rather than with an adversary. Note that it is easy and fast to compute $\vew(\ves)$ from $\vew$ and $\ves$.
Note also that, changing their roles, \aph{} can also verify (by another $q$-tuple $\vew'$ of terms) that he communicates with the Bank rather than with the Adversary.
The point of the protocol is that while $\ves$ can be used many times,
a new $\vew$ is chosen at each occasion. So even if the Adversary intercepts the communication, he cannot use the old values of $\vew(\ves)$. So the Adversary's only chance to interfere is to extract
the secret $\ves$ from $\ver:=\vew(\ves)$. However, extracting $\ves$ from $\ver$ and $\vew$ seems to be hard. (This problem is in NP and hopefully it is not in P.) The Adversary cannot test all possible $p$-tuples $\ves'\in L^p$ since there are astronomically many such tuples. The usual iteration technique to find a root of a function $\mathbb R^p\to \mathbb R$ is not applicable here since, in general,
\begin{equation}
\text{it is unlikely that two elements of $L$ are comparable,}
\label{eqtxtnmctKzSszbwhnT}
\end{equation}
simply because the length of $L$ is small but $|L|$ is large. It is also unlikely that two members of $L^q$ are comparable.
If the Adversary begins parsing, say, $r_1:=w_1(\ves)$, then even the first step splits into several directions since $r_1\in L$ has many lower and upper covers and so there are many possibilities to represent it as the join of two elements (in case the outermost operation sign in $w_1$ is $\vee$) or as the meet of two elements (in case the outermost operation sign is $\wedge$).
Each of these several possibilities splits into several cases at the next step, and this happens many times depending on the length of $w_1$. But $w_1$ is a long term, whence exponentially many sub-directions should be handled, which is not feasible.
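The branching just described can be measured even on a toy scale. The following Python sketch (illustration only) counts, in $\Part 4$, the ordered pairs $(a,b)$ with $a\vee b$ equal to the top element, i.e., the number of ways a single outermost join could be split.

```python
def all_partitions(xs):
    """All partitions of the list xs, as lists of blocks."""
    if not xs:
        yield []
        return
    head, rest = xs[0], xs[1:]
    for smaller in all_partitions(rest):
        for i in range(len(smaller)):
            yield smaller[:i] + [smaller[i] + [head]] + smaller[i + 1:]
        yield [[head]] + smaller

def join(p, q):
    """Join in Part(n): merge the blocks of p along the blocks of q."""
    blocks = [set(B) for B in p]
    for C in q:
        touching = [B for B in blocks if B & C]
        merged = set(C).union(*touching)
        blocks = [B for B in blocks if not (B & C)] + [merged]
    return frozenset(frozenset(B) for B in blocks)

parts = [frozenset(frozenset(b) for b in pp) for pp in all_partitions([0, 1, 2, 3])]
top = frozenset({frozenset({0, 1, 2, 3})})
pairs = sum(1 for a in parts for b in parts if join(a, b) == top)
print(len(parts), pairs)  # 15 partitions; 119 ordered pairs with a v b = top
```

Already in the tiny lattice $\Part 4$, more than half of all ordered pairs join to the top element; in $\Part{273}$, the number of such representations is astronomical.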
Some caution is necessary when choosing the common secret authentication code $\ves$. This $\ves$ should be chosen so that $\sublat{s_1,s_2,\dots, s_p}=L$ or at least $\sublat{s_1,s_2,\dots, s_p}$ should be very large. One possibility to ensure that $\set{s_1,s_2,\dots, s_p}$ generates $L$ is to extend a four-element generating set from Sections~\ref{sectztrms}--\ref{sectootwo} to a $p$-element subset of $L$.
If $L=\Part{273}$, then one can pick a permutation $\tau$ of the set $\set{1,2,\dots,273}$; this $\tau$ induces an automorphism $\otau$ of $\Part{273}$ in the natural way, and
$\set{\otau(\alpha),\otau(\beta),\otau(\gamma),\otau(\delta)}$
with $\alpha,\dots, \delta$ from Section~\ref{sectztrms} is
a four-element generating set of $\Part{273}$.
If $L=\Part{12}^{61}$, then in addition to the permutations of $\set{1,2,\dots,12}$, allowing different permutations in the direct factors,
there are many ways to select a 61-element antichain as a subset of the 175-element maximum-sized antichain that occurs in \eqref{eqanTiChaIn}. (Note that we obtained this number, 175, when computing the last column of \eqref{tablerdDbsa}.) In both cases, \aph{} and the Bank can easily pick one of the astronomically many four-element generating sets described in the present paper. A four-element generating set can be extended to a $p$-element one in many ways. It would be even better to pick a $p$-element generating set of an unknown pattern, but it is not clear at this moment how this would be possible.
\aph{} and the Bank should also be careful when selecting a $q$-tuple $\vew=\tuple{w_1,\dots,w_q}$ of complicated $p$-ary lattice terms. They should avoid that, for $i\in\set{1,\dots,q}$, the outermost operation symbol in $w_i$ is $\wedge$ and $w_i(\ves)$ is meet irreducible (or it has only a few upper covers), and dually, and similarly for most of the subterms of $w_i$.
In particular, $w_i(\ves)\in\set{0,1}$ should not happen.
To exemplify the ideas presented below, consider the (short) lattice term
\begin{equation}
x_4\Bigl(x_5+\Bigl(\bigl((x_1x_8+x_2x_3)\cdot(x_4x_5+x_3x_6)\bigr)+\bigl(x_2x_8+(x_3x_4)x_7\bigr)\Bigr)\Bigr);
\label{eqmnTltjrm}
\end{equation}
there are 15 \emph{occurrences} of variables in this term.
That is, if we represented this term by a binary tree in the usual way, then this tree would have 15 leaves.
Now, to choose a random term $w_1$, we can begin with a randomly chosen variable. Then, we iterate the following, say, a thousand times: after picking an occurrence of a variable in the already constructed term randomly (we denote this occurrence by $x_i$), selecting two of the $p$ variables, and picking one of the two operation symbols, we replace $x_i$ by the meet or the join of the two variables selected, depending on which operation symbol has been picked.
When choosing the two variables and the operation symbol mentioned above, we can exclude that the replacement immediately ``cancels by the absorption laws''. (Or, at least, we have to be sure that this does not happen too often.)
For example, it seems to be reasonable to forbid that $x_6$ in \eqref{eqmnTltjrm} is replaced by $x_3+x_7$. Although we can choose the occurrence mentioned in the previous paragraph according to the uniform distribution, it can be advantageous to go after a distribution that takes the \emph{depths} of the occurrences into account somehow.
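The growth procedure of the previous two paragraphs can be sketched as follows in Python; the absorption-law filter and the depth-sensitive distribution are omitted here, and the term encoding is our own.

```python
import random

def random_term(p, steps, rng):
    """Grow a random p-ary lattice term: start from a single variable and
    repeatedly replace a uniformly chosen leaf by a join/meet of two
    randomly chosen variables (terms are mutable nested lists)."""
    term = ["var", rng.randrange(p)]
    leaves = [term]
    for _ in range(steps):
        leaf = leaves.pop(rng.randrange(len(leaves)))
        a = ["var", rng.randrange(p)]
        b = ["var", rng.randrange(p)]
        op = rng.choice(["join", "meet"])
        leaf[:] = [op, a, b]      # in-place replacement of the occurrence
        leaves += [a, b]
    return term

def leaf_count(t):
    """Number of occurrences of variables in the term t."""
    return 1 if t[0] == "var" else leaf_count(t[1]) + leaf_count(t[2])

t = random_term(8, 1000, random.Random(0))
print(leaf_count(t))  # 1001: each step adds one occurrence of a variable
```

Each replacement turns one leaf into an inner node with two new leaves, so after $k$ steps the term has exactly $k+1$ occurrences of variables.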
If $q=p$, which is recommended, then it is desirable that $\vew(\ves)$ should be far from $\ves$ and, in addition, that the elements
$w_1(\ves)$, \dots, $w_q(\ves)$ should be far from each other, from $0=0_L$, from $1$, and from $s_1$, \dots, $s_p$. By ``far'', we mean that the usual graph theoretical \emph{distance} in the Hasse diagram of $L$ or that of $L^q$ is larger than a constant. Hence, while developing $w_1$ randomly, one can monitor $w_1(\ves)$ and interfere with the random process from time to time if necessary.
If $L$ is from \eqref{eqZhgRtBfktszg} or \eqref{eqdmzmgrYjHBbsgnwsztRtnk}, then $L$ is a semimodular lattice, so any two maximal chains of $L$ consist of the same number of elements.
In this case, the above-mentioned distance of $x,y\in L$ can be computed quite easily; see for example Cz\'edli, Powers, and White~\cite[equation (1.8)]{czpw}. Namely, the distance of $x$ and $y$ is
\begin{equation}
\distance x y =\length([x,x+y])+\length([y,x+y]).
\label{eqdlanghshsRw}
\end{equation}
Since any two maximal chains of $\Equ n$ are of the same size, it follows easily that $\length([x,x+y])$ is the difference of the number of $x$-blocks and the number of $(x+y)$-blocks, and similarly for
$\length([y,x+y])$.
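A sketch of this distance computation in Python (our illustration; a partition is represented as a set of blocks, so the number of its blocks is just the number of elements of the representation):

```python
def join(p, q):
    """Join in Part(n): merge the blocks of p along the blocks of q."""
    blocks = [set(B) for B in p]
    for C in q:
        touching = [B for B in blocks if B & C]
        merged = set(C).union(*touching)
        blocks = [B for B in blocks if not (B & C)] + [merged]
    return frozenset(frozenset(B) for B in blocks)

def distance(x, y):
    """Graph distance of x and y in the Hasse diagram of Part(n):
    dist(x,y) = length([x, x v y]) + length([y, x v y]), where the
    length of [x, x v y] is #blocks(x) - #blocks(x v y)."""
    j = join(x, y)
    return (len(x) - len(j)) + (len(y) - len(j))

x = frozenset({frozenset({0, 1}), frozenset({2}), frozenset({3})})
y = frozenset({frozenset({1, 2}), frozenset({0}), frozenset({3})})
print(distance(x, y))  # 2: both x and y are one step below their join {{0,1,2},{3}}
```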
Several questions about the strategy remain open, but future experiments with computer programs can lead to satisfactory answers. However, even after obtaining good answers, the reliability of the above-described protocol would still remain a question of belief to some extent. This is not unexpected, since many modern cryptographic and similar protocols rely on the belief that certain problems, like factoring an integer or computing discrete logarithms, are hard.
Besides authentication, our method is also good for \emph{cryptography}. Assume that \aph{} and the Bank have previously agreed upon $\ves$; see \eqref{eqphndThbnhnSpLSr}.
Then one of them can send a random $\vew$ to the other.
They can both compute $\vew(\ves)$, see \eqref{eqszhnKhKypNxGqr}, but the Adversary cannot since even if he intercepts $\vew$, he does not know $\ves$. Hence, \aph{} and the Bank can use $\vew(\ves)$ as the secret key of a classical cryptosystem like Vernam's. Such a secret key cannot be used repeatedly many times but \aph{} and the Bank can select a new $\vew$ and can get a new key $\vew(\ves)$ as often as they wish.
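A toy version of this encryption step in Python; note that the paper does not specify how $\vew(\ves)$ is turned into key bits, so the hash-based key derivation below is purely our assumption.

```python
import hashlib

def keystream(shared: bytes, nbytes: int) -> bytes:
    """Hypothetical key derivation: hash a canonical byte encoding of
    w(s) in counter mode to obtain as many key bytes as needed."""
    out, ctr = b"", 0
    while len(out) < nbytes:
        out += hashlib.sha256(shared + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:nbytes]

def vernam(data: bytes, shared: bytes) -> bytes:
    """XOR (Vernam) cipher; applying it twice restores the plaintext."""
    ks = keystream(shared, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

shared = b"canonical encoding of w(s)"   # placeholder for the real w(s)
msg = b"transfer 100 to account 42"
ct = vernam(msg, shared)
print(vernam(ct, shared) == msg)  # True
```

As the text stresses, such a derived key should be used for one session only; a fresh $\vew$ yields a fresh key.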
Next, we conjecture that \aph{} can lock a \emph{commitment} $\ves$ by making $\vew(\ves)$ public.
To be more precise, the protocol is that there is a Verifier who chooses $\vew$, and then \aph{} computes $\ver=\vew(\ves)$
with the Verifier's $\vew$ and makes this $\ver$ public.
From that moment, \aph{} cannot change his commitment $\ves$, nobody knows what this $\ves$ is, but armed with $\vew$ and $\ver$, everybody can check \aph{} when he reveals $\ves$. Possibly, some stipulations should be tailored to $\ves$ and $\vew$ in this situation.
\begin{equation}
\lower 0.8 cm
\vbox{\tabskip=0pt\offinterlineskip
\halign{\strut#&\vrule#\tabskip=1pt plus 2pt&
#\hfill& \vrule\vrule\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule\tabskip=0.1pt#&
#\hfill\vrule\vrule\cr
\vonal\vonal\vonal\vonal
&&\hfill$n$&&$\,4$&&$\,5$&&$\,6$&&$\,7$&&$8$&&$9$&
\cr\vonal\vonal
&& \hfill $|\Part n|$ &&$15$&&$52$&&$203$&&$877$&&$4\,140$&&$21\,147$&
\cr\vonal
&&\hfill$|\forall$8-sets$|$&&$6435$&&$7.53 \cdot 10^8$&&$6.22\cdot 10^{13}$&&$8.41\cdot 10^{18}$&&$2.13\cdot 10^{24}$&&$9.91\cdot 10^{29}$&
\cr\vonal
&&\hfill$|$tested$|$&&$100\,000$&&$10\,000$&&$10\,000$&&$6000$&&$1000$&&$284$&
\cr\vonal
&&\hfill$|$found$|$&&$89\,780$&&$7\,690$&&$7913$&&$5044$&&$848$&&$248$&
\cr
\vonal
&&\hfill \% &&$89.78$&&$76.90$&&$79.13$&&$84.01$&&$84.80$&&$90.19$&
\cr
\vonal\vonal\vonal\vonal
}}
\label{tableD8gens}
\end{equation}
Finally, we have developed and used a computer program to see if there are sufficiently many $8$-element generating subsets and $n$-element generating sets of $\Part n$.
This program, written in Bloodshed Dev-Pascal
v1.9.2 (Freepascal) under Windows 10 and partially in
Maple V. Release 5 (1997), is available from the author's website; see the list of publications there.
The results obtained with the help of this program are reported in Tables~\ref{tableD8gens} and \ref{tablenFpgFns}. The first, \dots, sixth rows of Table~\ref{tableD8gens} give
the size $n$ of the base set,
the size of $\Part n$,
the number of 8-element subsets of $\Part n$,
the number of randomly selected 8-element subsets,
the number of those selected 8-element subsets that generate $\Part n$,
and the percentage of these generating 8-element subsets with respect to the number of the selected 8-element subsets, respectively. These subsets were selected independently according to the uniform distribution; a subset could be selected more than once.
Table~\ref{tablenFpgFns} is practically the same but the $n$-element (rather than 8-element) subsets generating $\Part n$ are counted in it.
\begin{equation}
\lower 0.8 cm
\vbox{\tabskip=0pt\offinterlineskip
\halign{\strut#&\vrule#\tabskip=1pt plus 2pt&
#\hfill& \vrule\vrule\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule#&
\hfill#&\vrule\tabskip=0.1pt#&
#\hfill\vrule\vrule\cr
\vonal\vonal\vonal\vonal
&&\hfill$n$&&$\,4$&&$\,5$&&$\,6$&&$\,7$&&$8$&&$9$&
\cr\vonal\vonal
&& \hfill $|\Part n|$ &&$15$&&$52$&&$203$&&$877$&&$4\,140$&&$21\,147$&
\cr\vonal
&&\hfill$|\forall n$-sets$|$&&$1365$&&$2\,598\,960$&&$9.2\cdot 10^{10}$&&$7.73\cdot 10^{16}$&&$2.13\cdot 10^{24}$&&$2.33\cdot 10^{33}$&
\cr\vonal
&&\hfill$|$tested$|$&&$100\,000$&&$10\,000$&&$10\,000$&&$10000$&&$1000$&&$\phantom{f}$&
\cr\vonal
&&\hfill$|$found$|$&&$89\,780$&&$1430$&&$3918$&&$6811$&&$848$&&$\phantom{f}$&
\cr
\vonal
&&\hfill \% &&$89.78$&&$14.30$&&$39.18$&&$68.11$&&$84.80$&&$\phantom{f}$&
\cr
\vonal\vonal\vonal\vonal
}}
\label{tablenFpgFns}
\end{equation}
Computing the last column of Table~\ref{tableD8gens} took 73 hours for a desktop computer
with AMD Ryzen 7 2700X Eight-Core Processor 3.70 GHz; this explains why no more 8-element subsets have been tested for Table~\ref{tableD8gens} and why the last column of Table~\ref{tablenFpgFns} is partly missing. After computing the columns for $n=4$ and $n=5$ in Tables~\ref{tableD8gens} and \ref{tablenFpgFns}, we expected that the number in the percentage row (the last row) would decrease as $n$ grows. To our surprise, the opposite happened.
Based on these two tables, we guess that $p=n$ should be appropriate, and even $p=8$ could be, in the protocol if $n=273$ and
$L$ is taken from \eqref{eqZhgRtBfktszg}.
\subsection*{Chronology and comparison, added on \chronologydatum}
The first version of the present paper
was uploaded
to \texttt{https://arxiv.org/abs/2004.14509}
on April 29, 2020.
A related \emph{second paper} dealing with direct products rather than direct powers
was completed
and uploaded to
\texttt{http://arxiv.org/abs/2006.14139}
on June 25, 2020.
(This second paper pays no attention to authentication and cryptography.)
The present paper corrects a few typos and minor imperfections but it is not significantly different from its April 29, 2020 version. Although a particular case of the second paper also tells something about the four-generation of direct powers of finite partition lattices, the present paper, yielding larger exponents and paying attention to $(1+1+2)$-generation, tells more. For example, while the four-generability of $\Part {2020}^{10^{127}}$ is almost explicit in the second paper and the maximum we can extract from \emph{that} paper is approximately the four-generability
of $\Part {2020}^{10^{604}}$, the last column of Table \eqref{tablerdDbsg} in the \emph{present} paper guarantees a significantly larger exponent, $5.5194232\cdot10^{3893}$. (The corresponding value, $5.52\cdot10^{3893}$, from Table \eqref{tablerdDbsg} was obtained by rounding up.) | 0.002293 |
House Sitter in San Diego
We are busy professionals, but not too busy to care for your home. We will pick up mail, water plants, do light yard work, housekeeping, etc. We also have two kittens who are very clean and neat.

yan Haggins, Teacher and UC Management Professional, to care for your home while our new home is being built
| 0.149738 |
Keeping an eye on the NBA and Seattle's efforts to get back into the game
Saturday, October 20th, 2007
Posted by Eric Williams @ 08:23:19 pm
Tracy McGrady leads the Rockets with 23 points, as Houston is manhandling the Sonics right now.
Delonte West has been one of the few bright spots for Seattle. He has 16 points on 7-for-8 shooting, and three assists. Kevin Durant has 11 points for Seattle.
Just like against L.A., Seattle appears overmatched inside against Houston. The Rockets have 20 points in the paint compared to Seattle's 10 points.
Seattle's running game has basically been taken away because they can't get any stops.
We'll see if Seattle can get into any kind of rhythm in the second half. | 0.000844 |
\begin{document}
\bibliographystyle{amsalpha}
\title[Waring decompositions for a polynomial vector]{On the number of Waring decompositions for a generic polynomial vector\\
}
\author[E. Angelini, F. Galuppi, M. Mella]{Elena Angelini, Francesco Galuppi, Massimiliano Mella}
\address[E.Angelini, F. Galuppi, M. Mella]{ Dipartimento di Matematica e Informatica\\ Universit\`a di Ferrara\\ Via Machiavelli 35\\ 44121 Ferrara, Italia}
\email{elena.angelini@unife.it, francesco.galuppi@unife.it, mll@unife.it}
\author[G. Ottaviani]{Giorgio Ottaviani}
\address[G. Ottaviani]{Dipartimento di Matematica e Informatica 'Ulisse Dini'\\
Universit\`{a} di Firenze \\ Viale Morgagni 67/A \\ Firenze , Italia}
\email{ottavian@math.unifi.it}
\begin{abstract} We prove that a general polynomial vector $(f_1, f_2, f_3)$ in three homogeneous variables of degrees $(3,3,4)$
has a unique Waring decomposition of rank 7. This is the first new case we are aware of, and likely the last one, after five examples known since the 19th century and
the binary case. We prove that there are no identifiable cases among pairs $(f_1, f_2)$ in three homogeneous variables
of degree $(a, a+1)$, unless $a=2$, and we give a lower bound on the number of decompositions. The new example was discovered with Numerical Algebraic Geometry, while its proof needs Nonabelian Apolarity.
\end{abstract}
\maketitle
\section{Introduction}\label{sec:intr}
Let $f_1$, $f_2$ be two general quadratic forms in $n+1$ variables over $\C$.
A well known theorem, which goes back to Jacobi and Weierstrass, says that $f_1$, $f_2$ can be simultaneously diagonalized. More precisely there exist linear forms $l_0,\ldots, l_n$ and scalars $\lambda_0,\ldots, \lambda_n$ such that
\begin{equation}\label{eq:2quadrics}
\left\{\begin{array}{rcl}f_1&=&\sum_{i=0}^nl_i^2\\
\hspace{0.2cm} \\
f_2&=&\sum_{i=0}^n\lambda_il_i^2\end{array}\right.
\end{equation}
An important feature is that the forms $l_i$ are unique (up to order) and their equivalence classes, up to scalar multiples, depend only on the pencil $\left< f_1, f_2\right>$, hence also the $\lambda_i$
are uniquely determined after $f_1$, $f_2$ have been chosen in this order.
The canonical form (\ref{eq:2quadrics}) allows one to write easily the
basic invariants of the pencil, like the discriminant which takes the form $\prod_{i<j}(\lambda_i-\lambda_j)^2$. We call (\ref{eq:2quadrics}) a (simultaneous) Waring decomposition of
the pair $(f_1, f_2)$. The pencil $(f_1,f_2)$ has a unique Waring decomposition with $n+1$ summands if and only if its discriminant does not vanish. In the tensor terminology, $(f_1, f_2)$ is {\it generically identifiable}.
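For readers who want to experiment, the following Python sketch recovers the decomposition (\ref{eq:2quadrics}) for a generic pair of binary quadrics ($n=1$) from the generalized eigenvalue problem $A_2v=\lambda A_1v$, where $A_1,A_2$ are the symmetric matrices of $f_1,f_2$. This is our illustration of the classical argument, not code from the literature.

```python
import cmath

def waring_pair(A1, A2, tol=1e-9):
    """Simultaneous Waring decomposition of a generic pair of binary
    quadrics: returns (C, lams), where the rows c_i of C are the linear
    forms and f1 = sum_i c_i^2, f2 = sum_i lams[i] * c_i^2 over C.
    Assumes a generic pencil: A1 invertible, distinct eigenvalues."""
    (a, b), (_, d) = A1
    det1 = a * d - b * b
    # M = A1^{-1} A2; its eigenvalues are the scalars lambda_i.
    inv1 = [[d / det1, -b / det1], [-b / det1, a / det1]]
    M = [[sum(inv1[i][t] * A2[t][j] for t in range(2)) for j in range(2)]
         for i in range(2)]
    tr, dt = M[0][0] + M[1][1], M[0][0] * M[1][1] - M[0][1] * M[1][0]
    root = cmath.sqrt(tr * tr - 4 * dt)
    lams = [(tr + root) / 2, (tr - root) / 2]
    cols = []
    for lam in lams:
        v = (M[0][1], lam - M[0][0])          # null vector of the first row
        if abs(v[0]) < tol and abs(v[1]) < tol:
            v = (lam - M[1][1], M[1][0])      # fall back to the second row
        # Normalize so that v^T A1 v = 1 (always possible over C).
        q = v[0] * (a * v[0] + b * v[1]) + v[1] * (b * v[0] + d * v[1])
        s = cmath.sqrt(q)
        cols.append((v[0] / s, v[1] / s))
    # With V^T A1 V = I and V^T A2 V = diag(lams), the rows of C = V^{-1}
    # satisfy A1 = C^T C and A2 = C^T diag(lams) C.
    detV = cols[0][0] * cols[1][1] - cols[1][0] * cols[0][1]
    C = [[cols[1][1] / detV, -cols[1][0] / detV],
         [-cols[0][1] / detV, cols[0][0] / detV]]
    return C, lams

# f1 = x^2 + (x+y)^2 and f2 = 2x^2 + 3(x+y)^2:
C, lams = waring_pair([[2, 1], [1, 1]], [[5, 3], [3, 3]])
print(sorted(l.real for l in lams))  # [2.0, 3.0]
```

The eigenvectors of $A_1^{-1}A_2$ are automatically $A_1$-orthogonal for distinct eigenvalues, which is the computational content of the uniqueness statement above.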
We now generalize the decomposition (\ref{eq:2quadrics}) to $r$ general forms, even allowing different degrees. For symmetry reasons, it is convenient not to distinguish $f_1$ from the other $f_j$'s, so we will allow scalars $\lambda^j_i$ in the decomposition of each $f_j$, including $f_1$. To be precise, let $ f=(f_1, \ldots, f_r) $ be a vector of general homogeneous forms of degrees $ a_1, \ldots, a_r $ in $n+1$ variables over the complex field $ \C $, i.e. $ f_i \in {\mathrm Sym}^{a_i} \C^{n+1} $ for all $ i \in \{1, \ldots, r\} $. We assume that $ 2 \leq a_{1} \leq \ldots \leq a_{r} $.
\begin{defn}\label{simdec}
A \emph{Waring decomposition} of $ f=(f_1, \ldots, f_r) $ is given by
linear forms $ \ell_1, \ldots, \ell_k \in \mathbb{P}(\C^\vee) $ and scalars $ (\lambda_{1}^{j}, \ldots, \lambda_k^{j}) \in \mathbb{C}^{k}-\{\underline{0}\} $ with $ j \in \{1, \ldots, r\} $ such that
\begin{equation}\label{eq:dec}
f_{j} = \lambda_{1}^{j}\ell_{1}^{a_{j}}+ \ldots + \lambda_{k}^{j}\ell_{k}^{a_{j}}
\end{equation}
for all $ j \in \{1, \ldots, r\} $
or in vector notation
\begin{equation}\label{eq:simwaring}
f=\sum_{i=1}^k\left(\lambda_{i}^{1}\ell_{i}^{a_{1}},\ldots, \lambda_{i}^{r}\ell_{i}^{a_{r}}\right)
\end{equation}
The geometric argument in \S \ref{subsec:projbundle} shows that every $f$ has a Waring decomposition. We consider two Waring decompositions of $f$ as in (\ref{eq:simwaring}) to be equal if they differ just by the order of the $k$ summands. The {\it rank} of
$f$ is the minimum number $k$ of summands appearing in (\ref{eq:simwaring}); this definition coincides with the classical one
in the case $r=1$ (the vector $f$ given by a single polynomial).
\end{defn}
Due to the presence of the scalars $\lambda^j_i$, each form $\ell_{i}$ depends essentially only on $n$ conditions. So the decomposition (\ref{eq:dec}) may be thought of as a nonlinear system with $\sum_{i=1}^r{{a_i+n}\choose n}$ data (given by $f_{j}$) and $k(r+n)$ unknowns (given by $kr$ scalars $\lambda^j_i$ and $k$ forms $\ell_{i}$). This is a very classical subject, see for example \cite{Re, Lon, Ro, Sco, Te2}, although in most of the classical papers the degrees $a_i$ were assumed equal, with the notable exception of \cite{Ro}.
\begin{defn}\label{d:perfectcases}
Let $ a_1,\ldots, a_r, n$ be as above.
\noindent The space $ {\mathrm Sym}^{a_1} \C^{n+1}\oplus\ldots\oplus {\mathrm Sym}^{a_r} \C^{n+1}$
is called \emph{perfect} if there exists $k$ such that
\begin{equation}\label{eq:perfect}
\sum_{i=1}^r{{a_i+n}\choose n} = k(r+n)
\end{equation}
i.e. when (\ref{eq:dec}) corresponds to a square polynomial system.
\end{defn}
The arithmetic condition (\ref{eq:perfect}) means that $\sum_{i=1}^r{{a_i+n}\choose n}$ is divisible by $(r+n)$; in other words,
the number of summands $k$ in the system (\ref{eq:dec}) is uniquely determined.
The case with two quadratic forms described in (\ref{eq:2quadrics}) corresponds to $r=2$, $a_1=a_2=2$, $k=n+1$ and it is perfect. The perfect cases are important because, by the above dimensional count, we expect finitely many Waring decompositions for the generic polynomial vector
in a perfect space $ {\mathrm Sym}^{a_1} \C^{n+1}\oplus\ldots\oplus {\mathrm Sym}^{a_r} \C^{n+1}$.
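The arithmetic in Definition \ref{d:perfectcases} is easy to automate; the following Python sketch (illustration only) computes the expected number of summands $k$ for some of the cases discussed in this paper.

```python
from math import comb

def perfect_k(degrees, n):
    """Number of summands k from the perfectness condition, or None if
    the space Sym^{a_1} + ... + Sym^{a_r} in n+1 variables is not perfect."""
    N = sum(comb(a + n, n) for a in degrees)  # total number of coefficients
    r = len(degrees)
    return N // (r + n) if N % (r + n) == 0 else None

print(perfect_k([2, 2], 3))       # 4  (pairs of quadrics, k = n + 1)
print(perfect_k([5], 2))          # 7  (Hilbert, ternary quintics)
print(perfect_k([3], 3))          # 5  (Sylvester's pentahedral theorem)
print(perfect_k([2, 2, 2, 2], 2)) # 4
print(perfect_k([2, 3], 2))       # 4  (Roberts)
print(perfect_k([3, 3, 4], 2))    # 7  (the new case of Theorem 1.4)
```

For pairs of ternary forms of degrees $(a,a+1)$, the routine returns an integer exactly when $a$ is even, in accordance with the discussion before Theorem \ref{th:main_identifi_intro}.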
It may happen that general elements in perfect spaces have no decompositions with the expected
number $k$ of summands. The first example, besides the one of plane
conics, was found by Clebsch in the XIXth century
and regards ternary quartics, where $r=1$, $a_1=4$ and $n=2$. Equation (\ref{eq:perfect}) gives $k=5$ but
in this case the system (\ref{eq:dec}) has no solutions and indeed $6$ summands are needed to find a Waring decomposition of the general ternary quartic.
It is well known that all the perfect cases with $r=1$ in which the system (\ref{eq:dec}) has no solutions have been determined by Alexander and Hirschowitz,
while more cases for $r\ge 2$ have been found in \cite{CaCh}, where a collection of classical and modern interesting examples is listed.
Still, perfectness is a necessary condition to have finitely many Waring decompositions.
So two natural questions, of increasing difficulty, arise.
\vskip 0.4cm
{\bf Question 1} Are there other perfect cases for $a_1,\ldots, a_r, n$, beyond (\ref{eq:2quadrics}), where a unique Waring decomposition
(\ref{eq:simwaring}) exists for generic $f$, namely where we have generic identifiability?
\vskip 0.4cm
{\bf Question 2} Compute the number of Waring decompositions (up to order of summands) for a generic $f$ in any perfect case.
\vskip 0.4cm
The above two questions are probably quite difficult, but we feel it is worth stating them as guiding problems.
These two questions are open even in the case $r=1$ of a single polynomial. In case $r=1$, Question 1 has a conjectural answer due to the third author,
who proved many cases of this Conjecture in \cite{Me, Me1}. The birational technique used in these papers has been generalized to our setting in \S \ref{sec:ternary} of this paper.
Still in the case $r=1$, the number of decompositions for some small $a_1$ and $n$ has been computed (with high probability) in \cite{HOOS} by homotopy continuation techniques, with the numerical software Bertini \cite{Be}.
In this paper we contribute to the above two questions. Before stating our conclusions, we still need to expose other known results on this topic.
In the case $n=1$ (binary forms) there is a result by Ciliberto and Russo \cite{CR}
which completely answers our Question 1.
\begin{thm}[Ciliberto-Russo]\label{thm:binforms0}
Let $n=1$. In all the perfect cases there is a unique Waring decomposition
for generic $f\in {\mathrm Sym}^{a_1}\C^2\oplus\ldots\oplus {\mathrm Sym}^{a_r}\C^2$ if and only if
$ a_{1}+1 \geq \frac{\sum_{i=1}^r(a_i+1)}{r+1} $. (Note that the fraction $\frac{\sum_{i=1}^r(a_i+1)}{r+1}$ equals the number $k$ of summands.)
\end{thm}
We will provide alternative proofs of Theorem \ref{thm:binforms0} by
using Apolarity; see Theorem \ref{thm:nonabelian_applied2}.
As widely expected, for $n>1$ generic identifiability is
quite a rare phenomenon. Identifiable cases have been extensively
investigated in the XIX$^{\rm th}$ century and at the beginning of the
XX$^{\rm th}$ century, and the following are the only discovered cases that we are aware of:
\begin{equation}\label{eq:classiclist}
\left\{\begin{array}{ll}
(i) &({\mathrm Sym}^2\C^n)^{\oplus 2}, \textrm{rank\ } n,\textrm{Weierstrass \cite{We}, as in (\ref{eq:2quadrics})},\\
(ii) & {\mathrm Sym}^5\C^3, \textrm{rank }7, \textrm{Hilbert \cite{Hi}, see also
\cite{Ri} and \cite{Pa}},\\
(iii)&{\mathrm Sym}^3\C^4, \textrm{rank } 5,\textrm{Sylvester Pentahedral Theorem \cite{Sy}},\\
(iv)& ({\mathrm Sym}^2\C^3)^{\oplus 4}, \textrm{rank\ } 4,\\
(v)&{\mathrm Sym}^2\C^3\oplus {\mathrm Sym}^3\C^3,\textrm{rank\ }4,\textrm{Roberts \cite{Ro}.}
\end{array}\right.
\end{equation}
The interest in Waring decompositions was revived by
Mukai's work on 3-folds, \cite{Mu}\cite{Mu1}. Since then many authors
devoted their energies to understand, interpret and expand the theory.
Cases $(ii)$ and $(iii)$ in (\ref{eq:classiclist}) were explained by Ranestad and Schreyer in
\cite{RS} by using syzygies, see also \cite{MM} for an approach via
projective geometry and \cite{OO} for a vector bundle approach (called in this paper ``Nonabelian Apolarity'', see \S\ref{sec:Nonabelian}). Case $(v)$ was reviewed in \cite{OS} in
the setting of Lueroth quartics. $(iv)$ is a classical and ``easy'' result: there is a unique Waring decomposition of a
general 4-tuple of ternary quadrics. There is
a very nice geometric interpretation for this latter case. Four points in $\p^5$ define a
$\p^3$ that cuts the Veronese surface in 4 points giving the required
unique decomposition. See Remark \ref{rem:d^n} for a generalization to arbitrary $(d,n)$.
Our main contribution with respect to unique decompositions is the following new case.
\begin{thm}
\label{thm:main334} A general $f\in {\mathrm Sym}^3\C^3\oplus {\mathrm Sym}^3\C^3\oplus
{\mathrm Sym}^4\C^3$ has a unique Waring decomposition of rank 7, namely it is identifiable.
\end{thm}
The Theorem will be proved in the general setting of Theorem \ref{thm:nonabelian_applied2}.
Besides the new example found, we think it is important to stress the
way it arose.
We adapted the methods in \cite{HOOS} to our setting, by using the software Bertini \cite{Be} and also the package {\it Numerical Algebraic Geometry} \cite{KL}
in Macaulay2 \cite{M2}, with the generous help by Jon Hauenstein and Anton Leykin, who assisted us in writing our first scripts.
The computational analysis of perfect cases of forms on $\C^3$ suggested that for ${\mathrm Sym}^3\C^3\oplus {\mathrm Sym}^3\C^3\oplus
{\mathrm Sym}^4\C^3$ the
Waring decomposition is unique. Then we proved it via
Nonabelian Apolarity with the choice of a vector bundle.
Another novelty of this paper is a unified proof
of almost all cases with a unique Waring decomposition via Nonabelian
Apolarity with the choice of a vector bundle $E$, see Theorem \ref{thm:nonabelian_applied2}. Finally we borrowed a construction
from \cite{MM} to prove, see Theorem \ref{thm:unirat}, that whenever we have uniqueness for rank $k$
then the variety parametrizing Waring decompositions of higher rank is
unirational.
Pick $r=2$ and $n=2$; the space ${\mathrm Sym}^a\C^3\oplus{\mathrm Sym}^{a+1}\C^3$ is perfect
if and only if $a=2t$ is even. All the numerical computations we did suggested that identifiability holds only for $a=2$ (by Roberts' Theorem, see (\ref{eq:classiclist}) $(v)$). Once again this
pushed us to prove the non-uniqueness for these pencils of plane curves.
Our main contribution to Question 2 regards this case and it is the following.
\begin{thm}
\label{th:main_identifi_intro} A general $f\in{\mathrm Sym}^{a}\C^3\oplus{\mathrm Sym}^{a+1}\C^3$ is identifiable
if and only if $a=2$, corresponding to (v) in the list (\ref{eq:classiclist}). Moreover
$f$ has finitely many Waring decompositions if and only if $a=2t$ and in this
case the number of decompositions is at least
$$ \frac{(3t-2)(t-1)}2+1. $$
\end{thm}
We know by equation~(\ref{eq:classiclist})(v) that the bound is sharp for $t=1$ and we verified with high probability, using \cite{Be}, that it
is attained also for $t=2$. On the other hand we do not expect it to be sharp in general.
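The lower bound of Theorem \ref{th:main_identifi_intro} is elementary to evaluate; a small Python sketch (illustration only):

```python
def lower_bound(t):
    """The bound (3t-2)(t-1)/2 + 1 on the number of Waring decompositions
    of a general pair of ternary forms of degrees (2t, 2t+1)."""
    return (3 * t - 2) * (t - 1) // 2 + 1

print([lower_bound(t) for t in (1, 2, 3)])  # [1, 3, 8]
```

For $t=1$ the bound is $1$, matching the uniqueness in case $(v)$ of (\ref{eq:classiclist}); for $t=2$ it gives $3$, the value attained in the numerical verification mentioned above.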
Theorem \ref{th:main_identifi_intro} is proved in \S\ref{sec:ternary}. The main idea, borrowed from \cite{Me}, is to
bound the number of decompositions with the degree of a tangential
projection, see Theorem \ref{th:birational_tangent_proj}. To bound the
latter we use a degeneration argument, see Lemma
\ref{lem:birational_degeneration}, that reduces the computation needed
to an intersection calculation on the plane.
\begin{ack} We thank all the participants of the seminar about Numerical Algebraic Geometry held among Bologna, Ferrara, Firenze and Siena in 2014-15, for fruitful and stimulating discussions. We benefited in particular from speaking with A. Bernardi, C. Bocci, A. Calabri, L. Chiantini. We thank J. Hauenstein and A. Leykin for their help with our first numerical computations. All the authors are members of GNSAGA-INDAM.
\end{ack}
\section{The Secant construction}\label{sec:secant}
\subsection{Secant Varieties}
Let us recall, next, the main definitions and results concerning secant varieties.
Let $\Gr_k=\Gr(k,N)$ be the Grassmannian of $k$-linear spaces in $\p^N$.
Let $X\subset\p^{N}$ be an irreducible variety and let
$$\Gamma_{k+1}(X)\subset X\times\cdots\times X\times\Gr_k$$
be the closure of the graph of
$$\alpha:(X\times\cdots\times X)\setminus\Delta\to \Gr_k,$$
taking $(x_0,\ldots,x_{k})$ to $[\langle
x_0,\ldots,x_{k}\rangle]$, for a $(k+1)$-tuple of distinct points.
Observe that $\Gamma_{k+1}(X)$ is irreducible of dimension $(k+1)n$.
Let $\pi_2:\Gamma_{k+1}(X)\to\Gr_k$ be
the natural projection.
Denote by
$$S_{k+1}(X):=\pi_2(\Gamma_{k+1}(X))\subset\Gr_k.$$
Again $S_{k+1}(X)$ is irreducible of dimension $(k+1)n$.
Finally let
$$I_{k+1}=\{(x,[\Lambda])| x\in \Lambda\}\subset\p^{N}\times\Gr_k,$$
with natural projections $\pi_i$ onto the factors.
Observe that $\pi_2:I_{k+1}\to\Gr_k$ is a $\p^{k}$-bundle on $\Gr_k$.
\begin{defn}\label{def:secant} Let $X\subset\p^{N}$ be an irreducible variety. The {\it abstract $k$-Secant variety} is
$$\sec_k(X):=\pi_2^{-1}(S_k(X))\subset I_k.$$ While the {\it $k$-Secant variety} is
$$\Sec_k(X):=\pi_1(\sec_k(X))\subset\p^N.$$
It is immediate that $\sec_k(X)$ is a $(kn+k-1)$-dimensional variety with a
$\p^{k-1}$-bundle structure on $S_k(X)$. One says that $X$ is
$k$-\emph{defective} if $$\dim\Sec_k(X)<\min\{\dim\sec_k(X),N\}$$ and calls the $ k $-\emph{defect} the number $$\delta_{k}=\min\{\dim\sec_k(X),N\}-\dim\Sec_k(X).$$
\end{defn}
\begin{rem} Let us stress that in our definition $\Sec_1(X)=X$. A simple but useful feature of the above definition is the following. Let $\Lambda_1$ and $\Lambda_2$ be two distinct
$k$-secant $(k-1)$-linear space to $X\subset\p^{N}$. Let $\lambda_1$ and $\lambda_2$ be
the corresponding projective $(k-1)$-spaces in $\sec_k(X)$.
Then we have $\lambda_1\cap \lambda_2=\emptyset$.
\label{re:vuoto}
\end{rem}
Here is the main result we use about secant varieties.
\begin{thm}[Terracini Lemma \cite{Te}\cite{ChCi}] \label{th:terracini}
Let $X\subset\p^{N}$ be an irreducible, projective variety.
If $p_1,\ldots, p_{k}\in X$ are general points and $z\in\langle
p_1,\ldots, p_{k}\rangle$
is a general point, then the embedded tangent space at $z$ is
$$\T_z\Sec_k(X) = \langle \T_{p_1}X,\ldots, \T_{p_{k}}X\rangle$$
If $X$ is $k$-defective, then the general hyperplane $H$ containing
$\T_{z}\Sec_k(X)$ is tangent to $X$ along a variety
$\Sigma(p_1,\ldots, p_{k})$ of pure, positive dimension, containing
$p_1,\ldots, p_{k}$.
\end{thm}
\subsection{Secants to a projective bundle}\label{subsec:projbundle}
We show a geometric interpretation of the decomposition (\ref{eq:dec}) by considering the $k$-secant variety to the projective bundle (see \cite[II, \S 7]{Har})
$$ X=\mathbb{P}(\mathcal{O}_{\p^n}(a_{1}) \oplus \ldots \oplus \mathcal{O}_{\p^n}(a_{r}) ) \subset \mathbb{P}\left(H^0\left(\oplus_i \mathcal{O}_{\p^n}(a_{i})\right)\right) = \mathbb{P}^{N-1}, $$
where $N=\sum_{i=1}^r{{a_i+n}\choose n}$. We denote by $\pi\colon X\to\p^n$ the bundle projection. Note that $\dim X=(r+n-1)$ and the immersion in $\mathbb{P}^{N-1}$ corresponds to the canonical invertible sheaf $\o_X(1)$
constructed on $X$ (\cite[II, \S 7]{Har}).
Indeed $X$ is parametrized by
$\left(\lambda^{(1)}\ell^{a_{1}}, \ldots, \lambda^{(r)}\ell^{a_{r}} \right)\in \oplus_{i=1}^rH^0\left(\mathcal{O}_{\p^n}(a_{i})\right)$, where $\ell\in\C^{n+1}$ and the $\lambda^{(i)}$ are scalars. $X$ coincides with the set of polynomial vectors of rank $1$, as defined in the Introduction.
It follows that the $k$-secant variety to $X$ is parametrized by
$\displaystyle{\sum_{i=1}^{k}}\left(\lambda_{i}^{1}\ell_{i}^{a_{1}}, \ldots, \lambda_{i}^{r}\ell_{i}^{a_{r}} \right), $ where $\lambda_i^j$ are scalars and $\ell_i\in\C^{n+1}$. In the case $a_i=i$ for $i=1,\ldots, d$, this construction appears already in
\cite{CQU}. Since $X$ is not contained in a hyperplane, it follows that any
polynomial vector has a Waring decomposition as in (\ref{eq:simwaring}).
Thus, the number of decompositions of $ f_{1}, \ldots, f_{r} $ by means of $ k $ linear forms equals the $k$-\emph{secant degree} of $X$.
If $ a_{i} = a $ for all $ i \in \{1, \ldots, r\} $, then we deal with $ \mathbb{P}^{r-1}\times\p^n $ embedded through the Segre-Veronese map with $ \o(1,a) $, as we can see in Proposition 1.3 of \cite{DF} or in \cite{BBCC}. \\
Moreover, we remark that being in a perfect case in the sense of Definition \ref{d:perfectcases} is equivalent to the variety $ \mathbb{P}(\mathcal{O}_{\p^n}(a_{1}) \oplus \ldots \oplus \mathcal{O}_{\p^n}(a_{r}) ) $ being \emph{perfect}, i.e. $(n+r)|N$. \\
Theorem \ref{thm:binforms0} has the following reformulation
(compare with Claim 5.3 and Proposition 1.14 of \cite{CR}):
\begin{cor}\label{c:identifiability}
If (\ref{eq:perfect}) and $ a_{1}+1 \geq k $ hold, then $ \mathbb{P}(\mathcal{O}_{\p^1}(a_{1}) \oplus \ldots \oplus \mathcal{O}_{\p^1}(a_{r}) ) $ is $ k $-identifiable, i.e. its $k$-secant degree is equal to $ 1 $.
\end{cor}
\begin{rem}
A formula for the dimension of the $k$-secant variety of the rational normal scroll $X$ for $n=1$ has been given in \cite[p. 359]{CaJo} (with a sign mistake,
corrected in \cite[Prop. 1.14]{CR}).
\end{rem}
\begin{rem}\label{rem:d^n}
We may consider the Veronese variety $V:=V_{d,n}\subset\p^{{{d+n}\choose{n}}-1}$. Let $s-1={\rm cod}\, V$; then $s$ general points determine a unique $\p^{s-1}$ that intersects $V$ in $d^n$ points. The $d^n$ points are linearly independent
only if $d^n=s$, that is either $n=1$ or $(d,n)=(2,2)$. This shows that a
general vector $f=(f_1,\ldots,f_s)$ of forms of degree $d$ admits
${d^n}\choose{s}$ decompositions, see the table at the end of \S\ref{sec:compapp} for some numerical examples. On the other hand, from a
different perspective, dropping the requirement that the linear forms
giving the decompositions are linearly independent, this shows that
there is a unique set of $d^n$ linear forms that decompose the general
vector $f$. Note that this time only the forms, and not the coefficients,
are uniquely determined. We will not dwell on this point of view here
and leave it for a forthcoming paper.
\end{rem}
\section{Nonabelian Apolarity and Identifiability}
\label{sec:binary}\label{sec:Nonabelian}
Let $f\in {\mathrm Sym}^dV$. For any $e\in{\mathbb Z}$, Sylvester constructed the catalecticant map
$C_f\colon {\mathrm Sym}^eV^*\to {\mathrm Sym}^{d-e}V$ which is the contraction by $f$. Its main property is the inequality $\mathrm{rk\ }C_f\le\mathrm{rk\ } f$,
where the rank on the left-hand side is the rank of a linear map, while the rank on the right-hand side has been defined in the Introduction.
In particular the $(k+1)$-minors of $C_f$ vanish on
the variety of polynomials with rank bounded by $k$, which is $\Sec_k(V_{d,n})$.
{The catalecticant map behaves well with polynomial vectors.
If $f\in \oplus_{i=1}^r{\mathrm Sym}^{a_i} V $, for any $e\in{\mathbb Z}$ we define the catalecticant map
$C_f\colon {\mathrm Sym}^eV^*\to \oplus_{i=1}^r{\mathrm Sym}^{a_i-e}V$ which is again the contraction by $f$. If $f$ has rank one, this means that there exist
$\ell\in V$ and scalars $\lambda^{(i)}$ such that
$f=\left(\lambda^{(1)}\ell^{a_{1}}, \ldots, \lambda^{(r)}\ell^{a_{r}} \right)$.
It follows that $\mathrm{rk\ }C_f\le 1$, since the image of $C_f$ is generated by
$\left(\lambda^{(1)}\ell^{a_{1}-e}, \ldots, \lambda^{(r)}\ell^{a_{r}-e} \right)$, which is zero
if and only if $a_r<e$. By linearity we get the basic inequality
$$\mathrm{rk\ }C_f\le\mathrm{rk\ } f.$$ Again the $(k+1)$-minors of $C_f$ vanish on
the variety of polynomial vectors with rank bounded by $k$, which is $\Sec_k(X)$, where $X$ is the projective bundle defined in \S\ref{subsec:projbundle}.
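For binary forms ($V=\C^2$) the catalecticant in the monomial bases is a Hankel matrix of the ``moment'' sequence of $f$, and the inequality above can be tested numerically. The following sketch is our own illustration, not taken from the sources cited here; it assumes the normalization $f=\sum_i\binom{d}{i}a_ix^{d-i}y^i$ and compares two quartics of rank $2$ and rank $3$.

```python
import numpy as np

def waring_coeffs(alpha, beta, d):
    # "moment" sequence a_i of (alpha*x + beta*y)^d, with the
    # normalization f = sum_i binom(d,i) a_i x^(d-i) y^i
    return np.array([alpha**(d - i) * beta**i for i in range(d + 1)], dtype=float)

def catalecticant(a, e):
    # Hankel matrix of the moment sequence a: the contraction
    # C_f : Sym^e V* -> Sym^(d-e) V in the monomial bases
    d = len(a) - 1
    return np.array([[a[i + j] for j in range(e + 1)] for i in range(d - e + 1)])

d, e = 4, 2
# f1 = x^4 + y^4 (rank 2), f2 = x^4 + y^4 + (x+y)^4 (rank 3)
a1 = waring_coeffs(1, 0, d) + waring_coeffs(0, 1, d)
a2 = a1 + waring_coeffs(1, 1, d)
print(np.linalg.matrix_rank(catalecticant(a1, e)))  # 2
print(np.linalg.matrix_rank(catalecticant(a2, e)))  # 3
```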
A classical example is the following. Assume $V=\C^3$. London showed in \cite{Lon} (see also \cite{Sco}) that a pencil of ternary cubics
$f=(f_1,f_2)\in {\mathrm Sym}^3V\oplus{\mathrm Sym}^3V$ has border rank $5$
if and only if $\det C_f=0$ where
$C_f\colon {\mathrm Sym}^2V^*\to V\oplus V$ is represented by a $6\times 6$
matrix (see \cite[Remark 4.2]{CaCh} for a modern reference). Indeed $\det C_f$ is the equation
of $\Sec_5(X)$ where $X$ is the Segre-Veronese variety $\left(\p^1\times\p^2,\o_X(1,3)\right)$. Note that $X$ is $5$-defective according to Definition
\ref{def:secant} and this phenomenon is pretty similar to the case of Clebsch quartics recalled in the introduction.
The following result goes back to Sylvester.
\begin{prop}[Classical Apolarity]
Let $f=\sum_{i=1}^kl_i^d\in {\mathrm Sym}^dV$, let $Z=\{l_1,\ldots, l_k\}\subset V$.
Let $C_f\colon {\mathrm Sym}^eV^*\to {\mathrm Sym}^{d-e}V$ be the contraction by $f$. Assume the rank of $C_f$ equals $k$.
Then $${\mathrm BaseLocus}\ker \left(C_f\right)\supseteq Z.$$
\end{prop}
\begin{proof} The Apolarity Lemma (see \cite{RS}) says that $I_Z\subset f^{\perp}$, which reads in degree $e$ as
$H^0(I_Z(e))\subset \ker C_f$. Look at the subspaces in this inclusion as subspaces of $H^0(\p^n,\o(e))$. The assumption on the rank implies that (compare with the proof of \cite[Prop. 4.3]{OO})
$${\mathrm codim\ }H^0(I_Z(e))\le k={\mathrm rk\ }C_f = {\mathrm codim\ } \ker C_f,$$
hence we have the equality $H^0(I_Z(e))=\ker C_f$. It follows $${\mathrm BaseLocus}\ker \left(C_f\right)
={\mathrm BaseLocus}H^0(I_Z(e))\supseteq Z.$$\end{proof}
Classical Apolarity is a powerful tool to recover $Z$ from $f$,
and hence to write down a minimal Waring decomposition of $f$.
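For binary forms this recovery is Sylvester's classical algorithm: writing the catalecticant as the Hankel matrix of the moment sequence of $f$ (normalized as $f=\sum_i\binom{d}{i}a_ix^{d-i}y^i$), its kernel is spanned by a binary form whose zeros $[\alpha:\beta]$ correspond to the summands $\alpha x+\beta y$. A minimal numerical sketch, our own illustration:

```python
import numpy as np

def moments(alpha, beta, d):
    # moment sequence of (alpha*x + beta*y)^d
    return np.array([alpha**(d - i) * beta**i for i in range(d + 1)], dtype=float)

d, e = 3, 2
# f = (x + y)^3 + (x - y)^3, a binary cubic of rank k = 2
a = moments(1, 1, d) + moments(1, -1, d)
H = np.array([[a[i + j] for j in range(e + 1)] for i in range(d - e + 1)])
# kernel of the 2x3 catalecticant: a single binary quadric g
_, _, Vt = np.linalg.svd(H)
g = Vt[-1]                      # coefficients (g0, g1, g2) of g0*x^2 + g1*x*y + g2*y^2
roots = np.roots(g)             # dehomogenized zeros t = x/y of g
print(np.sort_complex(roots))   # close to [-1, 1]: the forms x - y and x + y, up to scale
```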
The following Proposition \ref{prop:nonabelian} is a further generalization; it reduces to classical apolarity when $(X,L)=(\p V,{\o}(d))$
and $E={\o}(e)$ is a line bundle. The vector bundle $E$ may have larger rank, which explains the name
Nonabelian Apolarity.
We recall that the natural map $H^0(E)\otimes H^0(E^*\otimes L)\to H^0(L)$
induces the linear map $H^0(E)\otimes H^0(L)^*\to H^0(E^*\otimes L)^*$,
then for any $f\in H^0(L)^*$ we have the contraction map
$A_f\colon H^0(E)\to H^0(E^*\otimes L)^*$.
\begin{prop}[Nonabelian Apolarity]\label{prop:nonabelian}\cite[ Prop. 4.3]{OO}
Let $X$ be a variety, $L\in Pic(X)$ a very ample line bundle which gives the embedding
$X\subset \p\left(H^0(X,L)^*\right)=\p W$. Let $E$ be a vector bundle on $X$.
Let $f=\sum_{i=1}^k w_i\in W$ with $z_i=[w_i]\in\p W$, let $Z=\{z_1,\ldots, z_k\}
\subset\p W$.
This induces the contraction map $A_f\colon H^0(E)\to H^0(E^*\otimes L)^*$.
Assume that $\mathrm{rk}A_f=k\cdot \mathrm{rk}E$.
Then
${\mathrm {BaseLocus}}\ker \left(A_f\right)\supseteq Z$.
\end{prop}
In all the cases where we apply this result we will compute $\mathrm{rk}A_f$ separately.
Nonabelian Apolarity enhances the power of Classical Apolarity
and may detect a minimal Waring decomposition of a polynomial in some cases when Classical Apolarity fails, see next Proposition \ref{prop:nonabelian_applied}.
Our main examples start with the quotient bundle $Q$ on $\p^n=\p(V)$, which has rank $n$ and is defined by the Euler exact sequence
$$0\rig{}\o(-1)\rig{}\o\otimes V^*\rig{}Q\rig{}0.$$
Let $L=\o(d)$ and $E=Q(e)$. Any $f\in {\mathrm Sym}^dV$ induces
the contraction map
\begin{equation}\label{eq:contractionq}
A_f\colon H^0(Q(e))\to H^0(Q^*(d-e))^*\simeq H^0(Q(d-e-1))^*.
\end{equation}
The following was the argument used in \cite{OO} to prove cases (ii) and (iii)
of (\ref{eq:classiclist}).
\begin{prop}
\label{prop:nonabelian_applied} Let $X$ be a variety, $L\in Pic(X)$ a
very ample line bundle and $E$ a vector bundle on
$X$ with ${\rm rk}E=\dim X$. Let $[f]\in \p(H^0(L)^*)$ be a general point, $k=
\frac{h^0(X,L)}{\dim X+1}$, and
$A_f\colon H^0(E)\to H^0(E^*\otimes L)^*$ the {contraction} map.
Assume that $\mathrm{rk}A_f=k\cdot \mathrm{rk}E$, and
$c_{\mathrm{rk}E}(E)=k$. Assume moreover that
for a specific $f$ the base locus of $\ker A_f$ is given by $k$ points.
Then the $k$-secant map
$$\pi_k:\sec_k(X)\to\p(H^0(L)^*)$$
is birational. The assumptions are verified in the following cases, corresponding to (ii) and (iii) of (\ref{eq:classiclist}).
$$\begin{array}{c|c|c|c}
(X,L) &H^0(L)& \textrm{rank\ }&E\\
\hline\\
(\p^2,\o(5))& {\mathrm Sym}^5\C^3& 7&Q_{\p^2}(2)\\
(\p^3,\o(3))&{\mathrm Sym}^3\C^4& 5&Q_{\p^3}^*(2)\\
\end{array}$$
\end{prop}
Specific $f$'s in the statement may be found by taking random polynomials in Macaulay2 \cite{M2}.
In order to prove also cases (iv) and (v) of (\ref{eq:classiclist}),
and moreover our Theorem \ref{thm:main334}, we need to extend this result
as follows.
\begin{thm}
\label{thm:nonabelian_applied2} Let $X\rig{\pi} Y$ be a projective bundle, $L\in Pic(X)$ a
very ample line bundle and $F$ a vector bundle on
$Y$; we denote $E=\pi^*F$. Let $[f]\in \p(H^0(L)^*)$ be a general point, $k=
\frac{h^0(X,L)}{\dim X+1}$, and
$A_f\colon H^0(E)\to H^0(E^*\otimes L)^*$ the {contraction} map.
Let $a=\dim\ker A_f$. Assume that $\mathrm{rk}A_f=k\cdot \mathrm{rk}E$ and that
$(c_{\mathrm{rk }F}F)^{a}=k$. Assume moreover that
for a specific $f$ the base locus of $\ker A_f$ is given by $k$
fibers of $\pi$.
Then the $k$-secant map
$$\pi_k:\sec_k(X)\to\p(H^0(L)^*)$$
is birational. The assumptions are verified in the following cases.
{\footnotesize
$$
\begin{array}{l|l|c|c|l}
(X,L) &H^0(L)& \textrm{rank\ }&F&\dim\ker A_f\\
\hline\\
\left(\p\left(\oplus_{i=1}^r\o_{\p^1}(a_i)\right),\o_X(1)\right)&\oplus_{i=1}^r
{\mathrm Sym}^{a_i}\C^2&
{\tiny k:=\frac{\sum_{i=1}^r a_i+1}{r+1}}
&\o_{\p^1}(k)&1 (\textrm{if\ }k\le a_1+1)\\
\left(\p\left(\o_{\p^2}(2)^4\right),\o_X(1)\right)& ({\mathrm Sym}^2\C^3)^{\oplus 4}&
4&\o_{\p^2}(2)&2\\
\left(\p\left(\o_{\p^2}(2)\oplus\o_{\p^2}(3)\right),\o_X(1)\right)&{\mathrm Sym}^2\C^3\oplus {\mathrm Sym}^3\C^3&4&\o_{\p^2}(2)&2
\\
\left(\p\left(\o_{\p^2}(3)^2\oplus\o_{\p^2}(4)\right),\o_X(1)\right)&\left({\mathrm Sym}^3\C^3\right)^{\oplus{2}}\oplus{\mathrm Sym}^4\C^3&7&Q_{\p^2}(2) &1
\end{array}$$}
\end{thm}
\begin{proof} By Proposition \ref{prop:nonabelian} we have
$Z\subset {\mathrm{BaseLocus}}(\ker A_f)$, where the base locus can be found as the common zero locus of some sections $s_1,\ldots, s_a$
of $E$ which span $\ker A_f$.
Since $E=\pi^*F$ and $H^0(X,E)$ is naturally isomorphic to $H^0(Y,F)$, the zero locus of each section of $E$ corresponds to the pullback
through $\pi$ of the zero locus of the corresponding section of $F$.
By the assumption on the top Chern class of $F$ we expect
that the base locus of $\ker A_f$ contains $k=\mathrm{length\ }(Z)$ fibers of the projective bundle $X$. The hypothesis guarantees that this expectation
is realized for a specific polynomial vector $f$. By semicontinuity, it is realized for the generic $f$. This determines the forms $l_i$ in (\ref{eq:simwaring}) for a generic polynomial vector $f$. It follows that $f$ is in the linear span of the fibers $\pi^{-1}(l_i)$ where $Z=\{l_1,\ldots, l_k\}$. Fix representatives for the forms $l_i$ for $i=1,\ldots, k$. Now the scalars $\lambda_i^j$
in (\ref{eq:simwaring})
are found by solving a linear system. Our assumptions imply that $X$ is not $k$-defective, otherwise
the base locus of $\ker A_f$ would be positive dimensional. In particular the tangent spaces at points in $Z$,
which are general, are independent by Terracini Lemma. Since each $\pi$-fiber is contained in the corresponding tangent space,
it follows that the fibers $\pi^{-1}(l_i)$ corresponding to $l_i\in Z$ are independent. It follows that the scalars $\lambda_i^j$
in (\ref{eq:simwaring}) are uniquely determined and we have generic identifiability. The check that the assumptions are verified in the cases listed has been performed
with random polynomials with the aid of Macaulay2 package \cite{M2}.
In all these cases, by the projection formula we have the natural isomorphism $H^0(X,E^*\otimes L)\simeq H^0(Y,F\otimes\pi_*L)$.
\end{proof}
Note that the first case in the list of Theorem \ref{thm:nonabelian_applied2} corresponds to the Ciliberto-Russo Theorem
\ref{thm:binforms0}; in this case $H^0(E)={\mathrm Sym}^k\C^2$ has dimension $k+1$,
$H^0(E^*\otimes L)={\mathrm Sym}^{a_1-k}\C^2\oplus\ldots\oplus {\mathrm Sym}^{a_r-k}\C^2$
has dimension $\sum_{i=1}^r(a_i-k+1)= k$ (if $k\le a_1+1$), and the contraction map $A_f$ has rank $k$,
with one-dimensional kernel.
The last case in the list of Theorem \ref{thm:nonabelian_applied2} corresponds to Theorem \ref{thm:main334}.
A general vector $f\in ({\mathrm Sym}^3\C^3)^{\oplus 2}\oplus {\mathrm Sym}^4\C^3$
induces the contraction $A_f\colon H^0(Q(2))\to H^0(Q)\oplus H^0(Q)\oplus H^0(Q(1))$
with one-dimensional kernel. Each element in the kernel vanishes on $7$ points which give the seven Waring summands of $f$.
}
Note also that $\left(\p\left(\o_{\p^2}(2)^4\right),\o_X(1)\right)$
coincides with the Segre-Veronese variety $(\p^3\times\p^2,\o(1,2))$.
\begin{rem}\label{rem:catalecticant} The assumption $ a_{1}+1 \geq k $ in Theorem \ref{thm:binforms0}
is equivalent to $\frac{1}{r+1}\sum_{i=1}^r(a_i+1)\le a_1+1$
which means that $a_i$ are ``balanced''.
\end{rem}
We conclude this section by showing how the existence of a unique decomposition determines the
birational geometry of the varieties parametrizing higher rank
decompositions.
The following is just a slight generalization of \cite[Theorem 4.4]{MM}.
\begin{thm}
\label{thm:unirat} Let $X\subset\p^N$ be such that the $k$-secant
map $\pi_k:\sec_k(X)\to \p^N$ is birational. Assume that $X$ is
unirational. Then for $p\in \p^N$
general the variety $\pi_h^{-1}(p)$ is unirational for any $h\geq
k$; in particular it is irreducible.
\end{thm}
\begin{proof} Let $p\in \p^N$ be a general point; then for $h>k$ we
have $\dim
\pi_h^{-1}(p)=h(\dim X+1)-1-N=(h-k)(\dim X+1)$. Note that, for $q\in
\p^N$ general, a general
point in $x\in\pi^{-1}_h(q)$ is uniquely associated to a set of $h$
points $\{x_1,\ldots,x_h\}\subset X$ and an $h$-tuple
$(\lambda_1,\ldots,\lambda_h)\in{\C^h}$ with the requirement that
$$q=\sum\lambda_ix_i.$$ Therefore the birationality of $\pi_k$
allows us to associate, to a general point $q\in \p^N$, its unique
decomposition as a sum of $k$ factors. That is,
$\pi_k^{-1}(q)=(q,[\Lambda_k(q)])$ for a general point $q\in\p^N$.
Via this identification we may define a map
$$\psi_h:(X\times\p^1)^{h-k}\map\pi^{-1}_h(p) $$
given by
\begin{eqnarray*}(x_1,\lambda_1,\ldots,x_{h-k},\lambda_{h-k})\mapsto
(p,[\langle x_1,\ldots,x_{h-k},\Lambda_k(p-\lambda_1x_1-\ldots-\lambda_{h-k}x_{h-k})\rangle]).
\end{eqnarray*}
The map $\psi_h$ is clearly generically finite, of degree
${h}\choose{n+1}$, and dominant. This is sufficient to show the claim.
\end{proof}
Theorem \ref{thm:unirat} applies to all vectors of forms
that admit a unique decomposition.
\begin{cor}
Let $f=(f_1,\ldots,f_r)$ be a vector of general homogeneous
forms. If $f$ has a unique Waring
decomposition of rank $k$, then the set of decompositions of rank
$h>k$ is parametrized by a unirational variety.
\end{cor}
\begin{rem} Let's go back to our starting example (\ref{eq:2quadrics}) and specialize $f_1=\sum_{i=0}^nx_i^2$ to the Euclidean quadratic form.
Then any minimal Waring decomposition of $f_1$ consists of $n+1$ summands which are orthogonal with respect to the Euclidean form.
It follows that the decomposition (\ref{eq:2quadrics}) is equivalent to the diagonalization of $f_2$ with orthogonal summands.
Over the reals, this is possible for any $f_2$ by the Spectral Theorem.
Also Robert's Theorem, see (v) of (\ref{eq:classiclist}), has a similar interpretation. If $f_1=x_0^2+x_1^2+x_2^2$ and $f_2\in{\mathrm Sym}^3\C^3$ is general, the unique
Waring decomposition of the pair $(f_1,f_2)$ consists in choosing four representatives of lines $\{l_1,\ldots, l_4\}$
and scalars $\lambda_1,\ldots, \lambda_4$ such that \begin{equation}\label{eq:robertortho}
\left\{\begin{array}{rcl}f_1&=&\sum_{i=1}^4l_i^2\\
\hspace{0.2cm} \\
f_2&=&\sum_{i=1}^4\lambda_il_i^3\end{array}\right.
\end{equation} Denote by $L$ the $3\times 4$ matrix whose $i$-th column is given by the coefficients of $l_i$.
Then the first condition in (\ref{eq:robertortho}) is equivalent to the equation
\begin{equation}\label{eq:llt}LL^t=I.\end{equation}
This equation generalizes the notion of orthonormal basis, and the columns of $L$ form a {\it Parseval} frame, according to
\cite{CMS} \S 2.1. So Robert's Theorem states that the general ternary cubic has a unique decomposition
consisting of a Parseval frame.
In general a Parseval frame for a field $F$ is given by $\{l_1,\ldots, l_n\}\subset F^d$ such that
the corresponding $d\times n$ matrix $L$ satisfies the condition $LL^t=I$. This is equivalent to the equation
$\sum_{i=1}^n(\sum_{j=1}^d l_{ji}x_j)^2=\sum_{i=1}^dx_i^2$, so again to a Waring decomposition with $n$ summands
of the euclidean form in $F^d$.
This makes a connection of our paper with \cite{ORS}, which studies frames in the setting of secant varieties and tensor decomposition.
For example, equation (7) in \cite{ORS} defines a solution to (\ref{eq:llt}) with the additional condition that the four columns have unit norm.
Note that equation (8) in \cite{ORS} defines a Waring decomposition of the pair $(f_1, T)$. Unfortunately the additional condition about unit norm
does not allow us to transfer directly the results of \cite{ORS} to our setting, but we believe this connection deserves to be pushed further.
It is interesting to notice that the decomposition of the moments $M_2$ and $M_3$ in \cite[\S 3]{AGHKT} is a (simultaneous) Waring decomposition of the quadric $M_2$ and the cubic $M_3$.
\end{rem}
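The condition (\ref{eq:llt}) is easy to experiment with numerically. The following sketch, our own illustration, checks that the classical ``Mercedes-Benz'' frame of three vectors at angles $2\pi i/3$ in $\mathbb{R}^2$, rescaled by $\sqrt{2/3}$, is a Parseval frame and hence decomposes the Euclidean form of $F^2=\mathbb{R}^2$ with $n=3$ summands.

```python
import numpy as np

# Three vectors at 120 degrees, rescaled so that L L^t = I:
# the columns of L form a Parseval frame for R^2 (the "Mercedes-Benz" frame)
angles = 2 * np.pi * np.arange(3) / 3
L = np.sqrt(2 / 3) * np.vstack([np.cos(angles), np.sin(angles)])  # 2 x 3 matrix

print(np.allclose(L @ L.T, np.eye(2)))  # True: the Parseval condition LL^t = I

# Equivalently: sum_i <l_i, x>^2 = |x|^2 for every x,
# i.e. a Waring decomposition of the Euclidean form with 3 summands
x = np.random.default_rng(0).standard_normal(2)
print(np.isclose(np.sum((L.T @ x) ** 2), np.sum(x ** 2)))  # True
```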
\section{Computational approach}\label{sec:compapp}
In this section we describe how we can face Question 1 and Question 2, introduced in \S\ref{sec:intr}, from the computational point of view.\\
\indent With the aid of Bertini \cite{Be}, \cite{BHSW} and Macaulay2 \cite{M2} software systems, we can construct algorithms, based on homotopy continuation techniques and monodromy loops, that, in the spirit of \cite{HOOS}, yield the number of Waring decompositions of a generic polynomial vector $ f = (f_{1}, \ldots, f_{r}) \in {\mathrm Sym}^{a_1} \C^{n+1} \oplus \ldots \oplus {\mathrm Sym}^{a_r} \C^{n+1} $ with high probability. Precisely, given $ n,r,a_{1}, \ldots, a_{r}, k \in \mathbb{N} $ satisfying (\ref{eq:perfect}) and coordinates $ x_{0}, \ldots, x_{n} $, we focus on the polynomial system
\begin{equation}\label{eq:polsys}
\left\{
\begin{array}{l}
f_{1} = \lambda_{1}^{1}\ell_{1}^{a_{1}}+ \ldots + \lambda_{k}^{1}\ell_{k}^{a_{1}} \\
\quad \, \, \, \vdots \\
f_{r} = \lambda_{1}^{r}\ell_{1}^{a_{r}}+ \ldots + \lambda_{k}^{r}\ell_{k}^{a_{r}}\\
\end{array}
\right.
\end{equation}
where $ f_{j} \in {\mathrm Sym}^{a_j} \C^{n+1} $ is a fixed general element, while $ \ell_{i} = x_{0}+ \sum_{h=1}^{n}l_{h}^{i}x_{h} \in \mathbb{P}(\C^{\vee}) $ and $ \lambda_{i}^{j} \in \C $ are unknown. By expanding the expressions on the right-hand side of (\ref{eq:polsys}) and by applying the identity principle for polynomials, the $j$-th equation of (\ref{eq:polsys}) splits into $ {{a_{j}+n} \choose {n}} $ conditions. Our aim is to compute the number of solutions of $ F_{(f_{1}, \ldots, f_{r})}([l_{1}^{1}, \ldots, l_{n}^{1}, \lambda_{1}^{1}, \ldots, \lambda_{1}^{r}], \ldots \ldots, [l_{1}^{k}, \ldots, l_{n}^{k}, \lambda_{k}^{1}, \ldots, \lambda_{k}^{r}]) $, the square nonlinear system of order $ k(r+n)$, arising from the equivalent version of
(\ref{eq:polsys}) in which in each equation all the terms are on one side of the equal sign. In practice, to work with general $ f_{j} $'s, we assign random complex values $ \overline{l}_{h}^{i} $, $ \overline{\lambda}_{i}^{j} $ to $ l_{h}^{i} $, $ \lambda_{i}^{j} $ and, by means of $ F_{(f_{1}, \ldots, f_{r})} $, we compute the corresponding $ \overline{f}_{1}, \ldots, \overline{f}_{r} $, the coefficients of which are the so-called \emph{start parameters}. In this way, we know a solution $ ([\overline{l}_{1}^{1}, \ldots,\overline{l}_{n}^{1}, \overline{\lambda}_{1}^{1}, \ldots, \overline{\lambda}_{1}^{r}], \ldots \ldots, [\overline{l}_{1}^{k}, \ldots,\overline{l}_{n}^{k}, \overline{\lambda}_{k}^{1}, \ldots, \overline{\lambda}_{k}^{r}]) \in \C^{k(r+n)}$ of $ F_{(\overline{f}_{1}, \ldots, \overline{f}_{r})} $, i.e. a Waring decomposition of $ \overline{f} = (\overline{f}_{1}, \ldots, \overline{f}_{r}) $, which is called a \emph{startpoint}. Then we consider $ F_{1} $ and $ F_{2} $, two square polynomial systems of order $ k(n+r) $ obtained from $ F_{(\overline{f}_{1}, \ldots, \overline{f}_{r})} $ by replacing the constant terms with random complex values. We therefore construct 3 segment homotopies
$$ H_{i} : \C^{k(r+n)} \times [0,1] \to \C^{k(r+n)} $$
for $ i \in \{0,1,2\} $: $H_{0}$ between $ F_{(\overline{f}_{1}, \ldots, \overline{f}_{r})} $ and $ F_{1} $, $ H_{1} $ between $ F_{1} $ and $ F_{2} $, $ H_{2} $ between $ F_{2} $ and $ F_{(\overline{f}_{1}, \ldots, \overline{f}_{r})} $. Through $ H_{0} $, we get a \emph{path} connecting the startpoint to a solution of $ F_{1} $, called \emph{endpoint}, which therefore becomes a startpoint for the second step given by $ H_{1} $, and so on. At the end of this loop, we check if the output is a Waring decomposition of $ \overline{f} $ different from the starting one. If this is not the case, this procedure suggests that the case under investigation is identifiable; otherwise we iterate this technique with these two \emph{startpoints}, and so on. If at a certain point the number of solutions of $ F_{(\overline{f}_{1}, \ldots, \overline{f}_{r})} $ stabilizes, then, with high probability, we know the number of Waring decompositions of a generic polynomial vector in $ {\mathrm Sym}^{a_1} \C^{n+1} \oplus \ldots \oplus {\mathrm Sym}^{a_r} \C^{n+1} $. \\
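The mechanism behind monodromy loops can be illustrated on a toy parameterized system, our own illustration and not the actual Waring system (\ref{eq:polsys}): tracking one solution of $z^3=c$ by Newton correction while the parameter $c$ runs along a loop around the branch point $c=0$ permutes the three solutions, exactly as the loops $H_0,H_1,H_2$ above may permute the Waring decompositions of $\overline{f}$.

```python
import numpy as np

def track(z, c_path):
    # Naive path tracking for z^3 = c: at each parameter step, correct the
    # previous solution by a few Newton iterations (no predictor step).
    for c in c_path:
        for _ in range(10):
            z = z - (z**3 - c) / (3 * z**2)
    return z

# Loop c = exp(i*theta) once around c = 0, starting from the solution z = 1
thetas = np.linspace(0, 2 * np.pi, 400)
z_end = track(1.0 + 0j, np.exp(1j * thetas))
print(z_end)  # close to exp(2*pi*i/3): the loop permuted the three cube roots
```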
We have implemented the homotopy continuation technique both in the software Bertini \cite{Be}, suitably coordinated with Matlab, and in the software Macaulay2, with the aid of the package \emph{Numerical Algebraic Geometry} \cite{KL}. \\
\indent Before starting with this computational analysis, we need to check that the variety $ \mathbb{P}(\mathcal{O}_{\mathbb{P}^{n}}(a_{1}) \oplus \ldots \oplus \mathcal{O}_{\mathbb{P}^{n}}(a_{r})) $, introduced in \S\ref{sec:secant}, is not $ k $-defective, in which case (\ref{eq:polsys}) has no solutions. In order to do that, by using Macaulay2, we can construct a probabilistic algorithm based on Theorem \ref{th:terracini}, that computes the dimension of the span of the affine tangent spaces to $ \mathbb{P}(\mathcal{O}_{\mathbb{P}^{n}}(a_{1}) \oplus \ldots \oplus \mathcal{O}_{\mathbb{P}^{n}}(a_{r})) $ at $ k $ random points and then we can apply semicontinuity properties. \\
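A Terracini-type check of this kind can also be set up in a few lines outside Macaulay2. The sketch below is our own illustration for $X=\mathbb{P}(\mathcal{O}_{\p^1}(4)\oplus\mathcal{O}_{\p^1}(6))$, where $n=1$, $r=2$, $N=12$ and $k=4$ is a perfect case: it stacks the Jacobians of the rank-one parametrization $(\lambda^{(1)}\ell^{4},\lambda^{(2)}\ell^{6})$ at $k$ random points; full rank $\min\{k(n+r),N\}=12$ indicates that $X$ is not $4$-defective.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(1)
A1, A2 = 4, 6  # degrees; N = (A1+1) + (A2+1) = 12, k = N/(n+r) = 4

def pow_coeffs(u, v, a):
    # coefficients of (u*x + v*y)^a in the monomial basis x^(a-i) y^i
    return np.array([comb(a, i) * u**(a - i) * v**i for i in range(a + 1)])

def d_coeffs(u, v, a):
    # partial derivatives of pow_coeffs with respect to u and v
    du = np.array([comb(a, i) * (a - i) * u**(a - i - 1) * v**i if i < a else 0.0
                   for i in range(a + 1)])
    dv = np.array([comb(a, i) * i * u**(a - i) * v**(i - 1) if i > 0 else 0.0
                   for i in range(a + 1)])
    return du, dv

def tangent_block(u, v, l1, l2):
    # Jacobian of (l1*(ux+vy)^A1, l2*(ux+vy)^A2) wrt (l1, l2, u, v):
    # its columns span the affine tangent space of the cone over X
    du1, dv1 = d_coeffs(u, v, A1)
    du2, dv2 = d_coeffs(u, v, A2)
    z1, z2 = np.zeros(A1 + 1), np.zeros(A2 + 1)
    return np.column_stack([
        np.concatenate([pow_coeffs(u, v, A1), z2]),
        np.concatenate([z1, pow_coeffs(u, v, A2)]),
        np.concatenate([l1 * du1, l2 * du2]),
        np.concatenate([l1 * dv1, l2 * dv2])])

k = 4
M = np.hstack([tangent_block(*rng.standard_normal(4)) for _ in range(k)])
print(np.linalg.matrix_rank(M))  # 12: tangent spaces span, X is not 4-defective
```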
\indent In the following table we summarize the results we are able to obtain by combining numerical and theoretical approaches. Our technique is as follows. We first apply the probabilistic algorithm checking $ k $-defectivity described above. If this suggests a positive $k$-defect $ \delta_{k} $, we do not pursue the computational approach. When $ \delta_{k} $ is zero, we apply the homotopy continuation technique. If the number of decompositions (up to order of summands) stabilizes to a number $ \symbol{35}_{k} $, we indicate it. If the homotopy technique does not stabilize to a fixed number, we apply degeneration techniques like in \S\ref{sec:ternary} to get a lower bound. If everything fails, we put a question mark. Bold degrees are the ones obtained via theoretical arguments.\\
\begin{longtable}{|c|c|l|c|c|c|}
\hline \multicolumn{1}{|c|}{$r$} & \multicolumn{1}{|c|}{$n$}& \multicolumn{1}{|l|}{ $(a_{1},\ldots,a_{r})$}& \multicolumn{1}{|c|}{$k$}& \multicolumn{1}{|c|}{$\delta_k$}& \multicolumn{1}{|c|}{$\symbol{35}_{k}$}\\
\hline
\endhead
\hline
\endfoot
$2$ &$2$ &$(4,5)$ & $ 9 $ &0 &$ 3 $ \\
$2$ &$2$ &$(6,6)$ & $ 14 $ &0 & $ \geq 2 $ \\
$2$ &$2$ &$(6,7)$ & $ 16 $ &0& $ \geq 8 $\\
$2$ &$3$ &$(2,4)$ & $ 9$ &$2$& \\
$3$ &$2$ &$(2,2,6)$ & $ 8 $ & $4$& \\
$3$ &$2$ &$(3,3,4)$ & $ 7 $ & 0&${\bf 1} $ \\
$3$ &$2$ & $(3,4,4)$ & $ 8 $ & 0&$ 4 $ \\
$3$ &$2$ & $(5,5,6)$ & $ 14 $ & 0&$ 205$ \\
$3$ &$3$ & $(3,3,3)$ & $ 10 $ & 0&$ 56 $ \\
$4$ &$2$ & $(2,2,4,4)$ & $ 7 $ & $2$& \\
$4$ &$2$ & $(2,3,3,3)$ & $ 6 $ & 0&$ 2$ \\
$4$ &$2$ & $(4,\ldots,4)$ & $ 10 $ &0 & $ ?$ \\
$5$ &$2$ & $(5,\ldots,5,6)$ & $ 16 $ &0 & $ ? $\\
$6$ &$2$ & $(2,\ldots,2,3)$& $ 5 $ & $3$ & \\
$6$ &$4$ & $(2,\ldots,2)$ & $ 9 $ & 0&$ 45 $ \\
$7$&$3$&$(2,\ldots,2)$&$7$&0&$\bf 8$\\
$8$&$2$&$(3,\ldots,3)$&$8$&$0$&$\bf 9$\\
$8$&$2$&$(2,\ldots,2,6)$&$7$&$ 7$& \\
$11$&$4$&$(2,\ldots,2)$&$11$&$0$& ${\bf 4368}$\\
$13$&$2$&$(4,\ldots,4)$&$13$&$0$& ${\bf 560}$\\
$15$&$2$&$(4,\ldots,4,6)$&$14$& $6$& \\
$17$&$3$&$(3,\ldots,3)$&$17$& $0$& $ {\bf 8436285}$\\
$19$&$2$&$(5,\ldots,5)$&$19$& $0$& ${\bf 177100}$\\
$26$&$2$&$(6,\ldots,6)$&$26$& $0$& ${\bf 254186856}$ \\
\end{longtable}
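Several of the bold entries above, namely those with $r=k$ and equal degrees, are instances of Remark \ref{rem:d^n}: for $s$ general forms of degree $d$ in $n+1$ variables, with $s-1={\rm cod}\,V_{d,n}$, the expected count is ${{d^n}\choose{s}}$. A quick check of this arithmetic (our own script):

```python
from math import comb

def predicted_count(d, n):
    # s - 1 = codim of the Veronese V_{d,n} in P^(C(d+n,n)-1), deg V_{d,n} = d^n
    s = comb(d + n, n) - n
    return s, comb(d**n, s)

# (d, n) -> (r = s and the bold entry of the table)
for d, n, table_entry in [(2, 3, 8), (3, 2, 9), (2, 4, 4368),
                          (4, 2, 560), (3, 3, 8436285),
                          (5, 2, 177100), (6, 2, 254186856)]:
    s, cnt = predicted_count(d, n)
    print((d, n), s, cnt, cnt == table_entry)
```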
\section{Identifiability of pairs of ternary forms}\label{sec:ternary}
In this section we aim to study the identifiability of pairs of ternary forms.
In particular we study the special case of two forms of degree $a$ and $a+1$.
Our main result is the following
\begin{thm}
\label{th:main_identifi} Let $a$ be an integer. Then a general pair of ternary forms of degree $a$ and $a+1$ is identifiable if and only if $a=2$. Moreover, there are finitely many decompositions if and only if $a=2t$ is even, and for such an $a$ the number of decompositions is at least
$$ \frac{(3t-2)(t-1)}2+1. $$
\end{thm}
The Theorem has two directions: on the one hand we need to prove that $a=2$ is identifiable, on the other we need to show that for $a>2$ a general pair is never identifiable. The former is a classical result we already recalled in (iii) of (\ref{eq:classiclist}) and
in Theorem \ref{thm:nonabelian_applied2}. For the latter, observe that $\dim\sec_k(X)=4k-1$, therefore if either $4k-1<N$ or $4k-1>N$ the general pair is never identifiable. We are left to consider the perfect case $N=4k-1$.
Under this assumption we may assume that $X$ is not $k$-defective, we
will prove that this is always the case in
Remark~\ref{rem:non_defective}, otherwise the non identifiability is immediate. Hence the core of the question is to study generically finite maps
$$\pi_k:\sec_k(X)\to\p^N,$$
with $4k=(a+2)^2$. This yields our last numerical constraint that is
$a=2t$ needs to be even.
The first step is borrowed from \cite{Me}, \cite{Me1}, and it is a
slight generalization of \cite[Theorem 2.1]{Me}, see also \cite{CR}.
\begin{thm}\label{th:birational_tangent_proj} Let $X\subset\p^N$ be an irreducible variety of dimension $n$.
Assume that the natural map $\sigma:\sec_{k}(X)\to\p^N$ is dominant and generically finite of degree $d$. Let $z\in\Sec_{k-1}(X)$ be a general point.
Consider $\f:\p^N\dasharrow\p^n$ the projection from the embedded tangent
space $\T_z\Sec_{k-1}(X)$. Then
$\f_{|X}:X\dasharrow\p^n$ is dominant and generically finite of degree at most $d$.
\label{th:proj}
\end{thm}
\begin{proof}
Choose a general point $z$ on a general $(k-1)$-secant linear space spanned by $\langle
p_1,\ldots,p_{k-1}\rangle$.
Let $f:Y\to \p^N$ be the blow up of $\sec_{k-1}(X)$ with exceptional
divisor $E$, and fiber $F_z=f^{-1}(z)$.
Let $y\in F_z$ be a general point. This point uniquely determines a
linear space $\Pi$ of dimension $(k-1)(n+1)$ that
contains $\T_z\sec_{k-1}(X)$.
Then the projection $\f_{|X}:X\dasharrow \p^n$ is generically finite of degree $d$ if and only if
$(\Pi\setminus \T_z\sec_{k-1}(X))\cap X$ consists
of just $d$ points.
Assume that $\{x_1,\ldots, x_a\}\subset (\Pi\setminus
\T_z\sec_{k-1}(X))\cap X$.
By Terracini Lemma, Theorem \ref{th:terracini},
$$\T_z\sec_{k-1}(X)= \langle \T_{p_1}X,\ldots, \T_{p_{k-1}}X\rangle.$$
Consider the linear spaces $\Lambda_i=\langle x_i, p_1,\ldots,p_{k-1}\rangle$. The Trisecant
Lemma, see for instance \cite[Proposition 2.6]{ChCi}, yields
$\Lambda_i\neq\Lambda_j$, for $i\neq j$. Let
$\Lambda_i^Y$, and $\Pi^Y$ be the strict transforms on $Y$.
Since $z\in\langle p_1,\ldots,p_{k-1}\rangle$ and $y=\Pi^Y\cap F_z$ then $\Lambda_i^Y$ contains the point $y\in F_z$. In particular we have
$$\Lambda_i^Y\cap\Lambda_j^Y\neq\emptyset.$$
Let $\pi_1:\Sec_k(X)\to\p^N$ be the morphism from the abstract secant variety,
and $\mu:U\to Y$ the induced morphism. That is
$U=\Sec_k(X)\times_{\p^N} Y$. Then
there exists a commutative diagram
$$\diagram
U\dto_{p}\rto^{\mu}&Y\dto^{f}\\
\Sec_k(X)\rto^{\pi_1}&\p^N\enddiagram$$
Let $\lambda_i$ and $\Lambda^U_i$ be the strict transform of $\Lambda_i$ in $\Sec_k(X)$ and $U$
respectively. By Remark \ref{re:vuoto}
$\lambda_i\cap \lambda_j=\emptyset$, so that
$$\Lambda^U_i\cap \Lambda^U_j=\emptyset.$$
This proves that $\sharp{\mu^{-1}(y)}\geq a$. But $y$ is a general point of a divisor in the normal variety $Y$. Therefore $\deg\mu$, and hence also $\deg\pi_1$, is at least $a$.
\end{proof}
To apply Theorem~\ref{th:birational_tangent_proj} we need to better understand $X$ and its tangential projections.
By definition we have
$$X\simeq\p((\o_{\p^2}(-1)\oplus\o_{\p^2})\otimes\pi^*\o_{\p^2}(a+1))$$
then $X\subset\p^N$ can be seen as the embedding of the blow up of $\p^3$ at one point $q$ by monoids of degree $a+1$ with vertex $q$. That is, let $\L=|\I_{q^a}(a+1)|\subset|\o_{\p^3}(a+1)|$ and $Y=Bl_q\p^3$; then
$$X=\f_\L(Y)\subset\p^N.$$
It is now easy, via the Terracini Lemma, to realize that the restriction
of the tangential projection $\f_{|X}\colon X\dasharrow\p^3$ is given by the linear system
$$\sH=|\I_{q^a\cup p_1^2\cup\ldots\cup p_{k-1}^2}(a+1)|\subset|\o_{\p^3}(a+1)|.$$
We already assumed that $X$ is not $k$-defective, that is, we work under the condition
\noindent$(\dag)$\hspace{5cm} $\dim\sH=3.$
\begin{rem}
It is interesting to note that for $a=2$ the map $\f_{|X}$ is the
standard Cremona transformation of $\p^3$ given by
$(x_0,\ldots,x_3)\mapsto (1/x_0,\ldots,1/x_3)$.
\end{rem}
Let us work out a preliminary lemma, which we reproduce for lack of an adequate reference.
\begin{lem}
\label{lem:birational_degeneration} Let $\Delta$ be a complex disk around
the origin, $X$ a variety and $\o_X(1)$ a base point free line bundle.
Consider the product $V=X\times \Delta$,
with the natural projections, $\pi_1$ and $\pi_2$. Let $V_t=X\times\{t\}$ and
$\o_V(d)=\pi_1^*(\o_{X}(d))$.
Fix a configuration $p_1,\ldots,p_l$ of $l$ points on $V_0$ and let
$\sigma_i:\Delta\to V$ be sections such that $\sigma_i(0)=p_i$ and
$\{\sigma_i(t)\}_{i=1,\ldots,l}$ are general points of $V_t$ for
$t\neq 0$. Let $P=\cup_{i=1}^l\sigma_i(\Delta)$, and $P_t=P\cap V_t$.
Consider the linear system $\sH=|\o_{V}(d)\otimes\I_{P^2}|$ on $V$, with $\sH_t:=\sH_{|V_t}$.
Assume that $\dim\sH_0=\dim\sH_t=\dim X$, for $t\in\Delta$. Let
$d(t)$ be the degree of the map induced by $\sH_t$. Then $d(0)\leq d(t)$.
\end{lem}
\begin{proof}
If, for $t\neq 0$, $\f_{\sH_t}$ is not dominant the claim is clear. Assume that $\f_{\sH_t}$ is dominant for $t\neq 0$. Then $\f_{\sH_t}$ is generically finite and $\deg\f_{\sH_t}(X)=1$, for $t\neq 0$.
Let $\mu:Z\times\Delta\to V$ be a resolution of the base locus, $V_{Zt}=\mu^*V_t$, and $\sH_{Z}=\mu^{-1}_*\sH$ the strict transform linear systems on $Z$.
Then $V_{Zt}$ is a blow up of $V_t=X$, for $t\neq 0$, and $V_{Z0}=\mu^{-1}_*V_{0}+R$, for some effective, possibly trivial, residual divisor $R$.
By hypothesis $\sH_0$ is the flat limit of $\sH_t$, for $t\neq 0$.
Hence flatness forces
$$d(t)=\sH_Z^{\dim X}\cdot V_{Zt}=\sH_Z^{\dim X}\cdot(\mu^{-1}_*V_{0}+R)\geq \sH_Z^{\dim X}\cdot \mu^{-1}_*V_{0}=d(0).$$
\end{proof}
Lemma~\ref{lem:birational_degeneration} allows us to work on a degenerate configuration to study the degree of the map induced by $|\I_{q^a\cup p_1^2\cup\ldots\cup p_{k-1}^2}(a+1)|\subset|\o_{\p^3}(a+1)|$.
\begin{lem}
\label{lem:degeneration_ok} Let $H\subset \p^3\setminus\{q\}$ be a plane, $B:=\{p_1,\ldots,p_b\}\subset H$ a set of $b:=\frac12 t(t+3)$ general points, and $C:=\{x_1,\ldots,x_c\}\subset\p^3\setminus(\{q\}\cup H)$ a set of
$c:=\frac12 t(t+1)$ general points. Let $a=2t$ and
$$\sH:=|\I_{q^a\cup C^2\cup B^2}(a+1)|\subset|\o_{\p^3}(a+1)|,$$
be the linear system of monoids with vertex $q$ and double points along $B\cup C$, and $\f_\sH$ the associated map.
Then $\dim \sH=3$ and
$$\deg\f_{\sH}> \frac{(3t-2)(t-1)}2.$$
\end{lem}
\begin{proof} Note that by construction the lines $\Span{q,p_i}$ and $\Span{q,x_i}$ are contained in the base locus of $\sH$.
Let us start computing $\dim\sH$. First we prove that there is a unique element in $\sH$ containing the plane $H$.
\begin{claim}\label{cl:1}
$|\sH-H|=0$.
\end{claim}
\begin{proof}[Proof of the Claim]
Let $D\in\sH$ be such that $D=H+R$ for a residual divisor $R\in|\o_{\p^3}(a)|$. Then $R$ is a cone with vertex $q$ over a plane curve $\Gamma\subset H$. Moreover $R$ is singular along $C$ and has to contain $B$. This forces $\Gamma$ to contain $B$ and to be singular at the points $\Span{q,x_i}\cap H$, for $i=1,\ldots,c$. In other words $\Gamma$ is a plane curve of degree $2t$ with $c=\frac12 t(t+1)$ general double points and passing through $b=\frac12 t(t+3)$ general points. Note that
$$\binom{2t+2}2-3c-b=1.$$
It is well known, see for instance \cite{AH}, that the $c$ points impose independent conditions on plane curves of degree $2t$. Clearly the latter $b$ simple points do the same, therefore there is a unique plane curve $\Gamma$ satisfying the requirements. This shows that $R$ is unique, and in conclusion the claim is proved.
\end{proof}
We are ready to compute the dimension of $\sH$.
\begin{claim}\label{cl:2}
$\dim\sH=3$.
\end{claim}
\begin{proof}[Proof of the Claim]
The expected dimension of $\sH$ is 3. Then by Claim~\ref{cl:1} it is enough to show that $\dim\sH_{|H}=2$. To do this, observe that
$\sH_{|H}$ is a linear system of plane curves of degree $2t+1$ with $b$ general double points and $c$ general simple points. As in the proof of Claim~\ref{cl:1} we compute the expected dimension
$$\binom{2t+3}2-3b-c=3,$$
and conclude by \cite{AH}.
\end{proof}
Next we want to determine the base locus scheme of $\sH_{|H}$.
Let $\epsilon:S\to H$ be the blow up of $B$ and $\Span{q,x_i}\cap H$, with $\sH_S$ strict transform linear system.
We will first prove the following.
\begin{claim}
The scheme-theoretic base locus of $|\I_{B^2}(2t+1)|\subset|\o_{\p^2}(2t+1)|$ is $B^2$.
\end{claim}
\begin{proof}
Let $\sL_{ij}:=|\I_{B\setminus\{p_i,p_j\}}(t)|\subset|\o_{\p^2}(t)|$, then
$$\dim \sL_{ij}=\binom{t+2}2-(b-2)-1=2.$$
By the Trisecant Lemma, see for instance \cite[Proposition 2.6]{ChCi}, we conclude that
$$\Bs\sL_{ij}=B\setminus\{p_i,p_j\}.$$
Let $\Gamma_i, \Gamma_j\in\sL_{ij}$ be such that $\Gamma_i\ni p_i$ and $\Gamma_j\ni p_j$. Then by construction we have
$$D_{ij}:=\Gamma_i+\Gamma_j+\Span{p_i,p_j}\in|\I_{B^2}(2t+1)|.$$
Let $D_{ijS}$, $\sL_{ijS}$ be the strict transforms on $S$.
Note that $\Gamma_h$ belongs to a pencil of curves in $\sL_{hk}$ for any $k$. These pencils have no common base locus outside of $B$, since $\sL_{ijS}$ is base point free and $\dim \sL_{ij}=2$. Therefore the $D_{ijS}$ have no common base locus.
\end{proof}
\begin{claim}
$\sH_{S}$ is base point free.
\end{claim}
\begin{proof} To prove the Claim it is enough to prove that the simple base points associated to $C$ impose independent conditions. Since $C\subset\p^3$ is general this is again implied by the Trisecant Lemma.
\end{proof}
Then we have
$$\deg\f_{\sH_{S}}=\sH_{S}^2=(2t+1)^2-4b-c=\frac{(3t-2)(t-1)}2.$$
To conclude, observe that, with the same argument as in the claims, we can prove that $\f_{\sH|R}$ is generically finite; therefore
$$\deg\f_{\sH}>\deg\f_{\sH|H}=\deg\f_{\sH_{S}}=(2t+1)^2-4b-c=\frac{(3t-2)(t-1)}2 .$$
\end{proof}
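As a sanity check on the arithmetic in the proof above, the three counts can be verified directly. The following Python sketch is not part of the paper; it simply confirms, for a range of $t$, the identities $\binom{2t+2}{2}-3c-b=1$, $\binom{2t+3}{2}-3b-c=3$ and $(2t+1)^2-4b-c=\frac{(3t-2)(t-1)}{2}$, where $b=\frac12 t(t+3)$ and $c=\frac12 t(t+1)$.

```python
# Sanity check (not part of the paper) of the three counts in the proof of
# the Lemma, with b = t(t+3)/2 and c = t(t+1)/2.
from math import comb

for t in range(1, 200):
    b = t * (t + 3) // 2   # number of double points on the plane H
    c = t * (t + 1) // 2   # number of double points off H
    # proof of Claim 1: conditions imposed on plane curves of degree 2t
    assert comb(2 * t + 2, 2) - 3 * c - b == 1
    # proof of Claim 2: expected dimension of the restricted system on H
    assert comb(2 * t + 3, 2) - 3 * b - c == 3
    # degree computation at the end of the proof (cleared of denominators)
    assert 2 * ((2 * t + 1) ** 2 - 4 * b - c) == (3 * t - 2) * (t - 1)
```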
\begin{rem}\label{rem:non_defective}Lemma~\ref{lem:degeneration_ok}
proves that $\deg\f_\sH$ is finite. Hence, as a byproduct, we get that condition $(\dag)$ is always satisfied in our range. That is, $X$ is not $k$-defective for $a=2t$.
\end{rem}
\begin{proof}[Proof of Theorem~\ref{th:main_identifi}] We already know
that the number of decompositions is finite only if $a=2t$. By
Remark~\ref{rem:non_defective} we conclude that the number is finite
when $a=2t$. Let $d$ be the number of decompositions for a general
pair. Then by Theorem~\ref{th:birational_tangent_proj} we know that
$d\geq \deg\f$ where $\f:X\dasharrow \p^3$ is the tangential
projection. The required bound is obtained combining Lemma~\ref{lem:birational_degeneration} and Lemma~\ref{lem:degeneration_ok}.
\end{proof}
Double-decker aeroplane seats could make flying economy a whole new level of hell
Those on the top row won't be able to stand up or get out
Flying economy can be rough at the best of times, and is often a necessary chore in order to get to that exciting holiday destination.
But it turns out it could be worse. Much worse.
A prototype at this year's Aircraft Interiors Expo in Hamburg, Germany, has showcased how double-decker-style aeroplane seats could be introduced, cramming even more people on board.
The horrific concept would see overhead luggage storage scrapped in favour of an elevated area between each row.
This would leave those on the top row with just 4.92 feet of space between the seats and the top of the plane, meaning they wouldn't be able to stand up or get out.
And it doesn't get much better for those on the lower tier, who would be strapping themselves in for a claustrophobia-inducing journey.
The concept was designed by Alejandro Núñez Vicente, who argues that the design wouldn't be much different from current economy class, as travellers already have to crouch with the current seat design.
CNN reports that the design earned Núñez Vicente a nomination at the 2021 Crystal Cabin Awards, and that he has since paused his master's degree to pursue the project.
Núñez Vicente is apparently in talks with a number of big-name airlines and seat manufacturers and has been granted investment for the project.
Unsurprisingly, the hellish concept hasn't been met with a great reception online.
I’ll be back to comment on this once my claustrophobia lets me breathe again.
— 🌻 Kaz Weida 🌻 (@kazweida) June 14, 2022
Fresh hell just dropped
— Amber Sparks (@ambernoelle) June 15, 2022
"double-decker seating planes"
as if being on a plane isn't miserable enough already
— DnD Sesame Street (@DnDSesame) June 14, 2022
The design was met with a similar reaction on Reddit, with one user writing: "I would spend the entire flight with the unshakeable image of being crushed by a fallen row of seats."
Another added: "New claustrophobia scenario unlocked."
And a third worried about what the consequences on the bottom tier would be if the person directly above you farted.
But Núñez Vicente insists he sees it as the "future of economy class", adding: "My purpose here is to change the economy class seats for the better of humanity, or for all the people that cannot afford to pay for more expensive tickets."
If this ever gets introduced, the staycation could suddenly become a lot more appealing.
Related links:
- Disabled man dies at Gatwick airport after ‘falling down escalator’ when assistance did not arrive
- Bizarre scenes as Turkish flight did not take off after passengers received spooky plane crash pics
- Hero passenger who landed plane with no training seen for first time
\begin{document}
\begin{center}
{\bf ON EIGENFUNCTIONS OF THE KERNEL $\frac{1}{2} + \lfloor \frac{1}{xy}\rfloor - \frac{1}{xy}$}
\end{center}
\medskip
\begin{center}
{\sc N. Watt}
\end{center}
\vskip 15mm
\noindent{\bf Abstract.} \
The integral kernel $K(x,y) := \frac{1}{2} + \lfloor \frac{1}{x y}\rfloor - \frac{1}{x y}\,$
($0<x,y\leq 1$) has connections with the Riemann zeta-function
and a (recently observed) connection with the Mertens function.
In this paper we begin a general study of the eigenfunctions of $K$.
Our proofs utilise some classical real analysis (including Lebesgue's theory of integration)
and elements of the established theory of square integrable symmetric integral kernels.
\smallskip
\noindent{\bf Keywords:} \
symmetric kernel, eigenfunction, Hankel operator,
iterated kernel, periodic Bernoulli function,
Hilbert-Schmidt theorem, Riemann zeta-function, Mertens function.
\section{Introduction}
This paper reports the results of research into the properties of the eigenfunctions of the integral kernel
$K : [0,1]\times[0,1] \rightarrow {\mathbb R}$ defined by:
\begin{equation}\label{DefK}
K(x,y)=
\begin{cases}
\frac{1}{2} -
\left\{
(xy)^{-1}
\right\}
&\text{if $0<x,y\leq 1$}, \\
0 &\text{if $0\leq x,y\leq 1$ and $xy=0$} ,
\end{cases}
\end{equation}
where
$\{ \alpha\} := \alpha -\lfloor \alpha\rfloor
=\alpha - \max\{ n\in{\mathbb Z} : n\leq \alpha\}$.
Our interest in this kernel stems from a connection with Mertens sums $\sum_{n\leq x} \mu(n)$,
in which $x\geq 1$ and $\mu(n)$ is the M\"obius function. This connection,
which has its origins in a formula discovered by Mertens himself \cite[Section~3]{Mertens 1897},
is not, however, something that shall concern us in this present paper,
as we have nothing to add to what has already been written about
it in \cite{Huxley and Watt 2018}, \cite{Watt preprint-a} and \cite{Watt preprint-b}.
\par
The kernel $K$ is clearly real and symmetric
(i.e. one has
$K(x,y)=K(y,x)\in{\mathbb R}$, for $0\leq x,y\leq 1$).
It is also a (Lebesgue) measurable function on $[0,1]\times[0,1]$, with Hilbert-Schmidt norm
\begin{equation}\label{HSnorm}
\left\| K\right\|_{\rm HS}
:= \left( \int_0^1 \int_0^1 K^2(x,y) dx dy \right)^{1/2} < {\textstyle\frac{1}{2}} ,
\end{equation}
and satisfies
\begin{equation}\label{HalfHilbertSchmidt}
\min\left\{ 0 , \left( \textstyle{-\frac{1}{2}}\right)^p\right\}
\leq\int_0^1 K^p (x,y) dy < \left( \textstyle{\frac{1}{2}}\right)^p
\quad\text{($p\in\{ 1 , 2\}$, $0\leq x\leq 1$).}
\end{equation}
Note that \eqref{HalfHilbertSchmidt} contains an implicit assertion
to the effect that, for any constant $a\in [0,1]$,
the corresponding function $y\mapsto K(a,y)$
(and so also the function $y\mapsto K(y,a)$) is measurable on $[0,1]$.
\par
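The bound \eqref{HSnorm} can be checked numerically. The sketch below is not part of the paper; it estimates $\|K\|_{\rm HS}^2$ with a midpoint rule. Since $|K|\leq\frac12$ pointwise, any such average of $K^2$ lies below $\frac14$; the rapid oscillation of $K$ as $xy\to 0$ affects only the accuracy of the estimate, not this bound.

```python
# Rough midpoint-rule estimate (not part of the paper) of the squared
# Hilbert-Schmidt norm of K(x,y) = 1/2 - {1/(xy)} on (0,1] x (0,1].
def K(x, y):
    w = 1.0 / (x * y)          # w >= 1 for 0 < x, y <= 1
    return 0.5 - (w - int(w))  # 1/2 - {w}

n = 400
h = 1.0 / n
est = h * h * sum(K((i + 0.5) * h, (j + 0.5) * h) ** 2
                  for i in range(n) for j in range(n))
# |K| <= 1/2 pointwise, so the estimate lies strictly below 1/4.
assert 0.0 < est < 0.25
```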
In addition to the above mentioned properties,
$K$ has the property of being {\it non-null} (i.e. one has $\| K\|_{\rm HS} >0$).
Partly in consequence of this, there exists a maximal orthonormal system
$\left\{\phi_1,\phi_2, \ldots\right\}\subset L^2 \bigl( [0,1]\bigr)$ such that
\begin{equation}\label{DefEigenfunction}
\phi_j(x) = \lambda_j \int_0^1 K(x,y) \phi_j(y) dy
\quad\text{($0\leq x\leq 1$, $j\in{\mathbb N}$),}
\end{equation}
where $\lambda_1,\lambda_2,\ldots $ are certain non-zero real constants:
for proof, see the discussion of \cite[Section~3.8]{Tricomi 1957} and our remarks
at the end of this paragraph, and after the next paragraph.
Following \cite{Tricomi 1957}, we say that the numbers
$\lambda_1,\lambda_2,\ldots $ are the {\it eigenvalues} of $K$: the associated {\it eigenfunctions} are
$\phi_1(x),\phi_2(x),\ldots $, respectively.
In \cite{Watt preprint-a}, we have shown that $K$ has infinitely many distinct positive eigenvalues and
infinitely many distinct negative eigenvalues.
\par
Note that $L^2\bigl( [0,1]\bigr)$ denotes here the space of
(Lebesgue) measurable functions $f : [0,1] \rightarrow {\mathbb R}$ that are {\it square-integrable}
(in that $f^2$ is Lebesgue integrable on $[0,1]$),
and that what is meant (above) by
{\it orthonormality} is orthonormality with respect
to the (semi-definite) inner product
\begin{equation}
\label{DefInnerProduct}
\left\langle f , g\right\rangle
:= \int_0^1 f(x) g(x) dx
\quad\text{($f,g\in L^2\bigl( [0,1]\bigr)$).}
\end{equation}
Each $f\in L^2 \left( [0,1]\right)$ has
{\it norm} $\| f\| :=\sqrt{\langle f , f\rangle}$.
This `norm' is actually only a seminorm on $L^2 \left( [0,1]\right)$, since the condition
$\| f\| =0$ implies only that $f(x)=0$
{\it almost everywhere} (with respect to the Lebesgue measure).
\par
In the theory developed in \cite{Tricomi 1957} it is implicit that our condition \eqref{DefEigenfunction} is
replaced by the weaker condition that, for $j\in{\mathbb N}$,
one has $\lambda_j\int_0^1 K(x,y) \phi_j(y) dy = \phi_j(x)$ almost everywhere in $[0,1]$.
We can justify the stronger condition \eqref{DefEigenfunction} by observing that if
$\lambda\in{\mathbb R}\backslash\{ 0\}$ and $\phi\in L^2 \left( [0,1]\right)$
are such that $\lambda\int_0^1 K(x,y) \phi(y) dy = \phi(x)$ almost everywhere in $[0,1]$,
then, given that we have \eqref{HalfHilbertSchmidt}, it follows by the Cauchy-Schwarz inequality
that the function $\phi^{\dagger} (x) := \lambda\int_0^1 K(x,y) \phi (y) dy$
is an element of $L^2 \left( [0,1]\right)$ that satisfies
$\lambda\int_0^1 K(x,y) \phi^{\dagger} (y) dy = \phi^{\dagger} (x)$ for all $x\in [0,1]$,
and has $\| \phi^{\dagger} - \phi \| = 0$, so that $\langle \phi^{\dagger} , \psi\rangle
= \langle \phi , \psi\rangle$ for all $\psi\in L^2 \left( [0,1]\right)$.
\par
In light of what has just been noted (in the last paragraph), we
make it our convention that a function $\phi$
be considered an eigenfunction of $K$
if and only if it is an element of $L^2 \bigl( [0,1]\bigr)$ that has norm
$\| \phi\| > 0$ and is such that, for some $\lambda\in{\mathbb R}$ (necessarily
an eigenvalue of $K$),
one has $\phi(x) = \lambda \int_0^1 K(x,y) \phi(y) dy$ for all $x\in [0,1]$.
\par
It is shown in \cite[Chapter~2]{Tricomi 1957} that, for kernels
such as $K$, each eigenvalue $\lambda$ has an {\it index},
$i(\lambda) := |\{ j\in{\mathbb N} : \lambda_j = \lambda\}|$,
that is finite. Thus we may follow \cite[Section~3.8~(12)]{Tricomi 1957} in assuming the eigenvalues of $K$ to be numbered in such a way that
\begin{equation}\label{EigenvalueOrder1}
0<\left| \lambda_1\right| \leq
\left| \lambda_2\right| \leq
\left| \lambda_3\right| \leq \ldots
\end{equation}
and
\begin{equation}\label{EigenvalueOrder2}
\lambda_j\geq\lambda_{j+1}
\quad\text{when $|\lambda_j|$ and $|\lambda_{j+1}|$ are equal.}
\end{equation}
With this last assumption the sequence $\lambda_1,\lambda_2,\ldots $ becomes uniquely determined: the same cannot be said of the corresponding orthonormal
sequence of eigenfunctions,
$\phi_1,\phi_2,\ldots\ $, since one can always substitute $-\phi_j(x)$ in place of $\phi_j(x)$
(while other substitutions become possible in the event of having $i(\lambda_j) \geq 2$).
\par
Aside from the connection with Mertens sums (mentioned in the first paragraph of this section),
another reason for studying the eigenfunctions of $K$
is that there is a connection between this kernel and Riemann's zeta-function, $\zeta(s)$.
In order to make this connection apparent we begin by observing that, if $f$ is a continuous
real valued function on $[0,1]$ that satisfies
\begin{equation*}
\int_0^1 \frac{|f(x)| dx}{x} < \infty\;,
\end{equation*}
then, by application of the most rudimentary form of the Euler-Maclaurin summation formula
\cite[Theorem~7.13]{Apostol 1974}, it may be established that when $0<x\leq 1$ one has:
\begin{equation}\label{K_action=EuMacErr}
\int_0^1 K(x,y) f(y) dy
= \sum_{n>\frac1x} F\left(\frac{1}{n x}\right)
- \int_{\frac1x}^{\infty} F\left(\frac{1}{\nu x}\right) d\nu
+ F(1) K(1,x) \;,
\end{equation}
where $F(z):=\int_0^z f(y) dy\,$ ($0\leq z\leq 1$).
In particular, when $f(x):= x^s\,$ ($0\leq x\leq 1$) and $s$ is
any complex constant satisfying ${\rm Re}(s) >0$, one finds (by \eqref{K_action=EuMacErr})
that
\begin{equation}\label{ZetaConnect-1}
(s+1)x^{s+1} \int_0^1 K(x,y) y^s dy
=\zeta(s+1) - \sum_{n\leq\frac1x}\frac{1}{n^{s+1}} - \frac{x^s}{s}
+ x^{s+1} K(1,x)
\end{equation}
for $0<x\leq 1$. The novelty here lies in the presentation (not the content)
of this result: see for example \cite[Equation~(3.5.3)]{Titchmarsh 1986}, which is
equivalent to \eqref{ZetaConnect-1} in the special case where $1/x\in{\mathbb N}$.
Similarly to what is observed in \cite[Section~3.5]{Titchmarsh 1986},
one may deduce, by analytic continuation from the half plane ${\rm Re}(s)>0$,
that \eqref{ZetaConnect-1} holds for all $s\in{\mathbb C} -\{ 0\}$
satisfying the condition ${\rm Re}(s) > -1$.
\par
Though it is somewhat peripheral to our present discussion, we
remark that, since it is known that $\zeta(1+s) = s^{-1} + \gamma + O\left( |s|\right)$ for $|s|\leq 1\,$
(where $\gamma = 0{\cdot}5772\ldots\ $ is Euler's constant),
one may deduce from \eqref{ZetaConnect-1} that
\begin{equation*}
x \int_0^1 K(x,y) y^0 dy
=\gamma - \sum_{n\leq\frac1x}\frac{1}{n} + \log\left(\frac1x\right) + x K(1,x)
\qquad\text{($0<x\leq 1$)} .
\end{equation*}
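This identity can be tested numerically at $x=1$, where it reduces to $\int_0^1 K(1,y)\,dy=\gamma-\frac12$ (using $K(1,1)=\frac12$). The Python sketch below is not part of the paper; after the substitution $t=1/y$ it evaluates the resulting integral $\int_1^{\infty}(\frac12-\{t\})t^{-2}\,dt$ exactly on each interval $[n,n+1)$. The value of $\gamma$ is hardcoded, since Python's standard library does not provide it.

```python
# Numerical check (not part of the paper) of the displayed identity at x = 1,
# where it reads  \int_0^1 K(1,y) dy = gamma - 1/2.
from math import log

EULER_GAMMA = 0.5772156649015329  # Euler's constant (hardcoded assumption)

I = 0.0
for n in range(1, 200_000):
    # exact value of \int_n^{n+1} (n + 1/2 - t) t^{-2} dt
    I += (n + 0.5) * (1.0 / n - 1.0 / (n + 1)) - log((n + 1) / n)
assert abs(I - (EULER_GAMMA - 0.5)) < 1e-6
```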
Given that $\zeta(0)=-\frac12$ and $\zeta'(0)=-\frac12 \log(2\pi)$,
one may (similarly) deduce from \eqref{ZetaConnect-1} and \eqref{DefK} that
\begin{multline}\label{Stirling_Alt}
\lim_{s\rightarrow (-1)+} \int_0^1 K(x,y) y^s dy \\
=\log\left( \left\lfloor 1/x \right\rfloor !\right)
- \left\lfloor 1/x\right\rfloor \log\left(1/x\right)
+ (1/x) - \log\sqrt{2\pi / x}
\end{multline}
for $0<x\leq 1$. A well-known result closely related to this is Stirling's formula
\cite[Equations~(B.25) and~(B.26)]{Montgomery and Vaughan 2007}.
With the help of Stirling's formula one can show that
\eqref{Stirling_Alt} would remain valid if the limit that occurs on its left-hand side
were to be replaced with the improper Riemann integral
$\lim_{\varepsilon\rightarrow 0+} \int_{\varepsilon}^1 K(x,y) y^{-1} dy$.
\par
An alternative way to connect $K(x,y)$ with $\zeta(s)$ begins with the observation
in \cite[Section~1]{Watt preprint-b} to the effect that if $f$ is a measurable
complex valued function defined on $[0,1]$ that satisfies $\int_0^1 \left| f(y)\right|^2 dy <\infty$,
and if one puts
\begin{equation*}
g(x):=\int_0^1 K(x,y)f(y) dy\quad\text{for $0\leq x\leq 1$} ,
\end{equation*}
while taking $F$, $G$ and $h$ to be the functions on $[0,\infty)$ satisfying
\begin{equation*}
\sqrt{x}\cdot \left( f(x) , g(x) , K( 1 , x)\right) = \left( F(v) , G(v) , h(v)\right) \in{\mathbb C}^3
\qquad\text{($0<x=e^{-v}\leq 1$)} ,
\end{equation*}
then one will have both
\begin{equation} \label{Hankel_Op}
G(u) = \int_0^{\infty} h(u+v) F(v) dv = \left( \Gamma_h F\right) (u) \quad\text{(say)} ,
\end{equation}
for $0\leq u <\infty$, and $\int_0^\infty \left| F(v)\right|^2 dv = \int_0^1 \left| f(y)\right|^2 dy <\infty$.
Note that \eqref{Hankel_Op} implicitly defines $\Gamma_h$ to be
a certain Hankel operator on the space of complex valued functions that
are square integrable on $[0,\infty)$. Researchers investigating
such operators have found it useful to
consider the Laplace transform of the relevant kernel function:
see, for example \cite[Chapter~4]{Partington 1988}.
In our case the relevant kernel function is $h$. A connection with
$\zeta(s)$ therefore arises due to our having:
\begin{align*}
\left( {\mathcal L} h\right) \left( s - {\textstyle\frac12}\right)
&:= \int_0^{\infty} h(v) e^{-\left( s - \frac12\right) v} dv \\
&=\int_0^{\infty} K\left( 1 , e^{-v}\right) e^{- s v} dv \\
&=\int_0^1 K(1,y) y^{s - 1} dy =\frac{\zeta(s) - \frac{1}{s-1} - \frac12}{s} =\frac{\zeta(s) - \zeta(0)}{s} - \frac{1}{s-1}
\end{align*}
for ${\rm Re}(s) > 0$
(the penultimate equality here following by virtue of \eqref{ZetaConnect-1}, with
$s-1$ substituted for $s$).
\par
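The final equality above can be tested numerically at the real point $s=2$, where $\zeta(2)=\pi^2/6$. The sketch below is not part of the paper; after the substitution $t=1/y$ it evaluates $\int_0^1 K(1,y)\,y\,dy=\int_1^{\infty}(\frac12-\{t\})t^{-3}\,dt$ exactly on each interval $[n,n+1)$ and compares the result with $(\zeta(2)-\frac{1}{2-1}-\frac12)/2$.

```python
# Numerical check (not part of the paper) of
#   \int_0^1 K(1,y) y^{s-1} dy = (zeta(s) - 1/(s-1) - 1/2)/s   at s = 2.
from math import pi

lhs = 0.0
for n in range(1, 100_000):
    # exact value of \int_n^{n+1} (n + 1/2 - t) t^{-3} dt
    lhs += (n + 0.5) * 0.5 * (1.0 / n**2 - 1.0 / (n + 1) ** 2) \
        - (1.0 / n - 1.0 / (n + 1))
rhs = (pi**2 / 6 - 1.0 - 0.5) / 2.0
assert abs(lhs - rhs) < 1e-9
```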
A third indication of a connection between $K(x,y)$ and $\zeta(s)$ is
implicit in \cite[Equations~(36), (37) and~(41)]{Huxley and Watt 2018}.
This connection, and the other two (discussed above) are all
closely linked: they share a common origin.
\par
The connections just noted between $K(x,y)$ and $\zeta(s)$
play no part in the remainder of this paper, but we do have some hope that
a worthwhile application of one of them may eventually be found:
it might (for example) be the case that interesting results
concerning the eigenfunctions of $K(x,y)$ can be deduced from known properties
of $\zeta(s)$.
\par
In this paper we employ only methods from classical real analysis
(including some of Lebesgue's theory of integration) together with certain elements
of the general theory of square integrable symmetric integral kernels
(our primary reference for this theory being \cite{Tricomi 1957}).
We have aimed to answer some
basic questions concerning the eigenfunctions of $K$.
We shall show, for example, that the eigenfunctions of $K$
are continuous on $[0,1]$: this is Theorem~2.10.
In Theorem~4.1 we find that the eigenfunctions of
$K$ are differentiable at any point $x\in (0,1)$
that is not the reciprocal of a positive integer.
That theorem also supplies a useful formula for the first derivative of any eigenfunction.
In Theorem~4.13 we show, in effect, that if $\phi$ is an eigenfunction of $K$,
then the function $x\mapsto x \phi' (x) + \frac12 \phi(x)$ is a solution of
a particular integral equation with kernel $K(x,y)$.
In the latter part of Section~4, we obtain
(via a well-known theorem of Hilbert and Schmidt) certain corollaries of Theorem~4.13:
these corollaries have interesting further consequences, which we intend to discuss
in another paper (currently in preparation).
\par
In Section~5 (the final section of the paper) we show that the behaviour of
any eigenfunction of $K$ approximates that of a certain very simple
oscillatory function on any neighbourhood $[0,\varepsilon)$ of the point $x=0$ that is sufficiently
small (in terms of the relevant eigenvalue). Our main results there are Theorems~5.4 and~5.11.
\par
In addition to the above mentioned results,
we also obtain a number of upper bounds for the `sizes' of eigenfunctions and their first
derivatives: see, in particular, \eqref{PhiBoundedUniformly} and Theorems~3.2, 4.6, 4.9 and~4.11.
We think it likely that, with more work, and some new ideas, it should
be possible to significantly improve upon all of these bounds
(and, as a consequence, improve upon Corollary~4.10 also).
\par
In Lemmas~2.3, 2.4, 5.3, 5.5 and~5.6, Theorems~2.7, 2.9 and~2.11, and
Corollaries~2.8 and~2.12, we obtain certain results concerning
the iterated kernel $K_2 (x,y)$ defined at the start of the next section.
Most of these results are required for use in other proofs, but
some were included in this paper due to their own intrinsic interest.
The function $K_2 (x,y)$ is, in our opinion, interesting enough to merit further study:
our Remarks following the proof of Lemma~2.4 are connected with this matter.
\section{Continuity}
\begin{definitions}
Following \cite{Tricomi 1957}, we define
\begin{equation}\label{K2symmetrically}
K_2 (x,y) :=
\int_0^1 K(x,z) K(z,y) dz =
\int_0^1 K(x,z) K(y,z) dz ,
\end{equation}
for $0\leq x,y\leq 1$.
\end{definitions}
\par
Like $K$, the function $K_2$ is real-valued, measurable and square-integrable on $[0,1]\times [0,1]$.
The final equality in \eqref{K2symmetrically}
holds by virtue of $K$ being symmetric:
we deduce from it that $K_2$ is a symmetric integral kernel.
\par
We shall need to make use of the fact that any eigenfunction of $K$ is also an
eigenfunction of $K_2$. In particular, when $\phi$ is an eigenfunction of $K$,
and $\lambda$ the associated eigenvalue, one has:
\begin{equation}\label{K2Eigenfunction}
\phi (x) = \lambda^2 \int_0^1 K_2 (x,y) \phi (y) dy
\qquad\text{($0\leq x\leq 1$).}
\end{equation}
To verify this, observe that, since
$K$ is both measurable and bounded on $[0,1]\times [0,1]$,
while $\phi$ is an element of $L^2\bigl( [0,1]\bigr)$
satisfying $\lambda \int_0^1 K(z,y) \phi (y) dy = \phi(z)$ for $0\leq z\leq 1$,
it follows by \eqref{HalfHilbertSchmidt} and Fubini's theorem that, for $0\leq x\leq 1$, one has
\begin{align*}
\phi (x) &= \lambda \int_0^1 K(x,z)\left( \lambda \int_0^1 K(z,y) \phi (y) dy\right) dz \\
&= \lambda^2 \int_0^1 \left( \int_0^1 K(x,z) K(z,y)dz\right) \phi (y) dy ,
\end{align*}
and so (see the definition \eqref{K2symmetrically})
the result \eqref{K2Eigenfunction} is obtained.
\par
In preparation for our first application of \eqref{K2Eigenfunction},
which comes in the proof of Theorem~2.10 (below), we work on adding to what we know about $K_2$.
\begin{definitions}
For $n\in{\mathbb N}$ we define $\widetilde B_n (t)$,
the {\it $n$-th periodic Bernoulli function}, by:
\begin{equation*}
\widetilde B_n (t) := B_n\left( \{ t\}\right)
\quad\text{($t\in{\mathbb R}$),}
\end{equation*}
where $B_n (x)$ is the Bernoulli polynomial of degree $n$
(the definition of which may be found in \cite[Section~24.2]{Olver et al. 2010}). In particular,
\begin{equation}\label{DefTildeB1andB2}
\widetilde B_1 (t) := \{ t\} - {\textstyle\frac{1}{2}}\quad\text{and}\quad\widetilde B_2 (t) := \{ t\}^2 -\{ t\} + {\textstyle\frac{1}{6}}
\quad\text{($t\in{\mathbb R}$),}
\end{equation}
and so (given \eqref{DefK}) we have:
\begin{equation}\label{KtoTildeB1}
\widetilde B_1 \left( \frac{1}{w}\right) =
- K(x,y)
\quad\text{($0<x,y\leq 1$ and $xy=w$).}
\end{equation}
\end{definitions}
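As a quick numerical check of the antiderivative relation $\int_a^b\widetilde B_1(t)\,dt=\frac12\widetilde B_2(b)-\frac12\widetilde B_2(a)$, which is used repeatedly in the proofs below, the following sketch (not part of the paper) approximates the integral by a fine midpoint rule on a sample interval crossing two integers.

```python
# Numerical check (not part of the paper) of
#   \int_a^b B1(t) dt = (B2(b) - B2(a))/2
# for the periodic Bernoulli functions, on the sample interval [0.3, 2.7].
def frac(t):
    return t - int(t)  # fractional part, valid for t >= 0

def B1(t):
    return frac(t) - 0.5

def B2(t):
    f = frac(t)
    return f * f - f + 1.0 / 6.0

a, b, n = 0.3, 2.7, 100_000
h = (b - a) / n
integral = h * sum(B1(a + (i + 0.5) * h) for i in range(n))
# midpoint-rule error comes mainly from the jumps of B1 at t = 1 and t = 2
assert abs(integral - 0.5 * (B2(b) - B2(a))) < 1e-3
```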
\begin{lemma}
For $0<x,y\leq 1$, one has
\begin{align*}
K_2 (x,y) &=
- {\textstyle\frac{1}{2}} x \widetilde B_2 \left( \frac{1}{x}\right) \widetilde B_1 \left( \frac{1}{y}\right)
+\frac{1}{x} \int_{\frac{1}{x}}^{\infty} \widetilde B_2 (t) \widetilde B_1 \left( \frac{x t}{y}\right) \frac{dt}{t^3} \\
&\phantom{{=}} -
\frac{1}{2y} \int_{\frac{1}{x}}^{\infty} \widetilde B_2 (t) \frac{dt}{t^2} +
\frac{x}{2y^2}
\sum_{m > \frac{1}{y}} \frac{\widetilde B_2 \left( \frac{my}{x}\right)}{m^2} .
\end{align*}
\end{lemma}
\begin{proof}
Let $0< x,y\leq 1$. By \eqref{K2symmetrically} and \eqref{KtoTildeB1},
\begin{equation*}
K_2(x,y) = \int_{0+}^1 \widetilde B_1 \left( \frac{1}{xz}\right) \widetilde B_1 \left( \frac{1}{yz}\right) dz =
\frac{1}{x}\int_{\frac{1}{x}}^{\infty} \widetilde B_1 (t) \widetilde B_1 \left( \frac{xt}{y}\right) \frac{dt}{t^2} .
\end{equation*}
Since $\int_a^b \widetilde B_1 (t) dt = \frac{1}{2}\widetilde B_2 (b) - \frac{1}{2}\widetilde B_2 (a)$
for $a,b\in{\mathbb R}$, it follows from the above equations that
\begin{align*}
K_2(x,y) &=
\frac{1}{2x}\int_{\left( \frac{1}{x}\right){\scriptscriptstyle +}}^{\infty}
t^{-2} \widetilde B_1 \left( \frac{xt}{y}\right) d\widetilde B_2 (t) \\
&= \frac{1}{2x}\left(
\left[ t^{-2} \widetilde B_1 \left( \frac{xt}{y}\right) \widetilde B_2 (t)\right]_{\left( \frac{1}{x}\right){\scriptscriptstyle +}}^{\infty} -
\int_{\left( \frac{1}{x}\right){\scriptscriptstyle +}}^{\infty}
\widetilde B_2 (t) d\left( t^{-2} \widetilde B_1 \left( \frac{xt}{y}\right)\right)\right)
\end{align*}
(the latter equality being obtained through integration by parts).
By \eqref{DefTildeB1andB2} we have here
$t^{-2} \widetilde B_1 \left( \frac{xt}{y}\right) \widetilde B_2 (t) \rightarrow 0$ as
$t\rightarrow\infty$; since the function $t\mapsto\{ t\}$ is right-continuous, we have also
$t^{-2} \widetilde B_1 \left( \frac{xt}{y}\right) \widetilde B_2 (t) \rightarrow
x^2 \widetilde B_1 \left( \frac{1}{y}\right) \widetilde B_2 \left( \frac{1}{x}\right)$ as
$t\rightarrow \left( \frac{1}{x}\right){\scriptstyle +}$. We have, moreover,
\begin{multline*}
\int_{\left( \frac{1}{x}\right){\scriptscriptstyle +}}^{\infty}
\widetilde B_2 (t) d\left( t^{-2} \widetilde B_1 \left( \frac{xt}{y}\right)\right) \\
= \int_{\left( \frac{1}{x}\right){\scriptscriptstyle +}}^{\infty}
\widetilde B_2 (t) \widetilde B_1 \left( \frac{xt}{y}\right) d\left( t^{-2} \right) +
\int_{\left( \frac{1}{x}\right){\scriptscriptstyle +}}^{\infty}
\widetilde B_2 (t) t^{-2} d\left( \widetilde B_1 \left( \frac{xt}{y}\right)\right) \\
= -2 \int_{\frac{1}{x}}^{\infty}
\widetilde B_2 (t) \widetilde B_1 \left( \frac{xt}{y}\right) t^{-3} dt +
\int_{\left( \frac{1}{x}\right){\scriptscriptstyle +}}^{\infty}
\widetilde B_2 (t) t^{-2} d\left\{ \frac{xt}{y}\right\}
\end{multline*}
and
\begin{align*}
\int_{\left( \frac{1}{x}\right){\scriptscriptstyle +}}^{\infty}
\widetilde B_2 (t) t^{-2} d\left\{\frac{xt}{y}\right\}
&= \int_{\frac{1}{x}}^{\infty}
\widetilde B_2 (t) t^{-2} d\left( \frac{xt}{y}\right) -
\int_{\left( \frac{1}{x}\right){\scriptscriptstyle +}}^{\infty}
\widetilde B_2 (t) t^{-2} d\left\lfloor \frac{xt}{y}\right\rfloor \\
&=\frac{x}{y} \int_{\frac{1}{x}}^{\infty} \widetilde B_2 (t) t^{-2} dt -
\sum_{m > \frac{1}{y}} \widetilde B_2 \left( \frac{ym}{x}\right) \left( \frac{ym}{x}\right)^{-2} ,
\end{align*}
and so we obtain what is stated in the lemma.
\end{proof}
\begin{lemma}
When $0<x,y\leq 1$, one has:
\begin{equation}\label{K2OverxBound}
\frac{\left| K_2 (x,y)\right|}{x} \leq
\frac{1}{12} +
\frac{x}{\left( 36 \sqrt{3}\right) y} +
\frac{1}{2 y^2} \Biggl| \sum_{m > \frac{1}{y}}
\frac{\widetilde B_2 \left( \frac{my}{x}\right)}{m^2}
\Biggr| ,
\end{equation}
\begin{equation}\label{K2BoundOffDiagonal}
\left| K_2 (x,y)\right| \leq
\left({\textstyle\frac{1}{4} + \frac{1}{36 \sqrt{3}}}\right)
\cdot
\frac{\min\{ x , y\}}{\max\{ x , y\}}
\end{equation}
and
\begin{equation} \label{K2BoundOnDiagonal}
\left| K_2 (x,x) - {\textstyle\frac{1}{12}} \right| \leq
\left({\textstyle\frac{1}{6} + \frac{1}{36 \sqrt{3}}}\right)\cdot x .
\end{equation}
\end{lemma}
\begin{proof}
The result \eqref{K2OverxBound} follows from Lemma~2.3 by applying the triangle inequality and
then observing that one has:
\begin{equation*}
\left| \widetilde B_2 \left( \frac{1}{x}\right) \widetilde B_1 \left( \frac{1}{y}\right)\right|
\leq {\textstyle\left(\frac{1}{6}\right)\left(\frac{1}{2}\right) = \frac{1}{12}} ,
\end{equation*}
\begin{equation*}
\biggl|\int_{\frac{1}{x}}^{\infty} \widetilde B_2 (t)
\widetilde B_1 \left( \frac{x t}{y}\right) t^{-3} dt \biggr|
\leq\int_{\frac{1}{x}}^{\infty} {\textstyle\frac{1}{12}} t^{-3} dt = {\textstyle\frac{1}{24}} x^2
\end{equation*}
and
\begin{align*}
\int_{\frac{1}{x}}^{\infty} \widetilde B_2 (t) \frac{dt}{t^2}
&= {\textstyle\frac{1}{3}} \int_{\frac{1}{x}}^{\infty} t^{-2} d\widetilde B_3 (t) \\
&= {\textstyle\frac{1}{3}} \left(
\left[ t^{-2} \widetilde B_3 (t)\right]_{\frac{1}{x}}^{\infty}
- \int_{\frac{1}{x}}^{\infty} \widetilde B_3 (t) d\left( t^{-2}\right) \right) ,
\end{align*}
where $\widetilde B_3 (t) = \{ t\}^3 - \frac{3}{2} \{ t\}^2 + \frac{1}{2} \{ t\}$, so that
$\max_{t\in{\mathbb R}} \left| \widetilde B_3 (t)\right| = \frac{1}{12\sqrt{3}}$ and
\begin{equation*}
\biggl| \int_{\frac{1}{x}}^{\infty} \widetilde B_2 (t) \frac{dt}{t^2} \biggr|
\leq {\textstyle\frac{1}{36\sqrt{3}}} \left( \left(\frac{1}{x}\right)^{-2}
+ \int_{\infty}^{\frac{1}{x}} d\left( t^{-2}\right) \right) = {\textstyle\frac{1}{18\sqrt{3}}} x^2 .
\end{equation*}
\par
We consider next \eqref{K2BoundOffDiagonal}. Since both sides of this result are
invariant under the permutation $(x,y)\mapsto (y,x)$, we may assume (in our proof of it) that
$0<x\leq y\leq 1$. By \eqref{K2OverxBound} and the uniform bound $\left| \widetilde B_2 (t)\right| \leq\frac{1}{6}$,
we find that
\begin{align*}
\left| K_2 (x,y)\right|\cdot \frac{y}{x} &\leq
\frac{y}{12} + \frac{x}{\left( 36 \sqrt{3}\right)} +
\frac{1}{12 y} \sum_{m > \frac{1}{y}} \frac{1}{m^2} \\
& \leq \frac{1}{12} + \frac{1}{36 \sqrt{3}} +
\frac{1}{12 y} \cdot\left( y^2 + y\right) = \frac{2 + y}{12} + \frac{1}{36 \sqrt{3}}.
\end{align*}
The required case ($0<x\leq y\leq 1$) of \eqref{K2BoundOffDiagonal} follows.
\par
In order to obtain \eqref{K2BoundOnDiagonal} (and so complete the proof of the lemma)
we note firstly that our proof of \eqref{K2OverxBound} shows, in fact, that one has
\begin{equation*}
\biggl| \frac{K_2 (x,y)}{x} - \frac{1}{2 y^2} \sum_{m > \frac{1}{y}}
\frac{\widetilde B_2 \left( \frac{my}{x}\right)}{m^2} \biggr| \leq
\frac{1}{12} + \frac{x}{\left( 36 \sqrt{3}\right) y}
\quad\text{($0<x,y\leq 1$).}
\end{equation*}
By specialising this to the case in which $y=x\in (0,1]$, and then noting that
\begin{equation*}
\sum_{m>\frac{1}{x}} \frac{\widetilde B_2 (m)}{m^2} = \sum_{m>\frac{1}{x}} {\textstyle\frac{1}{6}} m^{-2}
\in \left[ {\textstyle\frac{1}{6}} \left( x - x^2 \right) ,
{\textstyle\frac{1}{6}} \left( x + x^2 \right) \right] ,
\end{equation*}
one arrives at the bound
$\left| K_2(x,x) - \frac{1}{12}\right| /x \leq \frac{1}{6} + \frac{1}{36 \sqrt{3}}$.
The result \eqref{K2BoundOnDiagonal} follows.
\end{proof}
\begin{remarks} With regard to the above estimate \eqref{K2BoundOnDiagonal}, it should
be noted that, by a method entirely different from
the methods used in the proofs of Lemmas~2.3 and~2.4, it can be shown that one has
\begin{equation}\label{K2xxactly}
K_2 (x,x) = K^2 (1,x)
+ \frac{\left\lfloor \frac1x\right\rfloor \log\left(\frac1x\right)
- \frac1x + \log\sqrt{\frac{2\pi}{x}}
- \log\left( \left\lfloor \frac1x \right\rfloor !\right)}{\left( \frac{x}{2}\right)} \;,
\end{equation}
for $0<x\leq 1$.
We believe that the same method will also yield an interesting formula
for $K_2 (x,\alpha x)$ in the more general case where one has
$\alpha\in{\mathbb Q}$ and $0<x,\alpha x\leq 1$.
Notice that, by \eqref{K2xxactly} and what was noted just after \eqref{Stirling_Alt},
one has
\begin{equation*}
{\textstyle\frac12} x \left( K^2 (1,x) - K_2 (x,x)\right)
= \lim_{\varepsilon\rightarrow 0+} \int_{\varepsilon}^1 \frac{K(x,y) dy}{y} \;,
\end{equation*}
for $0<x\leq 1$. Having obtained this result via a somewhat indirect route, we
are curious to know if there exists a more direct proof of it.
\end{remarks}
\begin{definitions}
For $r\in (-1,\infty)$ and $a,b\in [0,1]$, we put
\begin{equation*}
\Delta_r (a,b) :=
\int_0^1 \left| K(a,z) - K(b,z)\right| z^r dz
\end{equation*}
(the existence of this integral following from \eqref{HalfHilbertSchmidt}, for $p=1$,
combined with the fact that $K$ is bounded on $[0,1]\times [0,1]$).
\end{definitions}
Clearly $\Delta_r (b,b)=0$ for $r\in (-1,\infty)$ and $0\leq b\leq 1$.
We have also the following lemma.
\begin{lemma}
Let $0<a_0\leq 1$. Suppose, moreover, that $0 < a_n \leq 1$ for all $n\in {\mathbb N}$, and
that one has $\lim_{n\rightarrow\infty} a_n = a_0$.
Then, for all $r\in (-1,\infty)$,
one has $\lim_{n\rightarrow\infty} \Delta_r \left( a_n , a_0\right) = 0$.
\end{lemma}
\begin{proof}
Suppose that $r>-1$.
It follows from Definitions~2.5 and Equation~\eqref{DefK}, via a couple of changes of
the variable of integration, that, for $n\in{\mathbb N}$, one has
\begin{align}\label{Beweis-a1}
\Delta_r\left( a_n , a_0\right) &= \int_0^1 \left| K\left( a_n z , 1\right) - K\left( a_0 z , 1\right) \right| z^r dz
\nonumber\\
&= \int_0^{\infty} \left| K\left( e^{-(u+A_n)} , 1\right) - K\left( e^{-(u+A_0)} , 1\right) \right|
e^{-(r+1)u} du\nonumber\\
&= \frac{1}{a_0^{r+1}} \int_{A_0}^{\infty} \left| e^{(r+1)\delta_n} f_r\left( t + \delta_n\right) - f_r(t)\right| dt ,
\end{align}
where $A_m = \log\left( 1/a_m\right)\in [0,\infty)$ ($m=0,1,2,\ldots\ $),
$\delta_n = A_n - A_0\in{\mathbb R}$ ($n=1,2,3,\ldots\ $)
and $f_r$ is the function defined on ${\mathbb R}$ by:
\begin{equation}\label{Beweis-a2}
f_r(t) := \begin{cases}
e^{-(r+1)t} K\left( e^{-t} , 1\right) & \text{if $t\geq 0$} , \\ 0 & \text{otherwise} .
\end{cases}
\end{equation}
\par
By another change of variable, it follows from \eqref{Beweis-a2} and
\eqref{DefK} that one has
\begin{equation*}
\int_{-\infty}^{\infty} \left| f_r(t)\right| dt = \int_0^1 \left| K(x,1)\right| x^r dx < \infty
\end{equation*}
(given that $r>-1$), so that $f_r$ is Lebesgue integrable on ${\mathbb R}$ (i.e. $f_r\in L^1 ({\mathbb R})$).
\par
We observe now that, by the triangle inequality, it follows from
\eqref{Beweis-a1} that one has
\begin{equation}\label{Beweis-a3}
0 \leq \Delta_0 \left( a_n , a_0\right) \leq \frac{c_n (r) + d_n (r)}{a_0^{r+1}}
\qquad\text{($n\in{\mathbb N}$),}
\end{equation}
where:
\begin{equation}\label{Beweis-a4}
0\leq c_n (r) = \left| e^{(r+1)\delta_n} - 1 \right| \cdot \int_{A_0}^{\infty}
\left| f_r\left( t + \delta_n\right)\right| dt
\leq \left| e^{(r+1)\delta_n} - 1 \right| \cdot \int_{-\infty}^{\infty} \left| f_r (t)\right| dt
\end{equation}
and
\begin{equation}\label{Beweis-a5}
0\leq d_n (r) = \int_{A_0}^{\infty} \left| f_r\left( t + \delta_n\right) - f_r(t)\right| dt
\leq \int_{-\infty}^{\infty} \left| f_r\left( t + \delta_n\right) - f_r(t)\right| dt .
\end{equation}
Since $\lim_{n\rightarrow\infty} a_n = a_0 > 0$, we have here
$\lim_{n\rightarrow\infty} A_n = \log\left( 1 / a_0\right) = A_0$,
so that $\lim_{n\rightarrow\infty} \delta_n = A_0 - A_0 = 0$.
Therefore, given that $f_r$ is independent of $n$, and satisfies $f_r\in L^1 ({\mathbb R})$,
it follows by \eqref{Beweis-a5} and the case $p=1$ of
\cite[Theorem~8.19]{Wheeden and Zygmund 2015} that we have $\lim_{n\rightarrow\infty} d_n (r) = 0$.
Moreover, since $0=\exp\left( \lim_{n\rightarrow\infty} (r+1)\delta_n\right) - 1
= \lim_{n\rightarrow\infty} \left( \exp\left( (r+1)\delta_n\right) - 1\right)$,
it follows from \eqref{Beweis-a4} that we have $\lim_{n\rightarrow\infty} c_n (r) = 0$.
By \eqref{Beweis-a3} and our last two findings, we can deduce (as was required) that
$\Delta_r \left( a_n , a_0\right) \rightarrow 0$ as $n\rightarrow\infty$.
\end{proof}
\begin{theorem}
The function $K_2$ is continuous on
$\left( [0,1]\times [0,1]\right) \backslash \left\{ (0,0)\right\}$.
\end{theorem}
\begin{proof}
Since the kernel function $K_2 (x,y)$ is symmetric, it will be
enough to show that it is continuous on $[0,1]\times (0,1]$.
Note, moreover, that the definitions \eqref{DefK} and \eqref{K2symmetrically} imply that
$K_2(0,y)$ is constant for $0\leq y\leq 1$, and so
we need only show that one has $K_2(x_n,y_n)\rightarrow K_2(x,y)$ as $n\rightarrow\infty$,
whenever it is the case that $(x_1,y_1),(x_2,y_2),(x_3,y_3),\ldots $
is a sequence of points in $(0,1]\times (0,1]$
that converges (with respect to the Euclidean metric) to a limit $(x,y)\in [0,1]\times (0,1]$.
In the case just described one necessarily has both $x_n\rightarrow x\in [0,1]$
and $y_n\rightarrow y\in (0,1]$,
in the limit as $n\rightarrow\infty$.
By \eqref{K2symmetrically}, one has, moreover,
\begin{align}\label{K2Sandwich}
\left| K_2 \left( x_n,y_n \right) - K_2 (x,y)\right| &\leq
\left| K_2 \left( x_n,y_n \right) - K_2 \left( x ,y_n \right)\right| +
\left| K_2 \left( x , y_n \right) - K_2 (x,y) \right| \nonumber\\
&\leq \int_0^1 \left| K\left( x_n , z \right) - K(x,z)\right|\cdot \left| K\left( y_n , z\right)\right| dz \nonumber\\
&\phantom{{\leq}} +
\int_0^1 \left| K\left( x , z\right)\right| \cdot \left| K\left( y_n , z \right) - K(y,z)\right| dz \nonumber\\
&\leq {\textstyle\frac{1}{2}} \Delta_0 \left( x_n , x\right) + {\textstyle\frac{1}{2}} \Delta_0 \left( y_n , y\right)
\end{align}
(the last inequality following since, by \eqref{DefK},
$K$ has range $( -\frac{1}{2} , \frac{1}{2} ]$).
By application of the case $r=0$ of Lemma~2.6, we find that when $x,y\in (0,1]$ one has both
$\Delta_0 \left( x_n , x\right)\rightarrow 0$ and $\Delta_0 \left( y_n , y\right)\rightarrow 0$,
as $n\rightarrow\infty$. By this and \eqref{K2Sandwich},
it follows that, when $x,y\in (0,1]$, one does have
$\lim_{n\rightarrow\infty} K_2 \left( x_n,y_n \right) = K_2 (x,y)$ (as required).
\par
In the remaining cases, where $x=0$ and $y\in (0,1]$, we note
that we have $K_2 (x,y) = K_2 (0,y) = 0$, so that this proof will be complete
once we are able to show that $K_2\left( x_n , y_n\right) \rightarrow 0$ as $n\rightarrow\infty$.
With this in mind, we observe (firstly) that
we have here $\lim_{n\rightarrow\infty} x_n / y_n = x / y = 0 / y = 0$,
and (secondly) that the result \eqref{K2BoundOffDiagonal} of Lemma~2.4 implies that
$\left| K_2 \left( x_n , y_n\right)\right| < x_n / y_n$ for all $n\in{\mathbb N}$.
This shows that
$\lim_{n\rightarrow\infty} K_2 \left( x_n , y_n\right) = 0$,
so the proof is complete.
\end{proof}
\begin{corollary}
For any constant $a\in[0,1]$, the functions $y\mapsto K_2 (a,y)$ and $y\mapsto K_2 (y,a)$
are continuous on $[0,1]$.
\end{corollary}
\begin{proof}
The cases with $0<a\leq 1$ follow immediately from Theorem~2.7:
as for the remaining case, where one has $a=0$, it is enough that we observe
that one has $K_2 (0,y) = K_2 (y,0)=0$ for $0\leq y\leq 1$.
\end{proof}
\begin{theorem}
The function $K_2 : [0,1]\times[0,1]\rightarrow {\mathbb R}$ is not continuous at the point $(0,0)$.
Furthermore, the set of functions $f : [0,1]\times[0,1]\rightarrow {\mathbb R}$
such that the set ${\cal I}_f := \{ (x,y) \in [0,1]\times [0,1] : f(x,y) = K_2 (x,y)\}$ is dense in $[0,1]\times [0,1]$ does not contain one that is continuous at the point $(0,0)$.
\end{theorem}
\begin{proof}
For $f=K_2$, one has ${\cal I}_f = [0,1]\times [0,1]$. The first part of the theorem
is therefore implied by the second part, and so a proof of the second part is
all that is required.
\par
We adopt the method of `proof by contradiction'. Suppose that the
second part of the theorem is false. There must then exist a function
$f : [0,1]\times[0,1]\rightarrow {\mathbb R}$ that is continuous at $(0,0)$ and that (at the same time)
satisfies $f(x,y)=K_2(x,y)$ for a set of points $(x,y)$ that is dense in $[0,1]\times[0,1]$.
It follows from the latter part of this that if $\alpha\in (0,1]$ and $(\varepsilon_n)$ is an infinite
sequence of positive numbers, then there exists, for each $n\in{\mathbb N}$,
some pair of real numbers $x_n,y_n$ satisfying both
\begin{equation}\label{near1overN}
\frac{1}{n+\varepsilon_n} < x_n < \frac{1}{n}
\quad \text{and}\quad
\frac{\alpha}{n+\varepsilon_n} < y_n < \frac{\alpha}{n}
\end{equation}
and
\begin{equation}\label{fEqualK2}
f\left( x_n , y_n\right) = K_2\left( x_n , y_n\right) .
\end{equation}
\par
Let $\alpha$ satisfy $0<\alpha\leq 1$.
By Theorem~2.7 the function $K_2$ is continuous at each point in
the sequence
$(1 , \alpha ),(\frac{1}{2},\frac{\alpha}{2}),(\frac{1}{3},\frac{\alpha}{3}),\ldots $ .
Therefore, for each $n\in{\mathbb N}$, there exists some number $\varepsilon_n > 0$ such
that one has
\begin{equation}\label{K2near1overN}
(x,y)\in \left( \frac{1}{n+\varepsilon_n} , \frac{1}{n} \right) \times
\left(\frac{\alpha}{n+\varepsilon_n} , \frac{\alpha}{n} \right)
\Longrightarrow
\left| K_2 (x,y) - K_2\left( \frac{1}{n} , \frac{\alpha}{n}\right)\right| < \frac{1}{n} .
\end{equation}
Thus (bearing in mind the conclusions of the previous paragraph) we deduce the existence of
sequences, $\varepsilon_1,\varepsilon_2,\varepsilon_3,\ldots $ and
$(x_1,y_1),(x_2,y_2),(x_3,y_3),\ldots $, such that when $n\in{\mathbb N}$ one has
both $\varepsilon_n > 0$ and what is stated in \eqref{near1overN}, \eqref{fEqualK2} and
\eqref{K2near1overN}. Considering now any one such choice of this pair of sequences,
it follows by \eqref{near1overN}-\eqref{K2near1overN} that, for all $n\in{\mathbb N}$, one has
\begin{equation*}
\left| f \left( x_n , y_n \right) - K_2\left( \frac{1}{n} , \frac{\alpha}{n}\right)\right| =
\left| K_2 \left( x_n , y_n \right) - K_2\left( \frac{1}{n} , \frac{\alpha}{n}\right)\right| < \frac{1}{n} .
\end{equation*}
Since $f$ is continuous at $(0,0)$, and since \eqref{near1overN} implies
that $(x_n,y_n)\rightarrow (0,0)$ (with respect to the Euclidean metric) as $n\rightarrow\infty$,
we have here $\lim_{n\rightarrow\infty} f(x_n,y_n) = f(0,0)$, and so (given that
$\lim_{n\rightarrow\infty} \frac{1}{n} = 0$) are able to conclude that
\begin{equation*}
f(0,0) = \lim_{n\rightarrow\infty} K_2\left( \frac{1}{n} , \frac{\alpha}{n}\right) .
\end{equation*}
Note that $\alpha$ here denotes an arbitrary point in the interval $(0,1]$,
so that it has now been established that the last equality above
holds for all $\alpha\in (0,1]$.
By considering the special case $\alpha = 1$,
we deduce that
\begin{equation*}
f(0,0) = \lim_{n\rightarrow\infty} K_2\left( \frac{1}{n} , \frac{1}{n}\right)
= \lim_{n\rightarrow\infty} {\textstyle\frac{1}{12}} \cdot \left( 1 + O\left( n^{-1}\right)\right)
= {\textstyle\frac{1}{12}}
\end{equation*}
(the middle equality here holding by virtue of \eqref{K2BoundOnDiagonal}).
Therefore, for each fixed choice of $\alpha\in (0,1]$,
we have $\lim_{n\rightarrow\infty} K_2\left( n^{-1} , \alpha n^{-1}\right) = \frac{1}{12}$.
The result \eqref{K2BoundOffDiagonal}, however, shows that one has
$\left| K_2\left( \frac{1}{n} , \frac{\alpha}{n}\right)\right|
\leq \bigl( \frac{1}{4} + \frac{1}{36 \sqrt{3}}\bigr) \alpha < \frac{4}{15}\alpha$
for $0<\alpha\leq 1$, $n\in{\mathbb N}$. In particular, when $\alpha = \frac{5}{16}$ (for example),
one has
$\frac{1}{12}=\frac{4}{15}\alpha > \sup\bigl\{ K_2\left( \frac{1}{n} , \frac{\alpha}{n}\right) : n\in{\mathbb N}\bigr\} $.
This is incompatible with our earlier finding that
$\lim_{n\rightarrow\infty} K_2\left( n^{-1} , \alpha n^{-1}\right) = \frac{1}{12}$
if $0<\alpha\leq 1$. In light of the contradiction evident here,
we are left with no option but to conclude that
the second part of the theorem cannot be false; we have therefore shown it to be (instead) true, which
is all that we need to complete this proof.
\end{proof}
\begin{theorem} All eigenfunctions of $K$ (including, in particular, the functions
$\phi_1,\phi_2,\phi_3,\ldots $) are continuous on $[0,1]$.
\end{theorem}
\begin{proof}
It will be enough to show that one has
\begin{equation}\label{Beweis-b1}
\lim_{n\rightarrow\infty} \phi\left( x_n\right) = \phi\left( x_0\right)
\end{equation}
if $\phi$ is an eigenfunction of $K$ and
$x_0,x_1,x_2,\ldots\ $ a sequence of elements of $[0,1]$ satisfying $x_n\rightarrow x_0$
as $n\rightarrow\infty$. Accordingly, we suppose now that the conditions just mentioned
(after \eqref{Beweis-b1}) are satisfied.
By \eqref{K2Eigenfunction}, we have
\begin{equation}\label{Beweis-b2}
\phi \left( x_n \right) = \int_0^1 f_n (y) dy\qquad\text{($n=0,1,2,\ldots\ $),}
\end{equation}
where $f_n (y) := \lambda^2 K_2\left( x_n , y\right) \phi (y)$, with $\lambda$
being the relevant eigenvalue of $K$.
\par
From \eqref{Beweis-b2} it follows (implicitly) that all functions in
the sequence $f_1,f_2,f_3,\ldots\ $ are measurable on $[0,1]$.
Note also that, by Corollary~2.8, we have
$K_2\left( x_0 , y\right) = K_2\left( \lim_{n\rightarrow\infty} x_n , y\right)
= \lim_{n\rightarrow\infty} K_2\left( x_n , y\right)$ for $0\leq y\leq 1$, and
so it is certainly the case that one has $\lim_{n\rightarrow\infty} f_n (y) = f_0 (y)$
almost everywhere in $[0,1]$. In view of the two points just noted, it
follows by Lebesgue's `Dominated Convergence Theorem' \cite[Theorem~5.36]{Wheeden and Zygmund 2015}
that, if there exists a function $F$ that is integrable on $[0,1]$ and
satisfies $F(y)\geq \sup\left\{ \left| f_n (y)\right| : n\in{\mathbb N}\right\}$
almost everywhere in $[0,1]$, then one will have
\begin{equation*}
\int_0^1 f_0 (y) dy = \int_0^1 \left( \lim_{n\rightarrow\infty} f_n (y)\right) dy
=\lim_{n\rightarrow\infty} \int_0^1 f_n (y) dy .
\end{equation*}
This last outcome would immediately imply, by virtue of \eqref{Beweis-b2}, that
the equality in \eqref{Beweis-b1} does indeed hold. Therefore, in order to complete
this proof, we have only to observe now that the function
$F(y) := \frac{1}{4} \lambda^2 |\phi (y)|$ is integrable over $[0,1]$
(the fact that we have $\phi\in L^2\bigl( [0,1]\bigr)$ implies this, since the interval $[0,1]$ is bounded),
and that, from the definition of $f_n$ and the bound $\left| K_2 (x,y)\right| < \frac{1}{4}$
($0\leq x,y\leq 1$), implied by \eqref{DefK} and \eqref{K2symmetrically}, it follows that the same function $F$
satisfies $F(y)\geq \sup\left\{ \left| f_n (y)\right| : n\in{\mathbb N}\right\}$ for $0\leq y\leq 1$.
\end{proof}
\begin{remarks}{\it 1)}
Let $j\in{\mathbb N}$. Then, by \eqref{DefEigenfunction} and \eqref{DefK},
one has $\phi_j (0) = 0$. Thus it follows from Theorem~2.10 that
one has $\lim_{x\rightarrow 0+} \phi_j (x) = 0$. In the next section we
discover more about how the eigenfunctions $\phi_1 (x),\phi_2 (x),\phi_3 (x), \ldots\ $
behave as $x$ tends towards $0$ from above.
\item{\it 2)} We need Theorem~2.7, Corollary~2.8 and Theorem~2.10 for the proof of our next result,
the `bilinear formula' for $K_2$. In \cite[Sections~3.9, 3.10 and~3.12]{Tricomi 1957} and
\cite[Sections~7.3 and~7.4]{Kanwal 1997} (for example), it is shown that
the bilinear formula for a kernel $k(x,y)$ is valid if the function $k$ satisfies certain conditions.
Yet, neither of these two references, nor any other that we know of, quite
manages to cover the case of our kernel $K_2$: the discontinuity of $K_2 (x,y)$ at the point
$(x,y)=(0,0)$ prevents this.
\end{remarks}
\begin{theorem} Let $0<\varepsilon\leq 1$.
Then the series
\begin{equation}\label{BilinearSeries}
\frac{\phi_1 (x) \phi_1 (y)}{\lambda_1^2} +
\frac{\phi_2 (x) \phi_2 (y)}{\lambda_2^2} +
\frac{\phi_3 (x) \phi_3 (y)}{\lambda_3^2} +\ \ldots\
\end{equation}
converges uniformly for $(x,y)\in [0,1]^2 \backslash (0,\varepsilon)^2$.
For $0\leq x,y\leq 1$, this series is absolutely convergent, and one has:
\begin{equation}\label{K2BilinearFormula}
\sum_{j=1}^{\infty} \frac{\phi_j (x) \phi_j(y)}{\lambda_j^2} =
K_2 (x,y) .
\end{equation}
\end{theorem}
\begin{proof}
Let $x_1\in [0,1]$. Put $f(y) := K_2 \left( x_1 , y\right)$,
so that for $0\leq y\leq 1$ one has
$f(y) = \int_0^1 K(y,z) g(z) dz$, where $g(z):=K(x_1,z)$.
By Corollary~2.8, the function $f$ is continuous on $[0,1]$.
Since the kernel $K$ is a measurable function on $[0,1]\times [0,1]$
that satisfies both \eqref{HSnorm} and \eqref{HalfHilbertSchmidt}, it follows
by an application \cite[Page~113]{Tricomi 1957} of the `Hilbert-Schmidt theorem'
\cite[Page~110]{Tricomi 1957} that the series \eqref{BilinearSeries}
converges, both absolutely and uniformly, for $(x,y)\in \left\{ x_1\right\}\times [0,1]$, and
that the corresponding sums, $F(y) := \sum_{j=1}^{\infty} \lambda_j^{-2} \phi_j \left( x_1\right) \phi_j (y)$
($0\leq y\leq 1$), satisfy $F(y) = f(y)$ almost everywhere in $[0,1]$, so that
the set $\left\{ y \in [0,1] : F(y) = f(y)\right\}$ is certainly dense in $[0,1]$.
\par
For $x=x_1$, each partial sum of the series \eqref{BilinearSeries}
is a linear combination of finitely many of the
eigenfunctions $\phi_1 (y),\phi_2 (y), \ldots\ $, and so, by
Theorem~2.10, is a function of $y$ that is continuous on $[0,1]$.
Therefore, given that we know these partial sums to be the terms of a sequence
converging uniformly to the limit $F(y)$ on $[0,1]$, it follows
that that limit, $F$, is continuous on $[0,1]$.
Thus, both $f$ and $F$ are continuous on $[0,1]$, so that the set
$\left\{ y \in [0,1] : F(y) = f(y)\right\}$, being dense in $[0,1]$,
must contain the interval $[0,1]$. That is, we have \eqref{K2BilinearFormula}
for $x=x_1$ and all $y\in [0,1]$.
\par
Since $x_1$ here denotes an arbitrary fixed point in the interval $[0,1]$,
it has now been established that, for $0\leq x,y\leq 1$,
the equality \eqref{K2BilinearFormula} holds and the infinite sum
occurring in \eqref{K2BilinearFormula} converges absolutely.
\par
We now have only to prove the part of the theorem
concerning uniform convergence on $[0,1]^2\backslash (0,\varepsilon)^2$.
We begin by observing that, since one has $\phi_j (x) \phi_j (y) = \phi_j (y) \phi_j (x)$
for $j\in{\mathbb N}$ and $0\leq x,y\leq 1$, it will be enough to establish
that the series \eqref{BilinearSeries} converges uniformly
for $x\in \{ 0\}\cup [\varepsilon , 1]$, $y\in [0,1]$.
We know, from the first paragraph of this proof (for example), that the series
\eqref{BilinearSeries} does converge uniformly for $x=0$ and $0\leq y\leq 1$.
All that now remains to be shown is that the series \eqref{BilinearSeries} converges uniformly
for $(x,y)\in [\varepsilon , 1]\times [0,1]$.
To this end, we note that, by the Cauchy-Schwarz inequality and \eqref{K2BilinearFormula},
\eqref{K2symmetrically} and \eqref{HalfHilbertSchmidt}, it follows that, when $N\in{\mathbb N}$, one has:
\begin{align*}
\left| \sum_{j=N+1}^{\infty} \frac{\phi_j (x) \phi_j(y)}{\lambda_j^2} \right| &\leq
\left( \sum_{j=N+1}^{\infty} \frac{\phi_j^2 (x)}{\lambda_j^2} \right)^{\frac{1}{2}}
\left( \sum_{j=N+1}^{\infty} \frac{\phi_j^2 (y)}{\lambda_j^2} \right)^{\frac{1}{2}} \\
&\leq \left( K_2 (y,y)\right)^{\frac{1}{2}}
\left( \sum_{j=N+1}^{\infty} \frac{\phi_j^2 (x)}{\lambda_j^2} \right)^{\frac{1}{2}} \leq
{\textstyle\frac{1}{2}} \left( \sum_{j=N+1}^{\infty} \frac{\phi_j^2 (x)}{\lambda_j^2} \right)^{\frac{1}{2}} ,
\end{align*}
for $0\leq x,y\leq 1$.
Therefore, all that we now have to do (in order to complete this proof) is show that the series
$\lambda_1^{-2} \phi_1^2 (x) + \lambda_2^{-2} \phi_2^2 (x) + \lambda_3^{-2} \phi_3^2 (x) + \ \ldots\ $
converges uniformly for $x\in [\varepsilon,1]$.
\par
Putting $s_n (x) := \sum_{j=1}^N \lambda_j^{-2} \phi_j^2 (x)$ ($N\in{\mathbb N}$, $0\leq x\leq 1$), we
observe that $s_1 (x)\leq s_2 (x) \leq s_3 (x) \leq\ \ldots\ $ ($0\leq x\leq 1$), that
the functions $s_1,s_2,s_3,\ldots\ $ are continuous on $[0,1]$ (by virtue of Theorem~2.10),
and that, by \eqref{K2BilinearFormula}, it follows that, for $0\leq x\leq 1$, one has
$\lim_{N\rightarrow\infty} s_N (x) = K_2 (x,x)$, which, by Theorem~2.7, is a continuous
function of $x$ on the interval $[\varepsilon , 1]$.
By Dini's theorem \cite[Theorem~7.13]{Rudin 1976}, it follows from what we have just noted
that the sequence $s_1 (x), s_2 (x), s_3(x) , \ldots\ $ is uniformly convergent on
the compact interval $[\varepsilon , 1]\subset [0,1]$: this means, of course,
that the same is true of the series
$\lambda_1^{-2} \phi_1^2 (x) + \lambda_2^{-2} \phi_2^2 (x) + \lambda_3^{-2} \phi_3^2 (x) + \ \ldots\ $.
\end{proof}
\begin{remarks}
The above proof is, in essence, an adaptation of the proof
of `Mercer's theorem' that appears in \cite[Section~3.12]{Tricomi 1957}.
\end{remarks}
\begin{corollary}
One has
\begin{equation}\label{K2diagonal}
\sum_{j=1}^{\infty}
\frac{\phi_j^2 (x)}{\lambda_j^2} =
K_2 (x,x) =
\int_0^1 K^2 (x,z) dz \leq \frac{1}{4}
\quad\text{($0\leq x\leq 1$),}
\end{equation}
and (in consequence of this) one has also
\begin{equation}\label{PhiBoundedUniformly}
\left| \phi_j (x)\right| \leq
{\textstyle\frac{1}{2}} \left| \lambda_j\right|
\quad\text{($0\leq x\leq 1$, $j\in{\mathbb N}$),}
\end{equation}
\begin{equation}\label{K(x,y)_in_mean}
\lim_{H\rightarrow\infty} \int_0^1 \left( K(x,y) -
\sum_{h=1}^H \frac{\phi_h (x) \phi_h(y)}{\lambda_h} \right)^2 dy = 0
\quad\text{($0\leq x\leq 1$)}
\end{equation}
and
\begin{equation}\label{K2Trace}
\sum_{h=1}^{\infty} \frac{1}{\lambda_h^2} = \| K\|_{\rm HS}^2\;.
\end{equation}
\end{corollary}
\begin{proof}
The result \eqref{K2diagonal} follows immediately from Theorem~2.11,
\eqref{K2symmetrically} and \eqref{HalfHilbertSchmidt}, for $p=2$.
By \eqref{K2diagonal}, we have $\frac{1}{4}\geq \lambda_j^{-2} \phi_j^2 (x)$,
for $j\in{\mathbb N}$ and $0\leq x\leq 1$.
From this, we immediately obtain the bounds \eqref{PhiBoundedUniformly}.
\par
Suppose now that $x\in [0,1]$.
By expanding the integrand in \eqref{K(x,y)_in_mean} and integrating term by term, we
find (using \eqref{DefEigenfunction}, \eqref{K2symmetrically}
and the orthonormality of $\phi_1,\phi_2,\phi_3,\ldots $ )
that the limit occurring in \eqref{K(x,y)_in_mean}
is $\lim_{H\rightarrow\infty} \bigl( K_2(x,x) - \sum_{h=1}^H \lambda_h^{-2} \phi_h^2 (x)\bigr)$,
which (by \eqref{K2diagonal}) is equal to $0$. This proves \eqref{K(x,y)_in_mean}.
\par
By the `Monotone Convergence Theorem' \cite[Theorem~5.32]{Wheeden and Zygmund 2015},
it follows from the result \eqref{K2diagonal} that one has
\begin{equation*}
\lim_{H\rightarrow\infty} \int_0^1 \biggl(\sum_{h=1}^H \frac{\phi_h^2 (x)}{\lambda_h^2}\biggr) dx
= \int_0^1 K_2 (x,x) dx .
\end{equation*}
By this, combined with both the
fact that $\left\| \phi_h \right\| = 1$ ($h\in{\mathbb N}$)
and the definitions in \eqref{K2symmetrically} and \eqref{HSnorm},
we obtain what is stated in \eqref{K2Trace}.
\end{proof}
\begin{remarks}
{\it 1)} By \eqref{K2Trace}, \eqref{HSnorm} and \eqref{EigenvalueOrder1}, we have:
\begin{equation}\label{SpectrumBound}
\left|\lambda_1\right| = \min\left\{ \left| \lambda_j\right| : j\in{\mathbb N}\right\}
> \| K\|_{\rm HS}^{-1} > 2\;.
\end{equation}
\item{\it 2)} For an alternative proof of \eqref{PhiBoundedUniformly}, simply bound the integral in \eqref{DefEigenfunction} using the Cauchy-Schwarz inequality, \eqref{HalfHilbertSchmidt}
and the relation $\| \phi_j\| = 1$.
\item{\it 3)} Since all the summands occurring in the series
$\sum_{j=1}^{\infty} \lambda_j^{-2} \phi_j^2 (x)$ are non-negative real numbers,
it can be deduced from Theorem~2.11 that, for any constant $\varepsilon\in (0,1)$, the sequence $\bigl( \lambda_j^{-1} \phi_j (x)\bigr)$ converges uniformly (to the limit $0$) for all $x\in [\varepsilon , 1]$.
That is, for each $\varepsilon\in (0,1)$, one has
\begin{equation*}
\left| \phi_j (x)\right| \leq
\frac{\left| \lambda_j\right|}{d_j (\varepsilon)}
\quad\text{($\varepsilon\leq x\leq 1$ and $j\in{\mathbb N}$),}
\end{equation*}
where $( d_j (\varepsilon) )$ is some
unbounded monotonic increasing sequence of positive numbers that depends only on $\varepsilon$
(by \eqref{PhiBoundedUniformly}, one can assume that $d_1(\varepsilon)\geq 2$).
\item{\it 4)} The series
$\lambda_1^{-2} \phi_1^2 (x) + \lambda_2^{-2} \phi_2^2 (x) + \lambda_3^{-2} \phi_3^2 (x) + \ \ldots\ $
is not uniformly convergent on $[0,1]$.
If it were, then the function $K_2$ would be continuous on
$[0,1]\times [0,1]$ (this would follow by virtue of Theorem~2.10, Theorem~2.11 and
the inequalities that are obtained in the penultimate paragraph of the proof of Theorem~2.11).
By Theorem~2.9, however, we know that $K_2$ is certainly not continuous at
the point $(0,0)\in [0,1]\times [0,1]$.
\end{remarks}
\section{Lipschitz conditions}
\bigskip
\begin{lemma}
When $0<x\leq 1$, one has:
\begin{equation*}
\int_x^1
\Biggl| \sum_{m > \frac{1}{y}}
\frac{\widetilde B_2 \left( \frac{my}{x}\right)}{m^2}
\Biggr|
\frac{dy}{y^2} < \frac{2}{3} .
\end{equation*}
\end{lemma}
\begin{proof}
Let $0<x<1$, and define $X := x^{-1}$, so that $X>1$.
Then, by considering the effect of the substitution $y=Y^{-1}$,
we find that the lemma will follow if it can be shown that one has
\begin{equation}\label{proofA1}
b(X) :=
\int_1^X
\Biggl| \sum_{m > Y}
\frac{\widetilde B_2 \left( \frac{mX}{Y}\right)}{m^2}
\Biggr| dY < \frac{2}{3} .
\end{equation}
\par
Supposing now that $n$ is a positive integer satisfying $n<X$, we
put $\nu_n := \min\{ n+1 , X\}$. By applying the Levi theorem for series
\cite[Theorem~10.26]{Apostol 1974}, one can establish that the function
$Y\mapsto \sum_{m > Y} m^{-2} \widetilde B_2 \left( m X Y^{-1}\right)$ is Lebesgue integrable
on the interval $[n,\nu_n)$. We therefore may define
\begin{equation}\label{proofA2}
b_n(X) :=
\int_n^{\nu_n}
\Biggl| \sum_{m > Y}
\frac{\widetilde B_2 \left( \frac{mX}{Y}\right)}{m^2}
\Biggr| dY\;.
\end{equation}
\par
Now, for $Y\in [n,\nu_n)$, it follows by Definitions~2.2 and \cite[Equations~24.8.1]{Olver et al. 2010}
that one has
\begin{equation}\label{proofA3}
\Biggl| \sum_{m > Y}
\frac{\widetilde B_2 \left( \frac{mX}{Y}\right)}{m^2}
\Biggr| =
\Biggl| \sum_{m > Y}
\sum_{h=1}^{\infty} \frac{\cos\left( 2\pi h m X Y^{-1}\right)}{\pi^2 h^2 m^2} \Biggr|
\leq \sum_{h=1}^{\infty} g_h (Y) ,
\end{equation}
where
\begin{equation*}
g_h (Y) := \Biggl|
\sum_{m > Y} \frac{\cos\left( 2\pi h m X Y^{-1}\right)}{ \pi^2 h^2 m^2} \Biggr| =
\Biggl| \sum_{m=n+1}^{\infty}\frac{\cos\left( 2\pi h m X Y^{-1}\right)}{\pi^2 h^2 m^2} \Biggr|
\end{equation*}
(the inequality in \eqref{proofA3} being justified by the fact that the double series
occurring there is absolutely convergent --- so that one may, in particular,
change the original order of summation by summing firstly over $m$).
By \cite[Theorems~10.26 and~10.16]{Apostol 1974}, each member of the sequence $g_1(Y),g_2(Y),g_3(Y),\ldots $
is a function that is Lebesgue integrable on the interval $[n,\nu_n)$.
We have, moreover,
\begin{equation*}
g_h(Y)\geq 0\quad\text{($h\in{\mathbb N}$ and $n\leq Y<\nu_n$)}
\end{equation*}
and
\begin{equation*}
\sum_{h=1}^{\infty} g_h(Y) \leq
\sum_{h=1}^{\infty} \sum_{m=n+1}^{\infty} \frac{1}{\pi^2 h^2 m^2}
<\left( \sum_{k=1}^{\infty} \frac{1}{\pi k^2}\right)^2 < \infty ,
\end{equation*}
and so (bearing in mind also that $[n,\nu_n)$ is a bounded interval) we are
able to conclude that it follows by Lebesgue's dominated convergence theorem
\cite[Theorem~10.28]{Apostol 1974} that the function
$Y\mapsto \sum_{h=1}^{\infty} g_h (Y)$ is Lebesgue integrable on $[n,\nu_n)$,
and that one has
\begin{equation*}
\int_n^{\nu_n} \left( \sum_{h=1}^{\infty} g_h (Y)\right) dY =
\sum_{h=1}^{\infty} \int_n^{\nu_n} g_h (Y) dY .
\end{equation*}
By this, combined with \eqref{proofA2} and \eqref{proofA3}, it follows that one has
\begin{equation*}
b_n(X) \leq \sum_{h=1}^{\infty} \int_n^{\nu_n} \Biggl|
\sum_{m > Y} \frac{\cos\left( 2\pi h m X Y^{-1}\right)}{ \pi^2 h^2 m^2} \Biggr| dY .
\end{equation*}
By summing each side of this last inequality over the finitely many
choices of the integer $n$ that satisfy the condition $n<X$ we find
(upon recalling the definitions \eqref{proofA1} and \eqref{proofA2}) that
\begin{align}\label{proofA4}
b(X) = \sum_{1\leq n<X} b_n(X) &\leq
\sum_{h=1}^{\infty} \sum_{1\leq n<X}\int_n^{\nu_n} \Biggl|
\sum_{m > Y} \frac{\cos\left( 2\pi h m X Y^{-1}\right)}{ \pi^2 h^2 m^2} \Biggr| dY \nonumber\\
&=\sum_{h=1}^{\infty} \int_1^X \Biggl|
\sum_{m > Y} \frac{\cos\left( 2\pi h m X Y^{-1}\right)}{\pi^2 h^2 m^2} \Biggr| dY \nonumber\\
&=
\sum_{h=1}^{\infty} \frac{J(h,X)}{\pi^2 h^2} ,
\end{align}
where, for $h\in{\mathbb N}$, we have:
\begin{equation*}
J(h,X) := \int_1^X \Biggl|
\sum_{m > Y} \frac{\cos\left( 2\pi m h X Y^{-1}\right)}{m^2} \Biggr| dY .
\end{equation*}
\par
Our next objective is an upper bound for the integrand just seen in our
definition of $J(h,X)$. Our proof of this bound utilises a method
well-known to analytic number theorists.
Let $Y>1$ and $t\in{\mathbb R}\backslash{\mathbb Z}$.
We have
\begin{equation}\label{proofA5}
\sum_{m>Y} \frac{\cos(2 \pi m t)}{m^2}
=\int_Y^{\infty} u^{-2} dC(u) ,
\end{equation}
where, for $u\geq 1$, one has
\begin{equation*}
C(u) = \sum_{0<m\leq u} \cos(2 \pi m t)
= {\rm Re}\Biggl( \sum_{m=1}^{\lfloor u\rfloor} e^{2 \pi i m t}\Biggr)
= {\rm Re}\left( \frac{e^{2 \pi i \lfloor u\rfloor t} - 1}{1 - e^{- 2 \pi i t}}\right) ,
\end{equation*}
and so
\begin{equation*}
\left| C(u)\right| \leq \frac{2}{\left| e^{\pi i t} - e^{-\pi i t}\right|}
=\frac{1}{\left| \sin(\pi t)\right|} = \frac{1}{\sin\left( \pi \| t\|\right)}
\leq \frac{1}{2 \| t\|} ,
\end{equation*}
where $\| t\| =\min\{ |t-j| : j\in{\mathbb Z}\}$.
Using integration by parts, we deduce from \eqref{proofA5} and the above upper
bound for $|C(u)|$ that one has
\begin{align*}
\left| \sum_{m>Y} \frac{\cos(2 \pi m t)}{m^2} \right|
&=\left| 2\int_Y^{\infty} u^{-3} C(u) du - Y^{-2} C(Y)\right| \\
&\leq \frac{1}{\| t\|} \left( \int_Y^{\infty} u^{-3} du + {\textstyle\frac{1}{2}} Y^{-2}\right)
= \frac{1}{\| t\| Y^2} .
\end{align*}
It is also (trivially) the case that
\begin{equation*}
\left| \sum_{m>Y} \frac{\cos(2 \pi m t)}{m^2} \right|
\leq \sum_{m>Y} \frac{1}{m^2} < \frac{1}{Y^2} + \int_Y^{\infty} \frac{du}{u^2} < \frac{2}{Y} .
\end{equation*}
The latter bound remains valid for integer values of $t$,
and so (by combining the two bounds just noted) we find that
one has:
\begin{equation*}
\left| \sum_{m>Y} \frac{\cos(2 \pi m t)}{m^2} \right|
\leq \frac{2}{Y \max\left\{ 1 , 2 Y \| t\|\right\}}
\quad\text{($Y>1$, $t\in{\mathbb R}$).}
\end{equation*}
\par
Given our definition of $J(h,X)$, it follows by the upper
bounds just obtained that, for $h\in{\mathbb N}$, one has:
\begin{align}\label{proofA6}
J(h,X) &\leq
2 \int_1^X \left( \max\left\{ 1 , 2 Y \left\| \frac{h X}{Y}\right\|\right\}\right)^{-1} \frac{dY}{Y}\nonumber\\
&= - 2 \int_{hX}^h \left( \max\left\{ 1 , 2 h X t^{-1} \| t \|\right\}\right)^{-1} \frac{dt}{t}\nonumber\\
&= \frac{1}{hX} \int_h^{hX} \frac{dt}{\max\left\{ \frac{t}{2 h X} , \| t \|\right\}} .
\end{align}
For each positive integer $k < hX$, we have:
\begin{align*}
\int_k^{k+1} \frac{dt}{\max\left\{ \frac{t}{2 h X} , \| t \|\right\}}
&\leq \int_{-\frac{1}{2}}^{\frac{1}{2}} \frac{dt}{\max\left\{ \frac{k}{2 h X} , \| t \|\right\}} \\
&= 2\int_0^{\frac{k}{2hX}} \left( \frac{2hX}{k}\right) dt
+ 2\int_{\frac{k}{2hX}}^{\frac{1}{2}} \frac{dt}{t} \\
&= 2 + 2\log\left( \frac{hX}{k}\right) .
\end{align*}
It follows by this and \eqref{proofA6} that, for each $h\in{\mathbb N}$, one has
\begin{align*}
J(h,X) &\leq \frac{2}{hX} \sum_{1\leq k < hX} \left( 1 + \log\left( \frac{hX}{k}\right)\right) \\
&< \frac{2}{hX} \left( hX + \log\left( \frac{hX}{1} \right) + \int_1^{hX} \log\left( \frac{hX}{\kappa}\right) d\kappa \right) \\
&=\frac{2}{hX}\left( hX + hX\log(hX) + (-1) - \left( hX\log(hX) - hX\right)\right) \\
&=\frac{2}{hX}\left( 2hX - 1\right) ,
\end{align*}
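In the middle line of the display just given, the integral has been evaluated via the antiderivative $\kappa\mapsto \kappa\log (hX) - \kappa\log\kappa + \kappa$; we record the computation, which is implicit in the source:

```latex
\begin{equation*}
\int_1^{hX} \log\left( \frac{hX}{\kappa}\right) d\kappa
= \Bigl[ \kappa\log (hX) - \kappa\log\kappa + \kappa \Bigr]_1^{hX}
= hX - \log (hX) - 1 ,
\end{equation*}
```

so that the bracketed quantity becomes $hX + \log (hX) + hX - \log (hX) - 1 = 2hX - 1$, as stated.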
so that $J(h,X) < 4$. By this, \eqref{proofA4} and
Euler's famous evaluation of the sum $\sum_{h=1}^{\infty} h^{-2}$, we obtain the inequality in \eqref{proofA1}.
This completes our proof in respect of cases where $0<x<1$. The remaining case
($x=1$) is trivial.
\end{proof}
\begin{theorem}
For all $j\in{\mathbb N}$, one has
\begin{equation*}
\frac{\left| \phi_j (x)\right|}{x} \leq C_0 \left| \lambda_j \right|^3
\quad\text{($0 < x\leq 1$),}
\end{equation*}
where
\begin{equation}\label{DefC0}
C_0 := \frac{1}{3} + \frac{1}{72 \sqrt{3} e} .
\end{equation}
\end{theorem}
\begin{proof}
Let $j\in{\mathbb N}$ and $0<x\leq 1$.
By \eqref{K2Eigenfunction} and \eqref{PhiBoundedUniformly}, we obtain the bound
\begin{equation}\label{proofB1}
\frac{\left| \phi_j (x)\right|}{x} \leq
{\textstyle\frac{1}{2}} \left| \lambda_j\right|^3
\int_0^1 \frac{\left| K_2 (x,z)\right|}{x} dz .
\end{equation}
By \eqref{K2OverxBound}, the trivial bound $|K_2 (x,z)| < \frac{1}{4} $ ($0\leq z\leq 1$) and
Lemma~3.1, we find that
\begin{align*}
\int_0^1 \frac{\left| K_2 (x,z)\right|}{x} dz
&< \frac{1}{4x} \int_0^x dz \\
&\phantom{{<}} + \int_x^1 \Biggl(
\frac{1}{12} +
\frac{x}{\left( 36 \sqrt{3}\right) z} +
\frac{1}{2 z^2} \Biggl| \sum_{m > \frac{1}{z}}
\frac{\widetilde B_2 \left( \frac{mz}{x}\right)}{m^2}
\Biggr|
\Biggr) dz \\
&< \frac{1}{4} + \frac{1}{12} + \frac{x\log\left( x^{-1}\right)}{36\sqrt{3}} + \frac{1}{3} .
\end{align*}
Here $x\log\left( x^{-1}\right)\leq e^{-1}$ (given that $0<x\leq 1$), and
so the theorem follows directly from the last bound above and \eqref{proofB1}.
\end{proof}
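For later reference we note the approximate size of the constant defined in \eqref{DefC0} (a numerical remark of ours, not in the source):

```latex
% 72\sqrt{3}\,e = 338.99..., so the second summand is about 0.00295:
\begin{equation*}
C_0 = \frac{1}{3} + \frac{1}{72 \sqrt{3}\, e} = 0.33628\ldots ,
\end{equation*}
% whence 1/3 < C_0, 2C_0 < 1 and 4C_0 < 2 < e, facts used in Section 4.
```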
\medskip
\begin{remarks}
Let $j\in{\mathbb N}$. In view of our having $\phi_j (0)=0\,$
(by \eqref{DefEigenfunction} and \eqref{DefK}), Theorem~3.2
shows that the eigenfunction $\phi_j(x)$ satisfies a
right-handed Lipschitz condition of order $1$ at the point $x=0$.
That is, one has
\begin{equation*}
\left| \phi_j (x) - \phi_j (0)\right| \leq M^*_j x\qquad\text{($0\leq x\leq 1$)} ,
\end{equation*}
with $M^*_j := C_0 |\lambda_j|^3$ independent of $x$.
\end{remarks}
\medskip
\begin{lemma}
When $0<a,b\leq 1$, one has:
\begin{equation}\label{Delta1Bound}
0\leq \Delta_1(a,b) :=
\int_0^1 \left| K(a,z) - K(b,z)\right| z dz \leq
4\left| \frac{1}{b} - \frac{1}{a}\right|\;.
\end{equation}
\end{lemma}
\begin{proof}
Since the upper bound in
\eqref{Delta1Bound} is invariant under the permutation $(a,b)\mapsto (b,a)$,
we may suppose that $0<b\leq a\leq 1$.
Given \eqref{DefK} and Definitions~2.5, it is trivially the case that we have
$0\leq \Delta_1 (a,b) \leq \int_0^1 z dz = \frac{1}{2}$.
Thus the bound
\eqref{Delta1Bound} certainly holds if $\frac{1}{b} - \frac{1}{a}\geq \frac{1}{2}$.
We may therefore assume henceforth that
\begin{equation}\label{deltaSmall}
0 \leq \delta := \frac{1}{b} - \frac{1}{a} < \frac{1}{2} .
\end{equation}
\par
By Definitions~2.5, \eqref{DefK} and \eqref{deltaSmall},
we find (using the triangle inequality) that
\begin{align*}
\Delta_1 (a,b) &= \int_0^1 \left| \left\{ \frac{1}{bz}\right\}
- \left\{ \frac{1}{az}\right\} \right| z dz \\
&= \int_0^1 \left|
\left( \frac{1}{bz} - \frac{1}{az}\right) +
\left(\left\lfloor \frac{1}{az}\right\rfloor
- \left\lfloor \frac{1}{bz}\right\rfloor\right) \right| z dz \\
&\leq
\int_0^1 \left( \left(\frac{1}{bz} - \frac{1}{az} \right) +
\sum_{\frac{1}{az} < n \leq \frac{1}{bz}} 1\right) z dz \\
&= \delta + \sum_{n > \frac{1}{a}}\int_{\frac{1}{an}}^{\min\{ \frac{1}{bn} , 1\}} z dz \\
&= \delta +
{\textstyle\frac{1}{2}} \sum_{\frac{1}{a} < n \leq \frac{1}{b}} \left( 1 - \frac{1}{a^2 n^2} \right)
+ {\textstyle\frac{1}{2}}\sum_{n > \frac{1}{b}} \left( \frac{1}{b^2 n^2} - \frac{1}{a^2 n^2} \right) \\
&\leq \delta + {\textstyle\frac{1}{2}} \left( 1 - \frac{b^2}{a^2} \right) \sum_{\frac{1}{a} < n \leq \frac{1}{b}} 1 +
{\textstyle\frac{1}{2}} \left( \frac{1}{b^2} - \frac{1}{a^2}\right) \sum_{n > \frac{1}{b}} \frac{1}{n^2} .
\end{align*}
This, together with \eqref{deltaSmall} and our assumption that $0<b\leq a\leq 1$ (note that the interval $\left( \frac{1}{a} , \frac{1}{b}\right]$ has length $\delta < 1$, and so contains at most one integer), yields:
\begin{align*}
\Delta_1 (a,b) &\leq \delta + {\textstyle\frac{1}{2}}
\left( 1 + \frac{b}{a} \right)\left( 1 - \frac{b}{a} \right) \cdot (1) +
{\textstyle\frac{1}{2}} \left( \frac{1}{b} + \frac{1}{a}\right) \delta \cdot \left( b^2 + b\right) \\
& \leq \delta + {\textstyle\frac{1}{2}} \left( 2 \right) b \delta +
{\textstyle\frac{1}{2}} \left( \frac{2}{b} \right) \delta \cdot \left( 2 b\right)
\leq 4\delta ,
\end{align*}
which is \eqref{Delta1Bound}.
\end{proof}
\begin{lemma}
Let $C_0$ be the constant defined in \eqref{DefC0}. Let $j\in{\mathbb N}$.
Then
\begin{equation*}
\left| \phi_j(x) - \phi_j(y)\right| \leq
4 C_0 \lambda_j^4 \cdot \left| \frac{1}{x} - \frac{1}{y}\right|
\quad\text{($0<x,y\leq 1$).}
\end{equation*}
\end{lemma}
\begin{proof}
Let $0<x,y\leq 1$. By \eqref{DefEigenfunction} and Theorem~3.2, we find that
\begin{align*}
\left| \phi_j(x) - \phi_j(y)\right| &\leq
\left| \lambda_j \int_0^1 \left( K(x,z) - K(y,z)\right)
z \cdot \left(\frac{\phi_j (z)}{z}\right) dz\right| \\
&\leq C_0 \lambda_j^4 \Delta_1 (x,y) ,
\end{align*}
where $\Delta_1 (x,y)$ is as described in Definitions~2.5.
By this, together with the upper bound \eqref{Delta1Bound} for $\Delta_1 (x,y)$,
the lemma follows.
\end{proof}
\begin{remarks}{\it 1)} Let $j\in{\mathbb N}$.
Then, by Lemma~3.4, the eigenfunction $\phi_j(x)$ satisfies
a uniform Lipschitz condition of order $1$ on each closed interval $[a,b]\subset (0,1]$.
In particular, for all $\varepsilon > 0$,
there is some $M_{j,\varepsilon} <\infty$ such that
\begin{equation*}
\left| \phi_j(x) - \phi_j(y)\right|\leq M_{j,\varepsilon} |x-y|
\qquad\text{for all $x,y\in [\varepsilon , 1]$}
\end{equation*}
(Lemma~3.4 implies that this holds with $M_{j,\varepsilon} := 4 C_0 \lambda_j^4 \varepsilon^{-2}$).
It follows that the function $\phi_j(x)$ is absolutely continuous
on each closed interval $[a,b]\subset (0,1]$, and so is of
bounded variation on any such interval (this last fact may also be deduced
directly from Lemma~3.4).
\par{\it 2)} We will later improve upon Lemma~3.4: see Corollary~4.10.
\end{remarks}
\section{The first derivative}
Let $j\in{\mathbb N}$, and put $\lambda = \lambda_j$ and $\phi (x) = \phi_j (x)$ ($0\leq x\leq 1$).
We recall (see our Remarks following Lemma~3.4)
that $\phi(x)$ is of bounded variation
(and is, moreover, absolutely continuous) on any closed interval $[a,b]\subset (0,1]$.
\par
It is well-known (see \cite[Sections~11.3--11.42]{Titchmarsh 1939}, for example)
that any function that is of bounded variation
on some interval $X$ must be differentiable {\it almost everywhere}
(with respect to the Lebesgue measure) in that same interval.
If the function in question is absolutely continuous on $X$, and if $X$ is compact,
then the derivative of the function is Lebesgue integrable on $X$ (even if
the set of points at which that derivative is defined is a proper subset of $X$)
and the function is (on $X$) a Lebesgue indefinite integral of its derivative:
for proof of this see \cite[Sections~11.4, 11.54, 11.7 and~11.71]{Titchmarsh 1939}.
By applying these observations to our eigenfunction $\phi(x)$,
we deduce from what was noted in the preceding paragraph
that $\phi$ is differentiable almost everywhere in $[0,1] =
\{ 0\}\cup\left( \cup_{n\in{\mathbb N}} \left[ n^{-1} , 1\right]\right)$,
that the derivative $\phi'(x)$ is Lebesgue integrable
on any closed interval $[a,b]\subset (0,1]$,
and that
\begin{equation}\label{IndefiniteIntegral}
\phi(1) - \phi(x)=\int_x^1 \phi'(y) \,dy\qquad\hbox{($0<x\leq 1$).}
\end{equation}
By this and Theorem~2.10, one has:
\begin{equation}\label{phi(1)asIntegral}
\lim_{x\rightarrow 0+}\int_x^1 \phi'(y) \,dy = \phi(1) - \phi(0) = \phi(1)\;.
\end{equation}
For more specific information about $\phi'(x)$ we need the following result.
\begin{theorem}
The function $x\mapsto x^{-1} \phi(x)$ is Lebesgue integrable on $[0,1]$,
and so
\begin{equation}\label{DefPhi1}
{\mathbb R}\ni\int_0^1 \frac{\phi(y) dy}{y} = \Phi_1 \quad\text{(say).}
\end{equation}
When $0 < x < 1$ and $\frac{1}{x}$ is not an integer, one has
\begin{equation}\label{DerivativeFormula1}
\lambda^{-1} x^2 \phi'(x) = \Phi_1 - \sum_{m>\frac{1}{x}} \frac{\phi\left(\frac{1}{mx}\right)}{m} \in{\mathbb R}.
\end{equation}
For $n\in{\mathbb N}$, the derivative $\phi'(x)$ is a continuous function on
the interval $\left( (n+1)^{-1} , n^{-1}\right)$, and one has both
\begin{equation}\label{LefthandDphi}
\lim_{x\rightarrow \frac{1}{n} -} \phi'(x)
= \lambda n^2 \left( \Phi_1 - \sum_{m=n+1}^{\infty} \frac{\phi\left(\frac{n}{m}\right)}{m}\right)\in{\mathbb R}
\end{equation}
and
\begin{equation}\label{RighthandDphi}
\lim_{x\rightarrow \frac{1}{n+1} +} \phi'(x)
= \lambda (n+1)^2 \left( \Phi_1 - \sum_{m=n+1}^{\infty} \frac{\phi\left(\frac{n+1}{m}\right)}{m}\right)\in{\mathbb R} .
\end{equation}
\end{theorem}
\begin{proof}
For $n\in{\mathbb N}$ and $0\leq x\leq 1$ we put
\begin{equation*}
f_n(x) = \begin{cases}
x^{-1} \phi(x) & \text{if $x\geq n^{-1}$} , \\
0 & \text{otherwise} .
\end{cases}
\end{equation*}
Since $\phi$ is a measurable function on $[0,1]$,
it follows that $f_1,f_2,f_3,\ldots $ is a sequence of measurable functions on $[0,1]$.
Theorem~3.2 implies that this sequence of functions is uniformly bounded.
Given these facts, and given that the equality
$\lim_{n\rightarrow\infty} f_n(x) = x^{-1} \phi(x)$
holds almost everywhere on $[0,1]$ (everywhere except at $x=0$, in fact),
it therefore follows by Lebesgue's theorem of bounded convergence
\cite[Section~10.5]{Titchmarsh 1939} that one has what is
stated in the first part of the theorem (i.e. up to and including \eqref{DefPhi1}).
\par
To complete this proof we shall show that \eqref{DerivativeFormula1} holds whenever
$x$ satisfies the attached conditions. Those conditions imply that, for
some positive integer $n$, one has
\begin{equation}\label{ProofC1}
\frac{1}{n+1} < x < \frac{1}{n} .
\end{equation}
Thus it will be enough to show that, when $n\in{\mathbb N}$,
one has \eqref{DerivativeFormula1} for all $x$ satisfying \eqref{ProofC1}.
\par
Let $n\in{\mathbb N}$. Then it follows from \eqref{DefEigenfunction} and \eqref{DefK} that,
for $x$ satisfying \eqref{ProofC1} and $H\in{\mathbb N}$, one has:
\begin{align*}
\lambda^{-1}\phi(x)
&= \int_0^{\frac{1}{(n+H+1)x}} K(x,y) \phi(y) dy +
\int_{\frac{1}{(n+1)x}}^1 \left( \frac{1}{2} + n - \frac{1}{xy} \right) \phi(y) dy \\
&\phantom{{=}} + \sum_{h=1}^H \int_{\frac{1}{(n+h+1)x}}^{\frac{1}{(n+h)x}}
\left( \frac{1}{2} + n + h - \frac{1}{xy} \right) \phi(y) dy\\
&= r_H(x) + u_0(x) + \sum_{h=1}^H u_h(x) \quad\text{(say).}
\end{align*}
By \eqref{DefK} and Theorem~3.2, the above term $r_H(x)$ satisfies
\begin{equation*}
\left| r_H(x)\right| \leq \int_0^{\frac{1}{(n+H+1)x}} \left| K(x,y) \phi(y)\right| dy
\leq {\textstyle\frac{1}{2}} C_0 |\lambda|^3 \int_0^{\frac{1}{(n+H+1)x}} y dy
< \frac{C_0 |\lambda|^3}{x^2 H^2} .
\end{equation*}
Thus $r_H(x)\rightarrow 0$ as $H\rightarrow\infty$, so that we have
\begin{equation}\label{ProofC2}
\lambda^{-1}\phi(x) =
u_0(x) + \sum_{h=1}^{\infty} u_h(x) ,
\quad\text{when $x$ satisfies \eqref{ProofC1}.}
\end{equation}
\par
We now contemplate term-by-term differentiation of the right-hand side
of Equation~\eqref{ProofC2}, on the assumption that $x$ satisfies \eqref{ProofC1}.
But first let us define functions $v_0(x),v_1(x),v_2(x),\ldots $
on the closed interval $\left[ (n+1)^{-1} , n^{-1}\right]$, by specifying that
\begin{equation*}
x^2 v_h(x) =
\begin{cases}
{\displaystyle\int_{\frac{1}{(n+1)x}}^1 \frac{\phi(y) dy}{y}
- \frac{\phi\left(\frac{1}{(n+1)x}\right)}{2(n+1)}}
& \text{if $h=0$} , \\
{\displaystyle\int_{\frac{1}{(n+h+1)x}}^{\frac{1}{(n+h)x}} \frac{\phi(y) dy}{y}
- \frac{\phi\left(\frac{1}{(n+h+1)x}\right)}{2(n+h+1)} -
\frac{\phi\left(\frac{1}{(n+h)x}\right)}{2(n+h)}}
& \text{if $h\in{\mathbb N}$}
\end{cases}
\end{equation*}
(note the function $x\mapsto x^{-1} \phi(x)$ is integrable on $[0,1]$,
and so is also integrable on all of the ranges of integration occurring here,
since these ranges are subintervals of $[0,1]$ whenever $x\geq (n+1)^{-1}$).
Using the part of the theorem that was already proved, we deduce that, when
$H\in{\mathbb N}$ and $(n+1)^{-1}\leq x\leq n^{-1}$, one has:
\begin{equation}\label{ProofC3}
\int_0^{\frac{1}{(n+H+1)x}} \frac{\phi(y) dy}{y} +
\frac{\phi\left(\frac{1}{(n+H+1)x}\right)}{2(n+H+1)} +
x^2\sum_{h=0}^H v_h (x) =
\Phi_1 - \sum_{h=1}^H \frac{\phi\left(\frac{1}{(n+h)x}\right)}{n+h} .
\end{equation}
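The identity \eqref{ProofC3} is obtained by summing the defining relations for $x^2 v_0(x),\ldots ,x^2 v_H(x)$: the integrals concatenate, while the evaluated terms with index $n+h$ lying strictly between $n$ and $n+H+1$ pair up into full (rather than halved) terms. In summary (our restatement of the telescoping, not displayed in the source):

```latex
\begin{equation*}
x^2\sum_{h=0}^{H} v_h(x)
= \int_{\frac{1}{(n+H+1)x}}^{1} \frac{\phi(y)\,dy}{y}
- \sum_{h=1}^{H} \frac{\phi\left( \frac{1}{(n+h)x}\right)}{n+h}
- \frac{\phi\left( \frac{1}{(n+H+1)x}\right)}{2(n+H+1)} ,
\end{equation*}
```

and adding to this the first two terms on the left-hand side of \eqref{ProofC3}, then using \eqref{DefPhi1}, yields the right-hand side of \eqref{ProofC3}.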
\par
Since $\phi(x)$ is continuous on $(0,1]$, we find that the function
$v_0(x)$, and each function in the sequence $v_1(x),v_2(x),v_3(x),\ldots $,
is continuous on the closed interval $\left[ (n+1)^{-1} , n^{-1}\right]$.
By Theorem~3.2, we find also that, when $h\in{\mathbb N}$ and
$(n+1)^{-1}\leq x\leq n^{-1}$, one has
\begin{align*}
\left| v_h (x)\right|
&\leq \frac{1}{x^2}\int_{\frac{1}{(n+h+1)x}}^{\frac{1}{(n+h)x}} C_0 |\lambda|^3 dy
+ \frac{C_0 |\lambda|^3}{2(n+h+1)^2 x^3} + \frac{C_0 |\lambda|^3}{2(n+h)^2 x^3} \\
&= {\textstyle\frac{1}{2}} C_0 \left( \frac{|\lambda|}{x}\right)^3 \left( \frac{1}{n+h+1} + \frac{1}{n+h}\right)^2
< \frac{2 C_0 |\lambda|^3 (n+1)^3}{h^2} .
\end{align*}
Thus application of the Weierstrass $M$-test \cite[Theorem~9.6]{Apostol 1974}
shows that the series $v_1(x) + v_2(x) + v_3(x) + \ldots $ is uniformly convergent
on the interval $\left[ (n+1)^{-1} , n^{-1}\right]$. Therefore,
given that each of $v_1(x),v_2(x),v_3(x),\ldots $ (and $v_0(x)$ also)
is continuous on $\left[ (n+1)^{-1} , n^{-1}\right]$, it follows that we have
\begin{equation}\label{ProofC4}
\sum_{h=0}^{\infty} v_h (x) = v_0 (x) + \sum_{h=1}^{\infty} v_h (x) = g(x)
\quad\text{for all $x\in\left[\frac{1}{n+1} , \frac{1}{n}\right]$,}
\end{equation}
where $g(x)$ is some continuous real-valued function on $\left[ (n+1)^{-1} , n^{-1}\right]$.
\par
We observe now that, by Theorem~3.2, the sum of the first two terms on
the left-hand side of Equation~\eqref{ProofC3} is a number $\rho_H(x)$
that satisfies
\begin{equation*}
\left|\rho_H (x)\right| \leq \left( \frac{C_0 |\lambda|^3}{(n+H+1)x}\right)\left( 1 + \frac{1}{2(n+H+1)}\right) .
\end{equation*}
In particular, for each fixed $x\in\left[ (n+1)^{-1} , n^{-1}\right]$,
we have $\rho_H (x) \rightarrow 0$ as $H\rightarrow\infty$.
This, together with \eqref{ProofC3} and \eqref{ProofC4}, enables
us to deduce that, for $(n+1)^{-1}\leq x\leq n^{-1}$, one has
\begin{equation}\label{ProofC5}
\sum_{m=n+1}^{\infty} \frac{\phi\left(\frac{1}{m x}\right)}{m} =
\Phi_1 - x^2\sum_{h=0}^{\infty} v_h (x) = \Phi_1 - x^2 g(x)\in{\mathbb R} .
\end{equation}
\par
Assuming that \eqref{ProofC1} holds, it follows
by \eqref{DefK}, Theorem~2.10 and elementary calculus that one has
\begin{align}\label{ProofC6}
u_0'(x)
&= \frac{d}{dx}\int_{\frac{1}{(n+1)x}}^1 \left( \frac{1}{2} +
n - \frac{1}{xy} \right) \phi(y) dy \nonumber\\
&= \int_{\frac{1}{(n+1)x}}^1
\frac{\partial}{\partial z}\left( \left( \frac{1}{2} + n - \frac{1}{zy} \right) \phi(y)\right)
\biggr|_{z=x} dy \nonumber\\
&\phantom{{=}} - \left( \frac{1}{2} + n - \frac{1}{xy} \right) \phi(y)\biggr|_{y=\frac{1}{(n+1)x}}
\cdot\frac{d}{dx}\left( \frac{1}{(n+1)x}\right) \nonumber\\
&=\frac{1}{x^2} \int_{\frac{1}{(n+1)x}}^1 \frac{\phi(y) dy}{y}
+ {\textstyle\frac{1}{2}} \phi\left(\frac{1}{(n+1)x}\right) \cdot \frac{(-1)}{(n+1) x^2} \nonumber\\
&= v_0(x) .
\end{align}
Similarly, for $h\in{\mathbb N}$, we find (subject to \eqref{ProofC1} holding) that
\begin{align}\label{ProofC7}
u_h'(x) &= \frac{d}{dx} \int_{\frac{1}{(n+h+1)x}}^{\frac{1}{(n+h)x}}
\left( \frac{1}{2} + n + h - \frac{1}{xy} \right) \phi(y) dy \nonumber\\
&= \frac{1}{x^2} \int_{\frac{1}{(n+h+1)x}}^{\frac{1}{(n+h)x}} \frac{\phi(y) dy}{y}
+ {\textstyle\frac{1}{2}} \phi\left(\frac{1}{(n+h+1)x}\right) \cdot \frac{(-1)}{(n+h+1) x^2} \nonumber\\
&\phantom{{=}} + {\textstyle\frac{1}{2}} \phi\left(\frac{1}{(n+h)x}\right) \cdot \frac{(-1)}{(n+h) x^2} \nonumber\\
&= v_h (x) .
\end{align}
\par
In preparation for the next steps, we now recall and process
certain pertinent facts
that have already been established.
\par
We have seen that the functions
$u_0(x),u_1(x),u_2(x),\ldots $ (defined, implicitly, a few lines above \eqref{ProofC2})
are real-valued, and are defined on the interval $\left( (n+1)^{-1} , n^{-1}\right)$.
We found that, at all points $x$ of the same open interval,
the series $u_0(x) + u_1(x) + u_2(x) + \ldots $ is convergent
and the derivatives $u_0'(x),u_1'(x),u_2'(x),\ldots $ exist and are finite
(their values were computed in \eqref{ProofC6} and \eqref{ProofC7}).
Moreover, since the series $v_1(x) + v_2(x) + v_3(x) + \ldots $ was
found to be uniformly convergent on $\left[ (n+1)^{-1} , n^{-1}\right]$,
and since we have (by \eqref{ProofC7}) $u_h'(x) = v_h(x)$
whenever $(n+1)^{-1} < x < n^{-1}$ and $h\in{\mathbb N}$, we may make the (trivial) deductions
that the series $u_1'(x) + u_2'(x) + u_3'(x) + \ldots $ is uniformly convergent on
$\left( (n+1)^{-1} , n^{-1}\right)$, and that the same may therefore be said of
the series $u_0'(x) + u_1'(x) + u_2'(x) + \ldots $.
\par
Given the fact just noted (in the last paragraph), it follows by
\cite[Theorem~9.14]{Apostol 1974} that the function $x\mapsto \sum_{h=0}^{\infty} u_h(x)$
is differentiable at all points of the interval $\left( (n+1)^{-1} , n^{-1}\right)$,
and that one has:
\begin{equation}\label{ProofC8}
\frac{d}{dx} \sum_{h=0}^{\infty} u_h (x) =
\sum_{h=0}^{\infty} u_h' (x)
\quad\text{when $x$ satisfies \eqref{ProofC1}} .
\end{equation}
Subject to \eqref{ProofC1} holding, it follows by \eqref{ProofC2}, \eqref{ProofC8},
\eqref{ProofC6}, \eqref{ProofC7} and \eqref{ProofC4} that $\phi'(x)$ exists, and that one has
\begin{equation}\label{ProofC9}
\lambda^{-1} \phi'(x) =
\sum_{h=0}^{\infty} u_h'(x) =
\sum_{h=0}^{\infty} v_h(x) = g(x) .
\end{equation}
We recall that the function $g(x)$ was shown to be continuous on
the closed interval $\left[ (n+1)^{-1} , n^{-1}\right]$.
Thus it is a corollary of \eqref{ProofC9} that the derivative $\phi'(x)$
is a continuous function on $\left( (n+1)^{-1} , n^{-1}\right)$, and that one has:
\begin{equation}\label{ProofC10}
\lim_{x\rightarrow \frac{1}{n} -} \phi'(x) = \lambda g\left(\frac{1}{n}\right)
\quad\text{and}\quad
\lim_{x\rightarrow \frac{1}{n+1} +} \phi'(x) = \lambda g\left(\frac{1}{n+1}\right) .
\end{equation}
With the help of \eqref{ProofC5}, we deduce from \eqref{ProofC10} and \eqref{ProofC9}
what is stated in \eqref{LefthandDphi} and \eqref{RighthandDphi}, and also
the cases of \eqref{DerivativeFormula1} in which $x$ satisfies \eqref{ProofC1}.
This (as explained earlier) completes our proof of the theorem.
\end{proof}
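As an independent consistency check (ours, not part of the proof), formally putting $x=\frac{1}{n}$ in \eqref{DerivativeFormula1}, so that $\frac{1}{mx}=\frac{n}{m}$, gives

```latex
\begin{equation*}
\lambda^{-1} n^{-2} \phi'\left( \frac{1}{n}\right)
= \Phi_1 - \sum_{m>n} \frac{\phi\left( \frac{n}{m}\right)}{m}
= \Phi_1 - \sum_{m=n+1}^{\infty} \frac{\phi\left( \frac{n}{m}\right)}{m} ,
\end{equation*}
```

which agrees with the one-sided limit \eqref{LefthandDphi} (recall that \eqref{DerivativeFormula1} itself excludes integer values of $\frac{1}{x}$, so the substitution is formal only).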
\begin{corollary}
When $n\in{\mathbb N}$, the restriction of $\phi(x)$ to the
closed interval $\left[ (n+1)^{-1} , n^{-1}\right]$ is continuously differentiable.
\end{corollary}
\begin{proof}
Let $n\in{\mathbb N}$, $a=\frac{1}{n+1}$ and $b=\frac{1}{n}$.
Let $\rho(x)$ be the restriction of $\phi(x)$ to the interval $[a,b]$.
\par
Suppose, firstly, that $a<y\leq b$. Then one has
$\frac{\rho(y) -\rho(a)}{y-a} = \frac{\phi(y)-\phi(a)}{y-a}$,
and so, since $\phi(x)$ is continuous on $[0,1]\supset [a,b] \supseteq [a,y]$,
and is differentiable on $(a,b)\supseteq (a,y)$,
it follows by the mean value theorem of differential calculus that,
for some $c\in (a,y)$, one has:
\begin{equation}\label{ProofD1}
\frac{\rho(y) -\rho(a)}{y-a} = \phi'(c) .
\end{equation}
Since we have here $a<c<y$, it follows that $c\rightarrow a+$ as $y\rightarrow a+$,
and so it may be deduced from \eqref{ProofD1} and \eqref{RighthandDphi} that one has:
\begin{equation}\label{ProofD2}
\rho'(a) := \lim_{y\rightarrow a+} \frac{\rho(y) -\rho(a)}{y-a} =
\lim_{c\rightarrow a+} \phi'(c) \in{\mathbb R} .
\end{equation}
Using instead \eqref{LefthandDphi}, one can show (similarly) that
\begin{equation}\label{ProofD3}
\rho'(b) := \lim_{y\rightarrow b-} \frac{\rho(b) -\rho(y)}{b-y} =
\lim_{c\rightarrow b-} \phi'(c) \in{\mathbb R} .
\end{equation}
When $a<z<b$, one has
\begin{equation}\label{ProofD4}
\frac{\rho(y) -\rho(z)}{y-z} = \frac{\phi(y)-\phi(z)}{y-z}
\quad \text{for all $y\in [a,z)\cup (z,b]$, }
\end{equation}
and so (given that $\phi'(z)$ exists and is finite, by virtue of $\phi'(x)$ being
continuous on $(a,b)$) one finds, by taking the limit as $y\rightarrow z$ of
both sides of \eqref{ProofD4}, that $\rho'(z) = \phi'(z)\in{\mathbb R}$ for
$a<z<b$. Thus $\rho'(x)$ is continuous on $(a,b)$ (since $\phi'(x)$ is),
and $\rho'(c)$ may be substituted for $\phi'(c)$ in both
\eqref{ProofD2} and \eqref{ProofD3}, so enabling us to conclude that
$\rho'(x)$ is also continuous at the points $x=a$ and $x=b$.
The derivative $\rho'(x)$ is therefore continuous on $[a,b]$.
\end{proof}
\begin{corollary}
The function $\phi(x)$ is continuously differentiable on $\bigl(\frac{1}{2} , 1\bigr]$. One has
\begin{equation}\label{DphiAt1}
{\mathbb R}\ni\phi'(1) = \lambda \Phi_1 - \lambda\sum_{m=2}^{\infty} \frac{\phi\left(\frac{1}{m}\right)}{m} ,
\end{equation}
and also:
\begin{equation}\label{DphiJump}
\phi_{+}'\left( \frac{1}{n}\right) - \phi_{-}'\left( \frac{1}{n}\right) =
-\lambda \phi(1) n
\quad\text{for $n=2,3,4,\ldots $ ,}
\end{equation}
where $\phi_{+}'(x)$ and $\phi_{-}'(x)$ are, respectively, the righthand and lefthand derivatives of $\phi(x)$
(so that $\phi_{\pm}'(x) := \lim_{y\rightarrow x\pm} \frac{\phi(y) - \phi(x)}{y-x}$).
\par
If $\phi(1) \neq 0$ then $\left\{ \frac{1}{2} , \frac{1}{3} , \frac{1}{4} , \ldots \right\}$ is the set of points of the interval $(0,1]$ at which $\phi(x)$ is not differentiable.
\par
If $\phi(1)=0$ then $\phi(x)$ is continuously differentiable on $(0,1]$, and
\eqref{DerivativeFormula1} holds for all $x\in (0,1]$.
\end{corollary}
\begin{proof}
Since the domain of $\phi(x)$ contains no number greater than $1$, it follows from the case $n=1$ of the preceding corollary that one has ${\mathbb R}\ni \phi'(1) = \lim_{x\rightarrow 1-}\phi'(x)$, so that $\phi'(x)$ is continuous at the point $x=1$. By this, together with the case $n=1$ of \eqref{LefthandDphi}, one obtains the result \eqref{DphiAt1}. Since we know (by Theorem~4.1) that $\phi'(x)$ is continuous on $\left(\frac{1}{2}, 1\right)$,
and have just found $\phi'(x)$ to be continuous at $x=1$,
it therefore follows (trivially) that $\phi(x)$ is continuously differentiable on $\bigl(\frac{1}{2}, 1\bigr]$.
\par
By Corollary~4.2 again (not only in the form stated, but also with $n-1$ substituted for $n$) we find that,
for either (consistent) choice of sign ($\pm$), one has:
\begin{equation}\label{ProofE1}
\phi_{\pm}'\left( \frac{1}{n}\right) = \lim_{x\rightarrow\frac{1}{n}\pm} \phi'(x)
\quad\text{for $n=2,3,4,\ldots $ . }
\end{equation}
The combination of \eqref{ProofE1}, \eqref{RighthandDphi} (with $n-1$ substituted for $n$) and \eqref{LefthandDphi}, yields (immediately) the result stated in \eqref{DphiJump}.
\par
Theorem~4.1 tells us that $\phi(x)$ is differentiable on each one of the open intervals
$\left(\frac12 ,1\right) , \left(\frac13 ,\frac12\right) , \left(\frac14 ,\frac13\right) ,\,\ldots$~, and so
(recalling \eqref{DphiAt1}) we may conclude that
the set $\left\{ \frac12 , \frac13 , \frac14 ,\,\ldots\,\right\}$ contains all points of the interval $(0,1]$ at which $\phi(x)$ is not differentiable.
If $\phi(1)\neq 0$ then, by \eqref{DphiJump}, it follows that, for $n=2,3,4,\ldots $ , we have
$\phi_{+}'(1/n) \neq \phi_{-}'(1/n)$.
Thus $\frac{1}{2}, \frac{1}{3}, \frac{1}{4}, \ldots $ are (in this case) points at which $\phi(x)$ is not differentiable.
\par
Suppose instead that $\phi(1)=0$. Then \eqref{DphiJump} gives $\phi_{+}'(1/n) = \phi_{-}'(1/n)$, for $n=2,3,4,\ldots $.
Thus $\phi(x)$ is (in the case being considered) differentiable at
every point of the set $\left\{ \frac12 , \frac13 , \frac14 ,\,\ldots\,\right\}$.
By this, combined with the first of our conclusions in the preceding paragraph,
it follows that $\phi(x)$ is differentiable on $(0,1]$. By this and Corollary~4.2,
one may deduce that, for each $n\in{\mathbb N}$, the restriction of $\phi'(x)$ to
the interval $\left[(n+1)^{-1} , n^{-1}\right]$ is continuous on that same interval.
Therefore, given that each point in the sequence $\frac{1}{2}, \frac{1}{3}, \frac{1}{4}, \ldots\,$
is a left hand boundary point of one of the intervals in the sequence
$\left[\frac12 , 1\right] , \left[\frac13 , \frac12\right] , \left[\frac14 , \frac13\right] , \,\ldots $~, and is
(at the same time) a right hand boundary point of another interval from the same sequence, we may conclude that the continuity of the restrictions of $\phi'(x)$ to each of those intervals implies the continuity of $\phi'(x)$ at each point in the sequence $\frac{1}{2}, \frac{1}{3}, \frac{1}{4}, \,\ldots $~. By this and the relevant result stated in Theorem~4.1, we find that $\phi'(x)$ is continuous on $(0,1)$. We showed (above) that, regardless of whether or not $\phi(1)=0$, the function $\phi'(x)$ is continuous at $x=1$. Thus we may now conclude that $\phi'(x)$ is continuous on $(0,1)\cup\{ 1\} = (0,1]$, provided that $\phi(1)$ equals $0$; moreover $\phi'(x)$ is then continuous at each point in the sequence $1, \frac{1}{2}, \frac{1}{3},\,\ldots $~, and so it follows by \eqref{LefthandDphi} that one has \eqref{DerivativeFormula1} for all values of $x$ in that sequence; we also know (from Theorem~4.1) that \eqref{DerivativeFormula1} holds at all points of the interval $(0,1]$ that are not terms of the sequence just mentioned: we conclude that, if $\phi(1)$ equals $0$, then \eqref{DerivativeFormula1} holds for all $x\in(0,1]$.
\end{proof}
\begin{lemma} The definite integral $\Phi_1$ that is defined in \eqref{DefPhi1} satisfies
\begin{equation*}
\left| \Phi_1\right| < {\textstyle\frac{3}{2}} | \lambda | .
\end{equation*}
\end{lemma}
\begin{proof}
Let $C_0$ be the constant defined in \eqref{DefC0}, and put
$\Delta := (2 C_0)^{-2/3} \lambda^{-2}$.
Then, since $C_0 > \frac{1}{3}$, it
follows by \eqref{SpectrumBound} that we have $2C_0 |\lambda |^3 > \frac{16}{3}$,
and so $0 < \Delta < \left( \frac{3}{16}\right)^{2/3} < 1$. Therefore, with the
help of the Cauchy-Schwarz inequality, we obtain:
\begin{align*}
\left| \Phi_1\right| = \biggl| \int_0^1 \frac{\phi(y) dy}{y} \biggr|
&\leq \int_0^{\Delta} \frac{|\phi(y)| dy}{y} +
\left( \int_{\Delta}^1 \frac{dy}{y^2}\right)^{1/2} \| \phi \| \\
&= \int_0^{\Delta} \frac{|\phi(y)| dy}{y} + \left( \frac{1}{\Delta} - 1\right)^{1/2} \cdot 1 .
\end{align*}
We use Theorem~3.2 to bound the remaining integral, and so find that
$\left| \Phi_1\right| < \Delta C_0 |\lambda|^3 + \Delta^{-1/2}
= \left( 2^{-2/3} + 2^{1/3}\right) C_0^{1/3} |\lambda| =
\frac{3}{2} (2 C_0)^{1/3} |\lambda| < \frac{3}{2} |\lambda|$,
where the final inequality holds because $2 C_0 < 1$.
\end{proof}
\begin{lemma}
For $0<x\leq 1$, one has
\begin{equation*}
\Biggl| \sum_{m>\frac{1}{x}} \frac{\phi\left(\frac{1}{mx}\right)}{m} \Biggr| <
\left( {\textstyle\frac{3}{2}} + \log |\lambda |\right) |\lambda | .
\end{equation*}
\end{lemma}
\begin{proof}
We begin similarly to the proof of Lemma~4.4, but now put instead
$\Delta := (4C_0)^{-1} \lambda^{-2}$, so that
$0<\Delta < \frac{3}{4} \lambda^{-2} \leq \frac{3}{16} < 1$.
Let $0<x\leq 1$. By \eqref{PhiBoundedUniformly} and Theorem~3.2, one has:
\begin{align*}
\sum_{m>\frac{1}{x}} \frac{\left| \phi\left(\frac{1}{mx}\right)\right| }{m}
&\leq \sum_{\frac{1}{x} < m \leq \frac{1}{\Delta x}} \frac{|\lambda|}{2 m} +
\sum_{m>\frac{1}{\Delta x}} \frac{C_0 |\lambda|^3}{m^2 x} \\
&< {\textstyle\frac{1}{2}} |\lambda| \left( \int_{\frac{1}{x}}^{\frac{1}{\Delta x}} \frac{dy}{y} + x\right) +
C_0 |\lambda|^3 x^{-1} \left( \int_{\frac{1}{\Delta x}}^{\infty} \frac{dy}{y^2} + (\Delta x)^2\right) \\
&= {\textstyle\frac{1}{2}} |\lambda| \left( x + \log\left(\frac{1}{\Delta}\right)\right)
+ C_0 |\lambda|^3 x^{-1}\left( \Delta x + (\Delta x)^2\right) \\
&< {\textstyle\frac{1}{2}} |\lambda| \left( 1 + \log\left(\frac{1}{\Delta}\right)\right)
+ 2 C_0 |\lambda|^3 \Delta =
{\textstyle\frac{1}{2}} |\lambda| \left( 2 + \log\left(4 C_0 \lambda^2\right)\right) .
\end{align*}
Since $4 C_0 < 2 < e$, we have $\log\left( 4 C_0 \lambda^2\right) < 1 + 2 \log |\lambda |$, and the desired bound follows.
\end{proof}
\begin{theorem} Let $0<x\leq 1$. If $\phi'(x)$ exists, then it satisfies
\begin{equation*}
\left| \phi'(x)\right| < \frac{\left( 3 + \log |\lambda |\right) \lambda^2}{x^2} .
\end{equation*}
\end{theorem}
\begin{proof}
Suppose that $\phi'(x)$ exists. Then, by Corollary~4.3 and Theorem~4.1, it follows
that $\phi'(x)$ is given by the equation \eqref{DerivativeFormula1}.
By \eqref{DerivativeFormula1} and Lemmas~4.4 and~4.5, it follows that one has
$|\lambda^{-1} x^2 \phi'(x)|
< \frac{3}{2} |\lambda| + \left( \frac{3}{2} + \log |\lambda|\right) |\lambda|
= \left( 3 + \log |\lambda|\right) |\lambda|$,
which is equivalent to the stated bound.
\end{proof}
\begin{lemma}
Let $0<x\leq 1$, and let $C_0$ be the positive constant given by \eqref{DefC0}.
Suppose that $\phi'(x)$ exists, and that $0<\Delta <1$.
Then one has
\begin{equation*}
\lambda^{-1} x \phi'(x) =
\int_{\Delta}^1 y\phi(y) dK(x,y) + E_1 ,
\end{equation*}
for some real number $E_1=E_1(\phi ;x,\Delta)$ that satisfies:
\begin{equation*}
\left| E_1\right| \leq \frac{3 C_0 |\lambda|^3 \Delta}{x} .
\end{equation*}
\end{lemma}
\begin{proof}
Using the definition of $K(x,y)$, given in \eqref{DefK},
we obtain the following reformulation of the above Riemann-Stieltjes integral:
\begin{align}\label{ProofF1}
\int_{\Delta}^1 y \phi(y) dK(x,y)
&= \int_{\Delta}^{1} y\phi(y) d\left( \left\lfloor\frac{1}{xy}\right\rfloor - \frac{1}{xy}\right) \nonumber\\
&= \int_{\Delta}^{1} y\phi(y) d\left\lfloor\frac{1}{xy}\right\rfloor -
\int_{\Delta}^{1} y\phi(y) d\left( \frac{1}{xy}\right) \nonumber\\
&=\sum_{\frac{1}{x} < m\leq \frac{1}{\Delta x}} (-1)\left( \frac{1}{xm}\right) \phi\left( \frac{1}{xm}\right) -
\int_{\Delta}^1 y\phi(y)\left( - \frac{1}{xy^2}\right) dy \nonumber\\
&=\frac{1}{x} \cdot \Biggl(\int_{\Delta}^1 \frac{\phi(y) dy}{y} -
\sum_{\frac{1}{x} < m\leq \frac{1}{\Delta x}} \frac{\phi\left( \frac{1}{xm}\right)}{m} \Biggr) \in{\mathbb R}
\end{align}
(note that Theorem~2.10 justifies all of these steps, since it implies
that the integrands $y\phi(y)$ and $y^{-1}\phi(y)$ are continuous on $[\Delta , 1]$;
the Riemann-Stieltjes integral against $\left\lfloor\frac{1}{xy}\right\rfloor$ reduces to the
indicated sum because $y\mapsto \left\lfloor\frac{1}{xy}\right\rfloor$ is a step function on
$[\Delta , 1]$ that decreases by $1$ at each point $y=\frac{1}{mx}$ with
$\frac{1}{x} < m\leq \frac{1}{\Delta x}$).
Here (as in the proof of Theorem~4.6) we may apply \eqref{DerivativeFormula1}: using that
result, and also \eqref{DefPhi1}, we deduce from \eqref{ProofF1} that one has
\begin{equation*}
\int_{\Delta}^1 y \phi(y) dK(x,y) - \lambda^{-1} x \phi'(x) = E_1 ,
\end{equation*}
where
\begin{align*}
{\mathbb R}\ni E_1=E_1(\phi ; x, \Delta) &:= \frac{1}{x} \cdot \Biggl(\int_{\Delta}^1 \frac{\phi(y) dy}{y} - \Phi_1 +
\sum_{m > \frac{1}{\Delta x}} \frac{\phi\left( \frac{1}{xm}\right)}{m} \Biggr) \\
&= \frac{1}{x} \cdot \Biggl( \sum_{m > \frac{1}{\Delta x}} \frac{\phi\left( \frac{1}{xm}\right)}{m} -
\int_0^{\Delta} \frac{\phi(y) dy}{y} \Biggr) .
\end{align*}
As seen earlier (in the proofs of Lemmas~4.4 and~4.5), we have here both
\begin{equation*}
\Biggl| \int_0^{\Delta} \frac{\phi(y) dy}{y} \Biggr| \leq C_0 |\lambda |^3 \Delta
\quad\text{and}\quad
\Biggl| \sum_{m > \frac{1}{\Delta x}} \frac{\phi\left( \frac{1}{xm}\right)}{m} \Biggr| <
2C_0 |\lambda |^3 \Delta ,
\end{equation*}
and so may deduce the desired upper bound on $|E_1|$.
\end{proof}
\begin{remarks}
The kernel $K(x,y)$ is, by \eqref{DefK}, a function on $[0,1]\times[0,1]$ of
the form $(x,y)\mapsto f(xy)$, where $f(t)$ is a certain real-valued
function on $[0,1]$ that has discontinuities at the points
$\frac{1}{2},\frac{1}{3},\frac{1}{4},\ldots $ , and at the point $t=0$.
If, instead of \eqref{DefK}, we had $K(x,y)=g(xy)$ ($0\leq x,y\leq 1$), where
$g(t)$ was some real-valued function that was continuously differentiable on $[0,1]$,
then it could be argued that \eqref{DefEigenfunction} would imply that
\begin{align*}
\lambda^{-1} \phi'(x)
& = \frac{d}{dx} \int_0^1 g(xy) \phi(y) dy \\
&= \int_0^1 \phi(y) \left( \frac{\partial}{\partial x} \left( g(xy)\right)\right) dy \\
&= \int_0^1 \phi(y) y g'(xy) dy \\
&= \int_0^1 \phi(y) y x^{-1} \left( \frac{\partial}{\partial z} g(xz)\right)\biggr|_{z=y} dy
= \frac{1}{x} \int_0^1 y \phi(y) dg(xy) ,
\end{align*}
and so we could conclude that $\lambda^{-1} x\phi'(x) = \int_0^1 y \phi(y) dK(x,y)$ for $0<x\leq 1$.
As things stand (i.e. with $K$ as defined in \eqref{DefK}), the
above argument lacks validity: yet Lemma~4.7 does get us to within
touching distance of the same conclusion, since it implies that
whenever $\phi'(x)$ exists one has
\begin{equation*}
\lambda^{-1} x\phi'(x) = \lim_{\Delta\rightarrow 0+} \int_{\Delta}^1 y \phi(y) dK(x,y) .
\end{equation*}
\end{remarks}
\begin{lemma}
Let $0<x\leq 1$, and let $C_0$ be the positive constant given by \eqref{DefC0}.
Suppose that $\phi'(x)$ exists, and that $0<\Delta <1$.
Then one has
\begin{equation*}
\lambda^{-1} x \phi'(x) =
\phi(1) K(x,1) - \lambda^{-1} \phi(x) - \int_{\Delta}^1 K(x,y) y \phi'(y) dy + E_2 ,
\end{equation*}
for some real number $E_2=E_2(\phi ;x,\Delta)$ that satisfies:
\begin{equation*}
\left| E_2\right| \leq \frac{4 C_0 |\lambda|^3 \Delta}{x} .
\end{equation*}
\end{lemma}
\begin{proof}
The hypotheses permit the application of Lemma~4.7:
by applying integration by parts \cite[Theorem~7.6]{Apostol 1974} to the
integral that occurs in that lemma, we find that one has
\begin{equation}\label{ProofG1}
\lambda^{-1} x \phi'(x) =
1 \phi(1) K(x,1) - \Delta \phi(\Delta) K(x,\Delta ) - \int_{\Delta}^1 K(x,y) d\left( y \phi(y)\right) + E_1 ,
\end{equation}
where $E_1$ is as stated in Lemma~4.7. For any given $M\in{\mathbb N}$,
one has here
\begin{multline}\label{ProofG2}
\int_{\Delta}^1 K(x,y) d\left( y \phi(y)\right) \\
= \int_{\Delta}^{\frac{1}{M}} K(x,y) d\left( y \phi(y)\right)
+ \sum_{1<m\leq M} \int_{\frac{1}{m}}^{\frac{1}{m-1}} K(x,y) d\left( y \phi(y)\right) .
\end{multline}
We choose to apply this in the case where $M=\lceil 1/\Delta \rceil -1$.
In this case we have $M<1/\Delta \leq M+1$, so that $\frac{1}{M+1} \leq \Delta < \frac{1}{M}$:
note also that, since $M\in{\mathbb Z}$ and $M+1\geq 1/\Delta > 1$, we do indeed have $M\in{\mathbb N}$.
It follows that every range of integration occurring on the right hand side of \eqref{ProofG2}
is a non-empty subinterval of some interval in the sequence
$\left[\frac12 , 1\right] , \left[\frac13 , \frac12\right] , \left[\frac14 , \frac13\right] , \,\ldots\,$~:
we have, in particular, $\bigl[ \frac{1}{M+1} , \frac{1}{M}\bigr] \supseteq \bigl[ \Delta , \frac{1}{M}\bigr] \neq \emptyset$. Thus, by virtue of Corollary~4.2, it can be
deduced from \eqref{ProofG2} that one has
\begin{align*}
\int_{\Delta}^1 K(x,y) d\left( y\phi(y) \right)
&= \int_{\Delta}^{\frac{1}{M}} K(x,y) \left( \phi(y) + y \phi'(y) \right) dy \\
&\phantom{{=}} + \sum_{1<m\leq M} \int_{\frac{1}{m}}^{\frac{1}{m-1}} K(x,y) \left( \phi(y) + y \phi'(y) \right) dy \\
&=\int_{\Delta}^1 K(x,y) \phi(y) dy + \int_{\Delta}^1 K(x,y) y \phi'(y) dy .
\end{align*}
By this, together with \eqref{ProofG1} and \eqref{DefEigenfunction}, we find that the
equality stated in the lemma is satisfied when one has
\begin{equation}\label{ProofG3}
E_2 = - \Delta \phi(\Delta) K(x,\Delta ) + \int_0^{\Delta} K(x,y) \phi(y) dy + E_1 .
\end{equation}
By Lemma~4.7, \eqref{DefK} and Theorem~3.2,
we have here:
\begin{equation*}
|E_1|\leq 3C_0 |\lambda|^3 \Delta x^{-1} ,
\quad
\left| \Delta \phi(\Delta) K(x,\Delta )\right| \leq {\textstyle\frac{1}{2}} \Delta \cdot C_0 |\lambda|^3 \Delta
= {\textstyle\frac{1}{2}} C_0 |\lambda|^3 \Delta^2
\end{equation*}
and
\begin{equation*}
\int_0^{\Delta} \left| K(x,y) \phi(y) \right| dy \leq
{\textstyle\frac{1}{2}} C_0 |\lambda|^3 \int_0^{\Delta} y dy = {\textstyle\frac{1}{4}} C_0 |\lambda|^3\Delta^2 .
\end{equation*}
Since we have also $\Delta < 1\leq x^{-1}$, the desired upper bound for $|E_2|$
follows from the last three bounds above and \eqref{ProofG3}.
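In detail: since $\Delta^2 < \Delta \leq \Delta x^{-1}$, the three bounds just stated combine with \eqref{ProofG3} to give
\begin{equation*}
\left| E_2 \right| \leq
{\textstyle\frac{1}{2}} C_0 |\lambda|^3 \Delta^2 + {\textstyle\frac{1}{4}} C_0 |\lambda|^3 \Delta^2
+ \frac{3 C_0 |\lambda|^3 \Delta}{x}
\leq \left( {\textstyle\frac{3}{4}} + 3 \right) \frac{C_0 |\lambda|^3 \Delta}{x}
\leq \frac{4 C_0 |\lambda|^3 \Delta}{x} .
\end{equation*}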
\end{proof}
\begin{theorem}
For all $x\in(0,1]$ such that $\phi'(x)$ exists, one has:
\begin{equation*}
x \phi'(x) = O\left( |\lambda |^3 \log^2 |\lambda | \right) ,
\end{equation*}
where the implicit constant is absolute.
\end{theorem}
\begin{proof}
Let $0<x\leq 1$. Suppose that $\phi'(x)$ exists.
Then, by Lemma~4.8,
\eqref{DefK}, \eqref{PhiBoundedUniformly}, \eqref{SpectrumBound} and the triangle inequality,
one may deduce that
\begin{equation}\label{ProofH1}
\left| x \phi'(x) \right| \leq
{\textstyle\frac{1}{2}} \lambda^2 +
{\textstyle\frac{1}{2}} |\lambda | \int_{\Delta}^1 \left| y \phi'(y)\right| dy +
O\left( \lambda^4 x^{-1} \Delta\right)
\end{equation}
for $0<\Delta <1$.
By Theorem~4.6 and \eqref{SpectrumBound}, we have here
\begin{equation*}
\int_{\Delta}^1 \left| y \phi'(y)\right| dy \leq
\left( 3 + \log |\lambda|\right) \lambda^2
\int_{\Delta}^1 \frac{dy}{y} =
O\left( \left( \lambda^2 \log |\lambda| \right) \log\left( \frac{1}{\Delta}\right) \right) .
\end{equation*}
Thus we obtain, in particular,
\begin{equation*}
\left| x \phi'(x) \right| \leq
{\textstyle\frac{1}{2}} \lambda^2 +
O\left( \left( |\lambda |^3 \log |\lambda| \right) \log\left( \frac{1}{\Delta}\right) \right) +
O\left( \lambda^4 x^{-1} \Delta\right)
\end{equation*}
when $\Delta = |\lambda |^{-1} x$ (for, by \eqref{SpectrumBound},
one does have $0<\Delta < 1$ in this case).
This gives us:
\begin{align}\label{ProofH2}
\left| x \phi'(x) \right| &= O\left( \lambda^2 +
\left( |\lambda |^3 \log |\lambda| \right) \log\left( |\lambda| / x\right) + |\lambda |^3 \right) \nonumber\\
&=O\left( \left( \log\left( \frac{1}{x}\right) + \log |\lambda| \right)
|\lambda |^3 \log |\lambda | \right) .
\end{align}
\par
We now repeat, with one change, the steps that led to \eqref{ProofH2}.
The change that we make is to apply \eqref{ProofH2}, instead of
Theorem~4.6, to that part of the integral $\int_{\Delta}^1 \left| y \phi'(y)\right| dy$

where $y<|\lambda|^{-1}$: note that we still put $\Delta = |\lambda |^{-1} x$, and
so (given that $|\lambda|\geq 2$ and $\log(1/x)\geq 0$) will have
$1 > |\lambda|^{-1} \geq \Delta$. We find that one has
\begin{align*}
\int_{\Delta}^1 \left| y \phi'(y)\right| dy
&\leq
O\left( \lambda^2 \log |\lambda|\right) \cdot
\int_{\frac{1}{|\lambda|}}^1 \frac{dy}{y} +
O\left( |\lambda |^3 \log |\lambda|\right)\cdot \int_{\Delta}^{\frac{1}{|\lambda|}}
\log\left( \frac{1}{y}\right) dy \\
&\leq O\left( |\lambda|^3 \log |\lambda|\right) \cdot
\left( \frac{\log |\lambda |}{|\lambda|} +
\int_0^{\frac{1}{|\lambda|}} \log\left( \frac{1}{y}\right) dy \right) \\
&= O\left( |\lambda|^3 \log |\lambda|\right) \cdot
\left( \frac{1 + 2\log |\lambda |}{|\lambda|} \right) = O\left( \lambda^2 \log^2 |\lambda|\right) .
\end{align*}
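In the final step we have used the elementary evaluation
\begin{equation*}
\int_0^{a} \log\left( \frac{1}{y}\right) dy
= \Bigl[ y \log\left( \frac{1}{y}\right) + y \Bigr]_0^{a}
= a \left( 1 + \log\frac{1}{a} \right) ,
\end{equation*}
applied with $a = |\lambda|^{-1}$.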
By means of this last estimate and the case $\Delta = |\lambda |^{-1} x$ of \eqref{ProofH1},
one obtains the desired bound for $|x\phi'(x)|$.
\end{proof}
\begin{corollary}
Let $j\in{\mathbb N}$ and $0<y\leq x\leq 1$. Then one has both
\begin{equation}\label{LogarithmicBoundOnIncrements}
\left| \phi_j (x) - \phi_j (y)\right| =
O\left( \left|\lambda_j\right|^3 \left( \log \left| \lambda_j\right|\right)^2 \cdot
\log\left(\frac{x}{y}\right) \right)
\end{equation}
and
\begin{equation}\label{ImprovesCor2.14}
\left| \phi_j (x) - \phi_j (y)\right| \leq
\left( 3 + \log\left| \lambda_j\right|\right) \lambda_j^2 \cdot\left(\frac{1}{y} - \frac{1}{x}\right) .
\end{equation}
\end{corollary}
\begin{proof}
By \eqref{IndefiniteIntegral} (applied twice), we have:
\begin{equation}\label{ProofR1}
\left| \phi_j (x) - \phi_j (y)\right| = \left| \int_y^x \phi_j' (z) dz\right|
\leq \int_y^x \left| \phi_j' (z)\right| dz .
\end{equation}
The results \eqref{LogarithmicBoundOnIncrements} and \eqref{ImprovesCor2.14}
follow by combining \eqref{ProofR1} with Theorems~4.9 and~4.6, respectively.
\end{proof}
\begin{theorem}
The function $x\mapsto x \phi'(x)$ (defined almost everywhere in $[0,1]$)
is both measurable and square integrable on $[0,1]$. One has
\begin{equation*}
\int_0^1 \left( x \phi'(x)\right)^2 dx = O\left( |\lambda |^5 \log^3 |\lambda| \right) .
\end{equation*}
\end{theorem}
\begin{proof}
By Corollary~4.2 (or Corollary~4.3), the set of points of the interval $[0,1]$
at which $\phi'(x)$ is not defined is a set that is countable, and so
has Lebesgue measure $0$. Note that Corollary~4.2 implies also
that $\phi'(x)$ is finite at all those points of the interval $(0,1]$ where it exists.
We may therefore conclude that the functions $\phi'(x)$ and $x\mapsto x \phi'(x)$
are each defined almost everywhere in $[0,1]$, and that the latter is finite (and so real-valued) at
all points where it is defined.
\par
For $x\in [0,1]$ and $n\in{\mathbb N}$, put
$f_n(x) :=
(n+1)\phi\left( \frac{nx+1}{n+1}\right) - (n+1)\phi\left( \frac{nx}{n+1}\right)$.
By Theorem~2.10, the functions $f_1(x),f_2(x),f_3(x),\ldots $ are continuous on
$[0,1]$, and are therefore measurable on $[0,1]$.
Since $\phi'(x)$ is defined almost everywhere in $[0,1]$, it can be deduced
(from the usual definition of $\phi'(x)$ as a limit)
that we have $\lim_{n\rightarrow\infty} f_n (x) = \phi'(x)$ almost everywhere in $[0,1]$.
We may conclude from this that,
since all terms of the sequence $f_1(x),f_2(x),f_3(x),\ldots $ are measurable on $[0,1]$,
so too is $\phi'(x)$: see, for example, \cite[Theorem~4.12]{Wheeden and Zygmund 2015} regarding this point.
\par
Since $\phi'(x)$ is measurable on $[0,1]$, so is its product with any other such
function: the functions $x\mapsto x\phi'(x)$ and $x\mapsto \left( x\phi'(x)\right)^2$,
in particular, are measurable on $[0,1]$. By Theorem~4.9, there exists a real number $b$ (say)
such that one has $0\leq \left( x\phi'(x)\right)^2\leq b$ almost everywhere in $[0,1]$.
It follows that we have $\int_0^1 \left( x\phi'(x)\right)^2 dx \leq \int_0^1 b dx = b < \infty$.
The measurable function $x\mapsto x\phi'(x)$ is, therefore, square integrable on $[0,1]$.
By the bounds of Theorems~4.6 and~4.9, we have also
\begin{align*}
\int_0^1 \left( x\phi'(x)\right)^2 dx
&=O\left( \lambda^6 \log^4 |\lambda|\right) \cdot\int_0^{\frac{1}{|\lambda | \log |\lambda |}} dx
+ O\left( \lambda^4 \log^2 |\lambda|\right) \cdot\int_{\frac{1}{|\lambda | \log |\lambda |}}^1 \frac{dx}{x^2} \\
&= O\left( \lambda^6 \log^4 |\lambda|\right) \cdot \left( |\lambda | \log |\lambda | \right)^{-1}
+ O\left( \lambda^4 \log^2 |\lambda|\right) \cdot |\lambda | \log |\lambda | ,
\end{align*}
and so we obtain the last part of the theorem.
\end{proof}
\begin{definitions}
We put now:
\begin{equation}\label{DefQ(x)}
Q(x) := x \phi'(x) \quad \text{($0\leq x\leq 1$ and $\phi'(x)$ is defined and finite)}
\end{equation}
and
\begin{equation}\label{DefP(x)}
P(x) := -\lambda \int_0^1 K(x,y) Q(y) dy \quad \text{($0\leq x\leq 1$).}
\end{equation}
\end{definitions}
Note that it follows immediately from \eqref{DefQ(x)} and Theorem~4.11 that
the function $Q(x)$ is both measurable and square integrable on $[0,1]$.
The same is true, when $x\in [0,1]$ is given, of the
function $y\mapsto K(x,y)$: see \eqref{HalfHilbertSchmidt}.
It therefore follows by the Cauchy-Schwarz inequality, combined with \eqref{HSnorm} and
theorems of Tonelli and Fubini (for which see \cite[Theorems~6.1 and~6.10]{Wheeden and Zygmund 2015}),
that the function $P(x)$, given by \eqref{DefP(x)}, is an element
of $L^2 \bigl( [0,1]\bigr)$.
With this, we are able to justify our next lemma, and the further definitions that follow it.
\begin{theorem} Let $0<x\leq 1$. Suppose that $\phi'(x)$ exists.
Then one has
\begin{equation*}
x \phi'(x) + \phi(x) - \lambda\phi(1) K(x,1) = P(x) .
\end{equation*}
\end{theorem}
\begin{proof}
By Lemma~4.8 and Definitions~4.12, we find that
\begin{multline*}
x \phi'(x) + \phi(x) - \lambda\phi(1) K(x,1) \\
= \lambda \int_0^{\Delta} K(x,y) y \phi'(y) dy
+ P(x) + O\left( \lambda^4 x^{-1} \Delta\right)
\end{multline*}
for all $\Delta\in (0,1)$. The theorem will therefore follow if it can be
shown that ${\cal E}(\Delta) := \int_0^{\Delta} K(x,y) y \phi'(y) dy$
satisfies ${\cal E}(\Delta)\rightarrow 0$,
in the limit as $\Delta\rightarrow 0+$. To this end, we observe that
it is a consequence of Theorems~4.9 and~4.11, and the definition \eqref{DefK}, that
one has
\begin{equation*}
\int_0^{\Delta} | K(x,y) y \phi'(y)| dy =
O\left( |\lambda |^3 \log^2 |\lambda| \right) \cdot \int_0^{\Delta} dy
=O\left( \Delta |\lambda |^3 \log^2 |\lambda| \right) \end{equation*}
for $0<\Delta < 1$. We therefore have
$\left| {\cal E}(\Delta)\right| \leq O\left( \Delta |\lambda |^3 \log^2 |\lambda| \right)$,
for $0<\Delta < 1$, and so may deduce that $\lim_{\Delta\rightarrow 0+} {\cal E}(\Delta) = 0$.
The theorem follows.
\end{proof}
\begin{definitions}
For $h\in{\mathbb N}$, we put:
\begin{equation}\label{Def-a_h}
a_h := \left\langle P , \phi_h\right\rangle = \int_0^1 P(x) \phi_h (x) dx
\end{equation}
and
\begin{equation}\label{Def-b_h}
b_h := \int_0^1 Q(x) \phi_h (x) dx .
\end{equation}
\end{definitions}
\par
Since we have already found that both
$P(x)$ and $Q(x)$ are measurable and square integrable on $[0,1]$,
and since the same is true of all the eigenfunctions, $\phi_1(x),\phi_2(x),\phi_3(x),\ldots $,
it therefore follows by the Cauchy-Schwarz inequality that, for each $h\in{\mathbb N}$,
both of the functions $x\mapsto P(x)\phi_h(x)$ and $x\mapsto Q(x)\phi_h(x)$
are integrable on $[0,1]$. Thus the integrals occurring in \eqref{Def-a_h} and \eqref{Def-b_h} exist,
and have finite values: so we have $a_h,b_h\in{\mathbb R}$ for all $h\in{\mathbb N}$.
\begin{lemma}
For $h\in{\mathbb N}$, one has
\begin{equation*}
b_h = -\left( \frac{\lambda_h}{\lambda}\right) a_h .
\end{equation*}
\end{lemma}
\begin{proof}
Let $h\in{\mathbb N}$. As noted in \cite[Sections~3.9--3.10]{Tricomi 1957}, it follows
by \eqref{DefEigenfunction}, \eqref{DefK}, \eqref{DefP(x)} and Fubini's theorem for double integrals,
that one has
\begin{align*}
\int_0^1 Q(x) \phi_h (x) dx &=
\int_0^1 Q(x) \lambda_h \biggl(\int_0^1 K(x,y) \phi_h (y) dy\biggr) dx \\
&= \lambda_h \int_0^1 \biggl(\int_0^1 K(x,y) Q(x) dx\biggr) \phi_h (y) dy \\
&= \lambda_h \int_0^1 \biggl(\int_0^1 K(y,x) Q(x) dx\biggr) \phi_h (y)dy \\ &=
\lambda_h \left\langle (-\lambda)^{-1} P , \phi_h \right\rangle
= \lambda_h (-\lambda)^{-1} \left\langle P , \phi_h \right\rangle .
\end{align*}
By this and Definitions~4.14, one has $b_h = -\lambda^{-1}\lambda_h a_h$.
\end{proof}
\begin{theorem}[Hilbert-Schmidt]
The series
\begin{equation}\label{HSseries}
a_1 \phi_1(x) + a_2 \phi_2(x) + a_3 \phi_3(x) + \ldots
\end{equation}
converges both absolutely and uniformly on $[0,1]$. For $0\leq x\leq 1$, one has
\begin{equation}\label{HSthm}
\sum_{h=1}^{\infty} a_h \phi_h (x) = P(x) .
\end{equation}
\end{theorem}
\begin{proof}
This theorem is, in essence, just one specific case of the `Hilbert-Schmidt theorem' that is
proved in \cite[Section~3.10]{Tricomi 1957}: note, in particular, that it follows by
virtue of Definitions~4.12 and Theorem~4.11 that the Hilbert-Schmidt theorem
is applicable to $P(x)$. However, the Hilbert-Schmidt theorem does not quite
show that \eqref{HSthm} holds for all $0\leq x\leq 1$: it shows only that this equality
holds almost everywhere in the interval $[0,1]$ (regarding this, see the Remarks
following this proof). For this reason, we give more details regarding the proof of our theorem.
\par
We note, firstly, that the absolute and uniform convergence of the series \eqref{HSseries}
can be established by means of the steps in \cite[Page~112, Paragraph~1]{Tricomi 1957}:
one may, in particular, put $N=\frac{1}{2}$ there, by virtue of the case $p=2$ of
\eqref{HalfHilbertSchmidt}. Therefore, in order to complete this proof, we need only
show that one has $\lim_{H\rightarrow\infty} \bigl( P(x) - \sum_{h=1}^H a_h \phi_h (x)\bigr) = 0$
for $0\leq x\leq 1$.
Accordingly, we suppose now that $x\in [0,1]$.
By \eqref{DefEigenfunction}, \eqref{DefP(x)}-\eqref{Def-b_h} and Lemma~4.15,
we find (similarly to \cite[Page~111, Paragraph~2]{Tricomi 1957}) that one has
\begin{equation*}
P(x) - \sum_{h=1}^H a_h \phi_h (x)
= -\lambda\int_0^1 \biggl( K(x,y) - \sum_{h=1}^H \frac{\phi_h (x) \phi_h (y)}{\lambda_h}\biggr) Q(y) dy ,
\end{equation*}
for all $H=1,2,3,\ldots\ $. It therefore follows, by the Cauchy-Schwarz inequality, that,
for all $H=1,2,3,\ldots\ $, one has:
\begin{multline*}
\biggl( P(x) - \sum_{h=1}^H a_h \phi_h (x) \biggr)^2 \\
\leq \lambda^2
\left( \int_0^1 \biggl( K(x,y) - \sum_{h=1}^H \frac{\phi_h (x) \phi_h (y)}{\lambda_h}\biggr)^2 dy \right)
\left( \int_0^1 Q^2 (y) dy\right) .
\end{multline*}
Since we have here $\int_0^1 Q^2 (y) dy < \infty$ (by Theorem~4.11), we may therefore
deduce from the result \eqref{K(x,y)_in_mean} of Corollary~2.12 that one
does indeed have $\lim_{H\rightarrow\infty} \bigl( P(x) - \sum_{h=1}^H a_h \phi_h (x)\bigr) = 0$,
as required.
\end{proof}
\begin{remarks}
Since the entire latter part of the above proof is very similar indeed to
the reasoning that can be found in \cite[Page~111, Paragraph~2]{Tricomi 1957},
we should point out that, where we have appealed to our result \eqref{K(x,y)_in_mean},
Tricomi relies instead upon the result
\begin{equation} \label{l.i.m.}
\lim_{H\rightarrow\infty} \int_0^1\int_0^1 \left( K(x,y) -
\sum_{h=1}^H \frac{\phi_h (x) \phi_h(y)}{\lambda_h} \right)^2 dx dy = 0 ,
\end{equation}
stated (in other notation) in \cite[Section~3.9, Equation~(3)]{Tricomi 1957}.
By itself, this latter result implies only that, almost everywhere in $[0,1]$,
one has $\lim_{H\rightarrow\infty} \int_0^1 \bigl( K(x,y) -
\sum_{h=1}^H \lambda_h^{-1} \phi_h (x) \phi_h(y) \bigr)^2 dy = 0$:
whereas we know, by \eqref{K(x,y)_in_mean},
that this equality holds for all $x\in [0,1]$. This (we hope) explains
our earlier assertion to the effect that
the Hilbert-Schmidt theorem proved in \cite[Section~3.10]{Tricomi 1957}
does not (by itself) show that the equality \eqref{HSthm}
holds for all $x\in [0,1]$.
\end{remarks}
\begin{corollary}
The function $P(x)$ is continuous on $[0,1]$. In particular, one has
$P(x)\rightarrow P(0)=0$, in the limit as $x\rightarrow 0+$.
\end{corollary}
\begin{proof}
Each term of the series \eqref{HSseries} is
(by Theorem~2.10) a continuous function on $[0,1]$. Therefore it follows, given
the fact of the uniform convergence of this series (noted in Theorem~4.16), that this series converges
(pointwise) to a sum that is a continuous function on $[0,1]$. By Theorem~4.16 (again),
the sum in question is identically equal to $P(x)$, and so $P(x)$ is continuous on $[0,1]$.
In order to complete the proof, we observe that, by the definitions \eqref{DefK} and
\eqref{DefP(x)}, one has $P(0) = -\lambda \int_0^1 K(0,y)Q(y) dy = -\lambda \int_0^1 0 dy = 0$.
\end{proof}
\begin{remarks}
In view of the above corollary, Theorem~2.10 and the definition of $K(x,y)$ in \eqref{DefK},
one can observe that the discontinuities of $\phi' (x)$
(for which see Theorem~4.1 and Corollaries~4.2 and~4.3)
are fully accounted for by the presence, in the result of Theorem~4.13, of the term
$\lambda \phi (1) K(x,1)$.
\end{remarks}
\begin{corollary}
The series $a_1 \phi_1(x) + a_2 \phi_2(x) + a_3 \phi_3(x) + \ldots $
is `convergent in the mean' to the function $P(x)$, in that one has
\begin{equation}\label{P(x)_in_mean}
\lim_{H\rightarrow\infty}\int_0^1 \biggl( P(x) - \sum_{h=1}^H a_h \phi_h(x)\biggr)^2 dx = 0 .
\end{equation}
The series $b_1 \phi_1(x) + b_2 \phi_2(x) + b_3 \phi_3(x) + \ldots $ converges
in the mean to $Q(x)$: one has
\begin{equation}\label{Q(x)_in_mean}
\lim_{H\rightarrow\infty}\int_0^1 \biggl( Q(x) - \sum_{h=1}^H b_h \phi_h(x)\biggr)^2 dx = 0 .
\end{equation}
One has, moreover,
\begin{equation}\label{a_h,b_h-formulae}
-\left( \frac{\lambda_h}{\lambda}\right) \left( 1 + \frac{\lambda_h}{\lambda}\right) a_h
= \left( 1 + \frac{\lambda_h}{\lambda}\right) b_h
= \phi(1) \phi_h (1) - \left\langle \phi , \phi_h\right\rangle
\quad \text{($h\in{\mathbb N}$).}
\end{equation}
\end{corollary}
\begin{proof}
For $H\in{\mathbb N}$, one has
$0 \leq \int_0^1 \bigl( P(x) - \sum_{h=1}^H a_h \phi_h(x)\bigr)^2 dx
\leq \bigl( \sup_{0\leq x\leq 1} \bigl| P(x) - \sum_{h=1}^H a_h \phi_h(x)\bigr| \bigr)^2$.
Since we know also (by Theorem~4.16) that
$\sup_{0\leq x\leq 1} \bigl| P(x) - \sum_{h=1}^H a_h \phi_h(x)\bigr|\rightarrow 0$, as $H\rightarrow\infty$,
we therefore can deduce that \eqref{P(x)_in_mean} holds.
\par
We now put, for each $h\in{\mathbb N}$,
\begin{equation*}
c_h := a_h + \frac{\lambda \phi(1) \phi_h (1)}{\lambda_h} - \left\langle \phi , \phi_h\right\rangle .
\end{equation*}
Since it is assumed that $\phi(x)$ is the eigenfunction $\phi_j(x)$,
we have here that $\left\langle \phi , \phi_h\right\rangle$ equals $1$ if $h=j$, and
is otherwise equal to $0$. Let $H\geq j$ be a positive integer.
Then, by \eqref{DefQ(x)} and Theorem~4.13, one has
\begin{align*}
Q(x) - \sum_{h=1}^H c_h \phi_h (x)
&= P(x) - \sum_{h=1}^H a_h \phi_h (x) \\
&\phantom{{=}} + \lambda \phi(1) \biggl( K(x,1) - \sum_{h=1}^H \frac{\phi_h(x) \phi_h(1)}{\lambda_h} \biggr) ,
\end{align*}
for all $x\in (0,1]$ such that $\phi'(x)$ exists.
Given this, together with Theorem~4.11 and \eqref{DefQ(x)}, we find
(via an application of the Cauchy-Schwarz inequality)
that one has
\begin{multline*}
\int_0^1 \biggl( Q(x) - \sum_{h=1}^H c_h \phi_h (x) \biggr)^2 dx \\
\leq \left( 1 + \lambda^2 \phi^2 (1)\right)
\Biggl( \int_0^1 \biggl( P(x) - \sum_{h=1}^H a_h \phi_h (x)\biggr)^2 dx \\
+ \int_0^1 \biggl( K(x,1) - \sum_{h=1}^H \frac{\phi_h(x) \phi_h(1)}{\lambda_h} \biggr)^2 dx
\Biggr) .
\end{multline*}
Therefore it follows, by \eqref{P(x)_in_mean}, \eqref{K(x,y)_in_mean} and the symmetry of
the kernel $K$, that one has
\begin{equation}\label{ProofI1}
\int_0^1 \biggl( Q(x) - \sum_{h=1}^H c_h \phi_h (x) \biggr)^2 dx \rightarrow 0 ,
\quad \text{as $H\rightarrow\infty$.}
\end{equation}
It is, at the same time, a consequence of Theorem~4.11, \eqref{DefQ(x)}, \eqref{Def-b_h}
and the orthonormality of $\phi_1(x),\phi_2(x),\phi_3(x),\ldots $ ,
that one has
\begin{multline*}
\int_0^1 \biggl( Q(x) - \sum_{h=1}^H c_h \phi_h (x) \biggr)^2 dx \\
\begin{aligned}
&= \int_0^1 \biggl( Q(x) - \sum_{h=1}^H b_h \phi_h (x) \biggr)^2 dx
+ \sum_{h=1}^H \left( c_h - b_h\right)^2 \\
&\geq \sum_{h=1}^H \left( c_h - b_h\right)^2 \geq \left( c_k - b_k\right)^2
\quad \text{when $1\leq k\leq H$}
\end{aligned}
\end{multline*}
(see \cite[Section~3.2]{Tricomi 1957} regarding this).
By this and \eqref{ProofI1}, it is necessarily the case that one has
$c_k = b_k$ for all $k\in{\mathbb N}$. We can therefore deduce from \eqref{ProofI1}
that \eqref{Q(x)_in_mean} holds.
\par
Let $h\in{\mathbb N}$. Recalling the definition of $c_h$, and also Lemma~4.15,
we have now that
$-\lambda^{-1}\lambda_h a_h = b_h = c_h
= a_h + \lambda \lambda_h^{-1} \phi(1) \phi_h (1) - \left\langle \phi , \phi_h\right\rangle$.
The equations in \eqref{a_h,b_h-formulae} follow from this
(given that $\left\langle \phi , \phi_h\right\rangle = 0$ whenever $\lambda_h \neq \lambda$).
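In more detail: the last chain of equalities gives
\begin{equation*}
\left( 1 + \frac{\lambda_h}{\lambda}\right) a_h
= \left\langle \phi , \phi_h\right\rangle - \frac{\lambda \phi(1) \phi_h (1)}{\lambda_h} ,
\end{equation*}
and multiplication of both sides by $-\lambda_h / \lambda$ then yields
\begin{equation*}
-\left( \frac{\lambda_h}{\lambda}\right) \left( 1 + \frac{\lambda_h}{\lambda}\right) a_h
= \phi(1) \phi_h (1) - \left( \frac{\lambda_h}{\lambda}\right) \left\langle \phi , \phi_h\right\rangle
= \phi(1) \phi_h (1) - \left\langle \phi , \phi_h\right\rangle ,
\end{equation*}
since in the one case where $\left\langle \phi , \phi_h\right\rangle \neq 0$ we have $\lambda_h = \lambda$.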
\end{proof}
\medskip
The next corollary includes a result involving the real constant $K_2 (1,1)$.
By \eqref{K2xxactly} and \eqref{DefK}, this constant is the number
$\log (2\pi) - \frac74 = 0{\cdot}087877\ldots\ $.
\medskip
\begin{corollary}[Parseval identities]
One has
\begin{equation}\label{ParsevalForPandQ}
\sum_{h=1}^{\infty} a_h^2 = \| P \|^2 \in{\mathbb R} ,
\quad
\sum_{h=1}^{\infty} b_h^2 = \int_0^1 Q^2(x) dx \in{\mathbb R}
\end{equation}
and
\begin{equation}\label{ParsevalWithTapering}
\sum_{h=1}^{\infty} \left( 1 + \frac{\lambda_h}{\lambda} \right)^2 a_h^2
= K_2(1,1) \lambda^2 \phi^2(1) - 2 \phi^2 (1) + 1 .
\end{equation}
\end{corollary}
\begin{proof}
We have seen that $P(x)$ is measurable and square integrable on $[0,1]$.
Therefore, given \eqref{Def-a_h} and the orthonormality of $\phi_1(x),\phi_2(x),\phi_3(x),\ldots $ ,
it follows (see \cite[Section~3.2]{Tricomi 1957}) that a
necessary and sufficient condition for \eqref{P(x)_in_mean} to hold
is that one has $\sum_{h=1}^{\infty} a_h^2 = \int_0^1 P^2 (x) dx$.
Thus, since we showed already (in Corollary~4.18) that \eqref{P(x)_in_mean} does hold,
and since $\int_0^1 P^2 (x) dx = \| P \|^2\in{\mathbb R}$ (by virtue of
$P(x)$ being square integrable on $[0,1]$), we
must have $\sum_{h=1}^{\infty} a_h^2 = \int_0^1 P^2 (x) dx = \| P \|^2\in{\mathbb R}$.
This proves the first part of \eqref{ParsevalForPandQ}: given
\eqref{Q(x)_in_mean}, \eqref{Def-b_h}, \eqref{DefQ(x)} and Theorem~4.11,
one can give a similar proof of the other part.
\par
By \eqref{P(x)_in_mean} and \eqref{Q(x)_in_mean},
the series $\left( b_1 - a_1\right)\phi_1(x) +
\left( b_2 - a_2\right)\phi_2(x) +
\left( b_3 - a_3\right) \phi_3(x) + \ldots $
is convergent in the mean to the function $x\mapsto Q(x) - P(x)$,
which (by \eqref{DefQ(x)}, Theorem~4.13 and Corollary~4.3) is identical almost
everywhere in $[0,1]$ to the function $x\mapsto \lambda \phi(1) K(x,1) - \phi(x)$.
Since we have also $b_h - a_h
= - \left( 1 + \lambda^{-1} \lambda_h\right) a_h$ for all $h\in{\mathbb N}$
(by virtue of Lemma~4.15), it therefore follows (similarly to how we were able to deduce
the equalities in \eqref{ParsevalForPandQ}) that one must have
both
\begin{equation*}
\int_0^1 \left( \lambda\phi(1) K(x,1) - \phi(x)\right) \phi_h (x) dx
= - \left( 1 + \frac{\lambda_h}{\lambda} \right) a_h
\quad\text{($h\in{\Bbb N}$)}
\end{equation*}
and the corresponding Parseval identity:
\begin{equation*}
\sum_{h=1}^{\infty} \left( 1 + \frac{\lambda_h}{\lambda} \right)^2 a_h^2
= \int_0^1 \left( \lambda\phi(1) K(x,1) - \phi(x)\right)^2 dx .
\end{equation*}
Since $K$ is a symmetric kernel, the last integral above may be evaluated by
expansion of the integrand, followed by term by term integration and the
application of \eqref{K2symmetrically}, \eqref{DefEigenfunction}
and the orthonormality of $\phi_1(x),\phi_2(x),\phi_3(x),\ldots $ :
we thereby obtain the result stated in \eqref{ParsevalWithTapering}.
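Carrying out that computation: \eqref{DefEigenfunction} and the symmetry of $K$ give
$\int_0^1 K(x,1) \phi(x) dx = \lambda^{-1} \phi(1)$, while \eqref{K2symmetrically} gives
$\int_0^1 K^2(x,1) dx = K_2(1,1)$, so that
\begin{align*}
\int_0^1 \left( \lambda\phi(1) K(x,1) - \phi(x)\right)^2 dx
&= \lambda^2 \phi^2(1) K_2(1,1) - 2\lambda\phi(1)\cdot\lambda^{-1}\phi(1) + \int_0^1 \phi^2(x) dx \\
&= K_2(1,1) \lambda^2 \phi^2(1) - 2 \phi^2 (1) + 1
\end{align*}
(the last step using the orthonormality condition $\int_0^1 \phi^2(x) dx = 1$).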
\end{proof}
\section{Asymptotics as $x\rightarrow 0+$}
Throughout this section we assume, as in the preceding section,
that $\phi$ is one of the eigenfunctions in the sequence $\phi_1,\phi_2,\phi_3,\ldots\ $,
and that $\lambda$ is the corresponding eigenvalue of the kernel $K$.
\begin{lemma}
Let $x\in (0,1]$ be such that $\phi'(x)$ exists. Then one has
\begin{equation*}
x\phi'(x) =
-\lambda \phi (1) \widetilde B_1 \left( \frac{1}{x}\right)
-\lambda^2 \phi(1) K_2 (x , 1)
+\lambda^2 \int_0^1 K_2 (x , z) z\phi'(z) dz\;.
\end{equation*}
\end{lemma}
\begin{proof} By Theorem~4.13 and Definitions~4.12, followed by
\eqref{DefEigenfunction} and Definitions~2.1, we have:
\begin{multline*}
x\phi'(x) = -\phi(x) + \lambda \phi(1) K(x , 1) -\lambda\int_0^1 K(x,y) y\phi'(y) dy \\
\begin{aligned}
&= -\phi(x) + \lambda \phi(1) K(x , 1) \\
&\phantom{{=}}\ \, - \lambda\int_0^1 K(x,y)
\left( -\phi(y) + \lambda \phi(1) K(y , 1) -\lambda\int_0^1 K(y , z) z\phi'(z) dz \right) dy \\
&= -\phi(x) + \lambda \phi(1) K(x , 1) \\
&\phantom{{=}}\ \, + \phi (x) - \lambda^2 \phi(1) K_2 (x , 1)
+ \lambda^2 \int_0^1 K(x,y) \left( \int_0^1 K(y , z) z\phi'(z) dz \right) dy\;.
\end{aligned}
\end{multline*}
The lemma therefore follows by observing that $K(x , 1) = - \widetilde B_1 (1/x)$, by \eqref{KtoTildeB1},
that $-\phi(x) +\phi(x) =0$,
and that, by virtue of Theorem~4.11, an application of Fubini's Theorem
\cite[Theorem~6.1]{Wheeden and Zygmund 2015} gives:
\begin{multline*}
\int_0^1 K(x,y) \left( \int_0^1 K(y , z) z\phi'(z) dz \right) dy \\
= \int_0^1 \left( \int_0^1 K(x,y) K(y , z) dy \right) z\phi'(z) dz
= \int_0^1 K_2 (x , z) z\phi'(z) dz
\end{multline*}
(the last equality following from Definitions~2.1).
\end{proof}
\begin{definitions}
We define
\begin{equation}\label{DefI_0}
I_0 (x , y) = \int_0^x K_2 (z , y) dz
\qquad\text{($0\leq x,y\leq 1$)} .
\end{equation}
For $0<x,y\leq 1$ and $0\leq w\leq 1$, we put:
\begin{equation}\label{DefI}
I(x , y ; w) = \int_x^y \frac{K_2 (z , w) dz}{z^2}\;.
\end{equation}
\end{definitions}
\begin{lemma}
One has
\begin{equation*}
I_0 (x , y) \ll \left( 1 + \log\frac{1}{y}\right) y
\qquad\text{($0<x,y\leq 1$)} .
\end{equation*}
\end{lemma}
\begin{proof}
Let $x,y\in(0,1]$. By \eqref{DefI_0} and \eqref{K2BoundOffDiagonal}, we find that
\begin{align*}
\left| I_0 (x , y)\right|\leq \int_0^1 \left| K_2 (z , y)\right| dz
&=\int_0^y O\left( \frac{z}{y}\right) dz + \int_y^1 O\left( \frac{y}{z}\right) dz \\
&= O\left( y\right) + O\left( y\log\frac{1}{y}\right) \;,
\end{align*}
as required.
\end{proof}
\begin{theorem}
Let $x\in (0,1]$ be such that $\phi' (x)$ exists. Then one has
\begin{equation*}
x \phi' (x) = - \lambda \phi(1) \widetilde B_1 \left( \frac{1}{x}\right)
+ O\left( |\lambda|^5 \left( \log |\lambda|\right)^2 \left( 1 + \log\frac{1}{x}\right) x\right) \;.
\end{equation*}
\end{theorem}
\begin{proof}
In view of the bounds \eqref{PhiBoundedUniformly} and \eqref{SpectrumBound},
the theorem will follow from Lemma~5.1, once it is shown that one has both
$K_2 (x , 1)\ll x$ and
\begin{equation*}
\int_0^1 K_2 (x , z) z\phi'(z) dz
\ll |\lambda|^3 \left( \log |\lambda|\right)^2 \left( 1 + \log\frac{1}{x}\right) x\;.
\end{equation*}
The first of these two estimates is contained in \eqref{K2BoundOffDiagonal}.
The other follows by noting that one has
$\int_0^1 K_2 (x , z) z\phi'(z) dz
\ll |\lambda|^3 \left( \log |\lambda|\right)^2 \int_0^1 \left| K_2 (z , x)\right| dz$
(by Theorems~4.9 and 4.11, and the symmetry of $K_2$) and recalling that,
in our proof of Lemma~5.3, we found that
$\int_0^1 \left| K_2 (z , x)\right| dz =O\left( x + x\log(1/x)\right)$.
\end{proof}
\begin{remarks}
Since $\int_0^1 \left| 1 + \log (1/x)\right| dx = \int_0^1 \left( 1 + \log (1/x)\right) dx
=2 < \infty$, while
$\int_0^1 \bigl| \widetilde B_1 (1/x)\bigr| x^{-1} dx = \int_1^{\infty} \bigl| \widetilde B_1 (t)\bigr| t^{-1} dt
\geq\sum_{n=1}^{\infty} \int_n^{n+1/3} \frac{1}{6} t^{-1} dt =\infty$
(since $\bigl| \widetilde B_1 (t)\bigr| = \bigl| \{ t\} - \frac12 \bigr| \geq \frac16$ when $n\leq t\leq n+\frac13$),
it is therefore a corollary of Theorems~4.11 and 5.4 that $\phi'(x)$ is Lebesgue integrable
on $[0,1]$ if and only if $\phi(1)=0$. Thus if $\phi'(x)$ is Lebesgue integrable on $[0,1]$
then, by \eqref{phi(1)asIntegral}, one has $\int_0^1 \phi' (x) dx = \phi(1) = 0$.
\end{remarks}
\begin{lemma}
For $0<x,y\leq 1$, one has:
\begin{equation*}
I_0 (x , y) = - {\textstyle\frac{1}{6}} \left( \frac{x}{y}\right)^{\!\!3 }
\sum_{m>\frac{1}{y}} \frac{\widetilde B_3 \left( \frac{my}{x}\right)}{m^3}
+O\left( \frac{(x+y)x^3}{y}\right)
\ll \frac{x^3}{y}\;,
\end{equation*}
where $\widetilde B_3 (t) := \{ t\}^3 - {\textstyle\frac{3}{2}} \{ t\}^2 + {\textstyle\frac{1}{2}} \{ t\}\,$
(the third periodic Bernoulli function).
\end{lemma}
\begin{proof}
Let $0<x,y\leq 1$. Define functions $f_1,\ldots ,f_4$ on $[0,1]$ by putting
$f_j (0) = 0$ ($j=1,\ldots ,4$) and
\begin{equation*}
f_1(z) = {\textstyle\frac{1}{2}} \widetilde B_1 \left( \frac{1}{y}\right) z \widetilde B_2 \left( \frac{1}{z}\right) \;,
\qquad
f_2(z) = \frac{1}{z} \int_{\frac{1}{z}}^{\infty} \widetilde B_2 (t)
\widetilde B_1 \left( \frac{t z}{y}\right) \frac{dt}{t^3}
\end{equation*}
\begin{equation*}
f_3(z) = \frac{1}{2y} \int_{\frac{1}{z}}^{\infty} \widetilde B_2 (t) \frac{dt}{t^2}
\quad\text{and}\quad
f_4(z) = \frac{z}{2y^2}
\sum_{m > \frac{1}{y}} \frac{\widetilde B_2 \left( \frac{my}{z}\right)}{m^2}\;,
\end{equation*}
for $0<z\leq 1$.
If these four functions are Lebesgue integrable on $[0,1]$
then, by \eqref{DefI_0} and Lemma~2.3, we will have:
\begin{equation}\label{Proof-s7A*1}
I_0 (x,y) = \sum_{j=1}^4 (-1)^j \int_0^x f_j(z) dz = \sum_{j=1}^4 (-1)^j I_j\quad\text{(say)} .
\end{equation}
\par
The function $\widetilde B_2$ is continuous and periodic on ${\mathbb R}$, and is therefore bounded.
It follows that $f_1$ and $f_3$ are continuous on $(0,1]$. For $0<z\leq 1$, one has
$f_1(z) =O(z)$ and $f_3 (z) = \frac{1}{2} y^{-1} \int_{1/z}^{\infty} O\left( t^{-2}\right) dt = O(z/y)$,
and so $f_1(z)$ and $f_3(z)$ are also continuous at the point $z=0$.
Thus $f_1$ and $f_3$ are continuous on $[0,1]$. Similarly, each
term in the infinite series $\sum_{m>1/y} m^{-2} z \widetilde B_2 (my/z)$ is
continuous (as a function of $z$) on the interval $(0,1]$, and tends
to the limit~$0$ as $z\rightarrow 0+$. Since this series
sums to $2 y^2 f_4(z)$, and converges uniformly for $0 < z\leq 1$, we may
conclude that the function $f_4$ is continuous on $[0,1]$.
By Lemma~2.3, Definitions~2.1 and \eqref{DefK},
we have $\sum_{j=1}^4 (-1)^j f_j(z) = K_2(z,y)\,$ ($0\leq z\leq 1$).
Therefore, since $f_1$, $f_3$ and $f_4$ are continuous on $[0,1]$,
and since the same is true of the function $z\mapsto K_2 (z,y)\,$
(see Corollary~2.8), we deduce that the function $f_2$ is continuous on $[0,1]$.
Thus the functions $f_1,\ldots ,f_4$ are integrable on $[0,1]$,
since they are continuous on this interval.
We therefore do have \eqref{Proof-s7A*1}.
\par
We shall complete the proof of the lemma by estimating
the integrals $I_1,\ldots ,I_4$.
We note, firstly, that
\begin{equation*}
I_1 = {\textstyle\frac{1}{2}} \widetilde B_1 \left( \frac{1}{y}\right)
\int_0^x z \widetilde B_2 \left( \frac{1}{z}\right) dz =
{\textstyle\frac{1}{2}} \widetilde B_1 \left( \frac{1}{y}\right)
\int_{\frac{1}{x}}^{\infty} t^{-3} \widetilde B_2 (t) dt
\end{equation*}
(by means of the substitution $z=1/t$).
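In more detail (this elementary computation is included here only for completeness): with $z=1/t$ one has $dz = - t^{-2} dt$, and the range $0<z\leq x$ corresponds to $\frac{1}{x}\leq t<\infty$, so that
\begin{equation*}
\int_0^x z \widetilde B_2 \left( \frac{1}{z}\right) dz
= \int_{\frac{1}{x}}^{\infty} t^{-1}\, \widetilde B_2 (t)\, t^{-2}\, dt
= \int_{\frac{1}{x}}^{\infty} t^{-3} \widetilde B_2 (t)\, dt\;,
\end{equation*}
the minus sign from $dz = -t^{-2} dt$ being absorbed by the reversal of the limits of integration.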
Since $\widetilde B_3$ is bounded, and
satisfies $\frac{d}{dt} \widetilde B_3 (t) = 3 \widetilde B_2 (t)\,$ ($t\in{\mathbb R}$),
we find (through integration by parts) that
\begin{align*}
\int_{\frac{1}{x}}^{\infty} t^{-3} \widetilde B_2 (t) dt
&= - {\textstyle\frac{1}{3}} x^3 \widetilde B_3\left( \frac{1}{x}\right)
+ \int_{\frac{1}{x}}^{\infty} t^{-4} \widetilde B_3 (t) dt \\
&= O\left( x^3\right) + \int_{\frac{1}{x}}^{\infty} O\left( t^{-4}\right) dt
\ll x^3\;.
\end{align*}
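The derivative identity used above may be checked directly from the explicit formulas: writing, as usual, $\widetilde B_2 (t) = \{t\}^2 - \{t\} + \frac{1}{6}$ for the second periodic Bernoulli function, one has, at every non-integer $t$,
\begin{equation*}
\frac{d}{dt}\, \widetilde B_3 (t) = 3 \{t\}^2 - 3 \{t\} + {\textstyle\frac{1}{2}}
= 3 \left( \{t\}^2 - \{t\} + {\textstyle\frac{1}{6}}\right) = 3 \widetilde B_2 (t)\;.
\end{equation*}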
It follows, since $\bigl| \widetilde B_1 (1/y)\bigr| \leq \frac{1}{2}$,
that we have:
\begin{equation}\label{Proof-s7A*2}
I_1 = O\left( x^3\right) \;.
\end{equation}
\par
Using the substitution $t=(zw)^{-1}$, we find that
\begin{equation*}
f_2 (z) = \int_0^1 zw \widetilde B_2 \left( \frac{1}{zw}\right) \widetilde B_1 \left( \frac{1}{yw}\right) dw\;.
\end{equation*}
By this and Fubini's theorem, we have
\begin{align*}
I_2 = \int_0^x f_2 (z) dz &=
\int_0^1 \left( \int_0^x z \widetilde B_2 \left( \frac{1}{zw}\right) dz\right) w \widetilde B_1 \left( \frac{1}{yw}\right) dw \\
&= \int_0^1 \left( \int_0^{wx} u \widetilde B_2 \left( \frac{1}{u}\right) du\right)
w^{-1} \widetilde B_1 \left( \frac{1}{yw}\right) dw \;.
\end{align*}
Therefore, by a calculation similar to that which gave us \eqref{Proof-s7A*2},
we obtain:
\begin{equation} \label{Proof-s7A*3}
I_2 = \int_0^1 O\left ( (wx)^3\right) \cdot w^{-1} \widetilde B_1 \left( \frac{1}{yw}\right) dw
=\int_0^1 O\left( x^3 w^2\right) dw \ll x^3\;.
\end{equation}
\par
Regarding $I_3$ (and $f_3(z)$), we note that integration by parts (twice) gives
\begin{align*}
\int_{\frac{1}{z}}^{\infty} \widetilde B_2 (t) \frac{dt}{t^2}
&= - {\textstyle\frac{1}{3}} z^2 \widetilde B_3 \left( \frac{1}{z}\right)
+ {\textstyle\frac{2}{3}} \int_{\frac{1}{z}}^{\infty} \widetilde B_3 (t) \frac{dt}{t^3} \\
&= - {\textstyle\frac{1}{3}} z^2 \widetilde B_3 \left( \frac{1}{z}\right)
- {\textstyle\frac{1}{6}} z^3 \widetilde B_4 \left( \frac{1}{z}\right)
+ {\textstyle\frac{1}{2}} \int_{\frac{1}{z}}^{\infty} \widetilde B_4 (t) \frac{dt}{t^4} \\
&= - {\textstyle\frac{1}{3}} z^2 \widetilde B_3 \left( \frac{1}{z}\right)
+O\left( z^3\right) \qquad\text{($0<z\leq 1$)} ,
\end{align*}
since the periodic Bernoulli function $\widetilde B_4$ is bounded.
Note also that one has
$\int_0^x z^2 \widetilde B_3 (1/z) dz = \int_{1/x}^{\infty} t^{-4} \widetilde B_3 (t) dt$
(by the substitution $z=1/t$),
and so we find (similarly to the above calculation) that one has
\begin{equation*}
\int_0^x z^2 \widetilde B_3 \left( \frac{1}{z}\right) dz
= - {\textstyle\frac{1}{4}} x^4 \widetilde B_4 \left( \frac{1}{x}\right) + O\left( x^5\right)
\ll x^4\;.
\end{equation*}
By the preceding observations, we have
\begin{equation}\label{Proof-s7A*4}
2y I_3 =\int_0^x 2y f_3(z) dz
= -{\textstyle\frac{1}{3}} \int_0^ x z^2\widetilde B_3 \left( \frac{1}{z}\right) dz
+ \int_0^x O\left( z^3\right) dz \ll x^4\;.
\end{equation}
\par
Turning, lastly, to $I_4$, we note that, since
the series $\sum_{m>1/y} m^{-2} z \widetilde B_2 (my/z)$ is uniformly convergent
for $0<z\leq x$, we may integrate term-by-term to get:
\begin{equation*}
I_4 = \int_0^x f_4 (z) dz
= \sum_{m > \frac{1}{y}} \frac{1}{2 y^2 m^2} \int_0^x
z \widetilde B_2 \left( \frac{my}{z}\right) dz\;.
\end{equation*}
Using the substitution $z=my/t$, followed by integration by parts (twice), one finds that
when $m>0$ one has:
\begin{align*}
\frac{1}{y^2 m^2}\int_0^x z \widetilde B_2 \left(\frac{my}{z}\right) dz
&= \int_{\frac{my}{x}}^{\infty} t^{-3} \widetilde B_2 (t) dt \\
&= - \sum_{r=3}^4\frac{1}{r} \left( \frac{my}{x}\right)^{\!-r} \widetilde B_r\left( \frac{my}{x}\right)
+ \int_{\frac{my}{x}}^{\infty} t^{-5} \widetilde B_4 (t) dt \\
&= - \frac{1}{3} \left( \frac{my}{x}\right)^{\!-3} \widetilde B_3\left( \frac{my}{x}\right)
+ O\left( \left( \frac{my}{x}\right)^{\!-4}\right) \;.
\end{align*}
Thus, since $\sum_{m>1/y} m^{-4} = O\left( y^3\right)$, we get:
\begin{equation}\label{Proof-s7A*5}
I_4 = - {\textstyle\frac{1}{6}} \left( \frac{x}{y}\right)^{\!\!3}
\sum_{m>\frac{1}{y}} m^{-3} \widetilde B_3\left( \frac{my}{x}\right)
+ O\left( \frac{x^4}{y}\right) \;.
\end{equation}
\par
By \eqref{Proof-s7A*1}--\eqref{Proof-s7A*5}, we conclude that
\begin{equation*}
I_0 (x,y) = - {\textstyle\frac{1}{6}} \left( \frac{x}{y}\right)^{\!\!3}
\sum_{m>\frac{1}{y}} m^{-3} \widetilde B_3\left( \frac{my}{x}\right)
+ O\left( \frac{x^4}{y}\right) + O\left( x^3\right) \;.
\end{equation*}
The lemma follows, since $\sum_{m>1/y} m^{-3} \widetilde B_3 (my/x) \ll \sum_{m>1/y} m^{-3} \ll y^2$
and $O\left( x^4 / y\right) + O\left( x^3\right) \ll (x+y) x^3 / y \leq 2 x^3 / y$.
\end{proof}
As a corollary of Lemmas~5.3 and 5.5, we obtain the following lemma
concerning the integral $I(x,y;w)$ that we have defined in \eqref{DefI}.
\begin{lemma} Let $0<x,y\leq 1$. Then
\begin{equation}\label{IxywBound}
I(x,y;w)\ll \min\left\{ \frac{x+y}{w} \,,\, \left( \frac{1}{x^2} + \frac{1}{y^2}\right)
\left( 1 + \log\frac{1}{w}\right) w\right\}
\end{equation}
for $0<w\leq 1$, and one has
\begin{equation}\label{IxywL1norm}
\int_0^1 \left| I(x,y;w)\right| dw \ll \left( 1 + \log\frac{1}{xy}\right) (x + y)\;.
\end{equation}
\end{lemma}
\begin{proof}
Let $w\in (0,1]$. Given the definitions \eqref{DefI} and \eqref{DefI_0},
we find, using integration by parts, that one has
\begin{equation*}
I(x,y;w) = y^{-2} I_0 (y , w) - x^{-2} I_0 (x, w) + \int_x^y 2 z^{-3} I_0 (z , w) dz\;.
\end{equation*}
By Lemma~5.5, each term of the form $I_0 (u, w)$ occurring in the last equation
is of size $O(u^3 / w)$. Thus we find that
$I(x,y;w) = O(y / w) - O( x / w) + \int_x^y O(1 / w) dz \ll (x+y)/w$.
By using Lemma~5.3, in place of Lemma~5.5, one obtains the different estimate:
\begin{align*}
I(x,y;w) &= O\left( y^{-2} \left( 1 + \log\frac{1}{w}\right) w \right)
- O\left( x^{-2}\left( 1 + \log\frac{1}{w}\right) w \right) \\
&\phantom{{=}} \ \, + \int_x^y O\left( z^{-3} \left( 1 + \log\frac{1}{w}\right) w \right) dz \\
&\ll \left( \frac{1}{y^2} + \frac{1}{x^2}\right) \left( 1 + \log\frac{1}{w}\right) w\;.
\end{align*}
This completes the proof of \eqref{IxywBound}.
\par
We now put
\begin{equation*}
\delta = \frac{xy}{\sqrt{x+y}} \;,
\end{equation*}
so that $0<\delta <x\sqrt{y}\leq 1$ and $1/\delta < 2/(xy)$.
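Both of these stated properties of $\delta$ are elementary consequences of $y < x+y \leq 2$: one has
\begin{equation*}
\delta = \frac{xy}{\sqrt{x+y}} < \frac{xy}{\sqrt{y}} = x\sqrt{y} \leq 1
\qquad\text{and}\qquad
\frac{1}{\delta} = \frac{\sqrt{x+y}}{xy} \leq \frac{\sqrt{2}}{xy} < \frac{2}{xy}\;.
\end{equation*}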
By \eqref{IxywBound}, we have
\begin{align*}
\int_0^1 \left| I(x,y;w)\right| dw
&= \int_0^{\delta} \left| I(x,y;w)\right| dw + \int_{\delta}^1 \left| I(x,y;w)\right| dw \\
&\ll \left( \frac{1}{x^2} + \frac{1}{y^2}\right) \int_0^{\delta} \left( 1 + \log\frac{1}{w}\right) w dw
+ (x + y) \int_{\delta}^1 \frac{dw}{w} \\
&= \left( \frac{y^2 + x^2}{x^2 y^2} \right)
\delta^2 \left( {\textstyle\frac{3}{4}} + {\textstyle\frac{1}{2}}\log\frac{1}{\delta}\right)
+ (x + y)\log\frac{1}{\delta} \\
&\ll \left( \frac{x^2 + y^2}{x+y} + x + y\right) \left( 1 + \log\frac{1}{xy}\right) \;.
\end{align*}
The result \eqref{IxywL1norm} follows.
\end{proof}
\begin{definitions}
We define
\begin{equation}\label{DefPhi_0}
\Phi_0 (x) = \int_0^x \phi(y) dy\qquad\text{($0\leq x\leq 1$)} .
\end{equation}
For $0<x,y\leq 1$ and $\sigma < 3$, we put
\begin{equation}\label{DefPhixysigma}
\Phi (x , y ; \sigma) = \int_x^y \frac{\phi (z) dz}{z^{\sigma}} \;.
\end{equation}
\end{definitions}
\begin{theorem}
One has
\begin{equation*}
\Phi_0 (x) \ll |\lambda|^3 x^3 \min\left\{ \lambda^2 \,,\, 1 + \log\frac{1}{x}\right\}
\qquad\text{($0<x\leq 1$)} .
\end{equation*}
\end{theorem}
\begin{proof}
Let $0<x\leq 1$. By \eqref{DefPhi_0}, \eqref{K2Eigenfunction} and Fubini's theorem for double integrals,
it follows that one has
\begin{equation}\label{Proof-s7B1}
\Phi_0 (x) = \int_0^x \left( \lambda^2 \int_0^1 K_2 (y , z) \phi(z) dz\right) dy
=\lambda^2 \int_0^1 I_0 (x , z) \phi(z) dz\;,
\end{equation}
where $I_0 (x , z)$ is given by \eqref{DefI_0}.
By \eqref{Proof-s7B1}, Theorem~3.2 and Lemma~5.5, we have
\begin{equation}\label{Proof-s7B2}
\Phi_0 (x) =\lambda^2 \int_0^1 O\left( x^3 z^{-1}\right) \cdot
O\left( |\lambda|^3 z\right) dz
\ll |\lambda|^5 x^3\;.
\end{equation}
\par
We now put:
\begin{equation*}
\delta = x^{3/2}\;,
\end{equation*}
so that $0<\delta\leq 1$. We observe that, by Lemmas~5.3 and 5.5,
one has
\begin{align*}
\int_0^1 \left| I_0 (x,z)\right| dz &= \int_0^{\delta} \left| I_0 (x,z)\right| dz
+ \int_{\delta}^1 \left| I_0 (x,z)\right| dz \\
&\ll\int_0^{\delta} \left( 1 + \log\frac{1}{z}\right) z dz
+ \int_{\delta}^1 x^3 z^{-1} dz \\
&= \left( {\textstyle\frac{3}{4}} + {\textstyle\frac{1}{2}}\log\frac{1}{\delta}\right) \delta^2
+ x^3 \log\frac{1}{\delta} \ll \left( 1 + \log\frac{1}{x}\right) x^3 \;.
\end{align*}
By this, \eqref{PhiBoundedUniformly} and \eqref{Proof-s7B1}, it follows that
\begin{equation}\label{Proof-s7B3}
\Phi_0 (x) \ll |\lambda|^3 \int_0^1 \left| I_0 (x,z)\right| dz
\ll |\lambda|^3 \left( 1 + \log\frac{1}{x}\right) x^3\;.
\end{equation}
The combination of \eqref{Proof-s7B2} and \eqref{Proof-s7B3} implies the theorem.
\end{proof}
\begin{corollary}
Let $\sigma < 3$. Then one has
\begin{equation*}
\Phi (x,y;\sigma) \ll_{\sigma} |\lambda|^3 (x+y)^{3-\sigma}
\min\left\{ \lambda^2 \,,\, 1 + \log\frac{1}{xy} \right\}
\qquad\text{($0<x,y\leq 1$)} .
\end{equation*}
\end{corollary}
\begin{proof}
Let $x,y\in (0,1]$. It follows from \eqref{DefPhixysigma}, by integration by parts,
that
\begin{equation*}
\Phi(x,y;\sigma)
= y^{-\sigma} \Phi_0 (y) - x^{-\sigma} \Phi_0 (x) + \sigma \int_x^y z^{-\sigma - 1} \Phi_0 (z) dz\;,
\end{equation*}
where $\Phi_0 (z)$ is given by \eqref{DefPhi_0}. By this and Theorem~5.8, we have both
\begin{align*}
\Phi(x,y;\sigma) &\ll |\lambda|^5 \left( y^{3-\sigma} + x^{3-\sigma}
+\left| \sigma \int_x^y z^{2-\sigma} dz\right|\right) \\
&= |\lambda|^5 \left( y^{3-\sigma} + x^{3-\sigma}
+ \frac{|\sigma|}{(3-\sigma)} \left| y^{3-\sigma} - x^{3-\sigma}\right|\right) \\
&\ll_{\sigma} |\lambda|^5 (x+y)^{3-\sigma}
\end{align*}
and
\begin{equation*}
\Phi(x,y;\sigma) \ll_{\sigma} |\lambda|^5 (x+y)^{3-\sigma}
\cdot \lambda^{-2} \left( 1 + \log\frac{1}{x} + \log\frac{1}{y}\right) \;.
\end{equation*}
The last two estimates imply the corollary.
\end{proof}
\begin{lemma}
One has
\begin{align*}
\frac{\phi(y)}{y} - \frac{\phi(x)}{x} &= {\textstyle\frac{1}{2}} \lambda \phi(1)
\left( \widetilde B_2 \left( \frac{1}{y}\right) - \widetilde B_2 \left( \frac{1}{x}\right)\right) \\
&\phantom{{=}} \ \, + O\left( |\lambda|^5 \left( \log |\lambda|\right)^2
(x+y)\left( 1 + \log\frac{1}{xy}\right)\right)
\end{align*}
for $0<x,y\leq 1$.
\end{lemma}
\begin{proof}
Let $0<x<y\leq 1$.
Recalling our Remarks following Lemma~3.4, we
note (in particular) that the function $\phi$ satisfies a uniform Lipschitz condition
of order $1$ on the interval $[x,y]$, and is (therefore) absolutely continuous on
this interval. Since the same is true of the function $z\mapsto z^{-1}$, it
follows that the function $z\mapsto z^{-1} \phi(z)$ satisfies a uniform Lipschitz
condition of order $1$ on $[x,y]$, and so (like $\phi$) is absolutely
continuous on this interval. Therefore
\begin{align*}
\frac{\phi(y)}{y} - \frac{\phi(x)}{x}
&= \int_x^y \left( \frac{d}{dz}\left( \frac{\phi(z)}{z}\right) \right) dz \\
&= \int_x^y \left( \frac{\phi'(z)}{z} - \frac{\phi(z)}{z^2} \right) dz
= \int_x^y \frac{\phi'(z) dz}{z} - \int_x^y \frac{\phi(z) dz}{z^2}\;.
\end{align*}
The last of the above integrals is $\Phi (x,y;2)\,$ (see the Definitions~5.7).
Thus, by Corollary~5.9, we have
\begin{equation}\label{Proof-s7C1}
\frac{\phi(y)}{y} - \frac{\phi(x)}{x}
= \int_x^y \frac{\phi'(z) dz}{z} + O\left( |\lambda|^5 y\right) \;.
\end{equation}
\par
By Lemma~5.1 and a simple substitution, one has
\begin{align}\label{Proof-s7C2}
\int_x^y \frac{\phi'(z) dz}{z}
&= \int_x^y \biggl( -\lambda \phi (1) z^{-2} \widetilde B_1 \left( \frac{1}{z}\right)
-\lambda^2 \phi(1) z^{-2} K_2 (z , 1) \nonumber\\
&\phantom{{=}} \ \, +\lambda^2 z^{-2} \int_0^1 K_2 (z , w) w\phi'(w) dw \biggr) dz \nonumber\\
&= \lambda \phi (1) \int_{\frac{1}{x}}^{\frac{1}{y}} \widetilde B_1 (t) dt
- \lambda^2 \phi(1) I(x,y;1)
+ \lambda^2 J(x,y)\;,
\end{align}
where
\begin{equation*}
J(x,y) := \int_x^y \left( \int_0^1 z^{-2} K_2 (z , w) w\phi'(w) dw \right) dz\;,
\end{equation*}
while $I(x,y;w)$ is as defined in \eqref{DefI}. In view of Theorems~2.7 and~4.11,
it follows by Fubini's theorem for double integrals that we have here:
$J(x,y) = \int_0^1 I(x,y;w) \cdot w\phi'(w) dw$.
By this, together with Theorem~4.9 and the estimate
\eqref{IxywL1norm} of Lemma~5.6, we find that
\begin{align}\label{Proof-s7C3}
J(x,y) &\ll |\lambda|^3 \left(\log |\lambda|\right)^2 \int_0^1 \left| I(x,y;w)\right| dw \nonumber\\
&\ll |\lambda|^3 \left(\log |\lambda|\right)^2\left( 1 + \log\frac{1}{x}\right) y\;.
\end{align}
\par
By \eqref{PhiBoundedUniformly} and the estimate \eqref{IxywBound} of Lemma~5.6,
we have $\phi(1) I(x,y;1)\ll |\lambda| y$.
This, combined with \eqref{Proof-s7C1}, \eqref{Proof-s7C2} and \eqref{Proof-s7C3},
shows that
\begin{equation*}
\frac{\phi(y)}{y} - \frac{\phi(x)}{x}
= \lambda \phi (1) \int_{\frac{1}{x}}^{\frac{1}{y}} \widetilde B_1 (t) dt
+ O\left( |\lambda|^5 \left( \lambda^{-2} + \log^2 |\lambda|\right)
\left( 1 + \log\frac{1}{x}\right) y\right) \;.
\end{equation*}
\par
This completes our proof of those cases of the lemma in which one has
$0<x<y\leq 1$: for one has
$\int_a^b \widetilde B_1 (t) dt = \frac{1}{2} \widetilde B_2 (b) - \frac{1}{2} \widetilde B_2 (a)\,$
($a,b\in{\mathbb R}$), and we know (see \eqref{SpectrumBound}) that $|\lambda|\geq 2$,
so that one has $\log^2 |\lambda| \gg 1 \gg \lambda^{-2}$.
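The first of these two facts is itself a routine check: since $\widetilde B_2$ is continuous on ${\mathbb R}$ and satisfies $\frac{d}{dt} \widetilde B_2 (t) = 2 \widetilde B_1 (t)$ at every non-integer $t$, the fundamental theorem of calculus gives
\begin{equation*}
\int_a^b \widetilde B_1 (t) dt
= {\textstyle\frac{1}{2}} \widetilde B_2 (b) - {\textstyle\frac{1}{2}} \widetilde B_2 (a)
\qquad\text{($a,b\in{\mathbb R}$)} .
\end{equation*}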
The cases where $0<y<x\leq 1$ follow trivially from the cases just established,
since swapping $x$ for $y$ in the equation occurring in the statement of the lemma
has the same effect as multiplying both sides of that equation by $-1$.
The remaining cases of the lemma (those where $x=y$) are trivially valid.
\end{proof}
\begin{theorem}
One has
\begin{equation*}
\frac{\phi(x)}{x} = {\textstyle\frac{1}{2}} \lambda \phi(1) \widetilde B_2 \left( \frac{1}{x}\right)
+ O\left( |\lambda|^5 \left( \log |\lambda|\right)^2 x \left( 1 + \log\frac{1}{x}\right)\right)
\end{equation*}
for $0<x\leq 1$.
\end{theorem}
\begin{proof}
Let $0<x\leq 1$. By Lemma~5.10, we have
\begin{multline*}
\frac{\phi(x)}{x} - {\textstyle\frac{1}{2}} \lambda \phi(1) \widetilde B_2 \left( \frac{1}{x}\right) \\
= \frac{\phi(y)}{y} - {\textstyle\frac{1}{2}} \lambda \phi(1) \widetilde B_2 \left( \frac{1}{y}\right)
+ O\left( |\lambda|^5 \left( \log |\lambda|\right)^2
x\left( 1 + \log\frac{1}{y}\right)\right)
\end{multline*}
for $0<y\leq x$. By multiplying both sides of the last equation by $y$, and then
integrating (with respect to $y$) over the interval $(0,x]$, we deduce that
\begin{align}\label{Proof-s7D1}
{\textstyle\frac{1}{2}} x\phi (x)
- {\textstyle\frac{1}{4}} \lambda \phi(1)x^2\widetilde B_2 \left( \frac{1}{x}\right)
&=\Phi_0 (x)
- {\textstyle\frac{1}{2}} \lambda \phi(1) \int_0^x y \widetilde B_2 \left( \frac{1}{y}\right) dy \nonumber\\
&\phantom{{=}}\ + O\left( |\lambda|^5 \left( \log |\lambda|\right)^2 x^3 \left( 1 + \log\frac{1}{x}\right) \right) ,
\end{align}
where $\Phi_0 (x)$ is given by \eqref{DefPhi_0}.
We recall, from our treatment of the integral $I_1$ in the proof of Lemma~5.5,
that one has here
$\int_0^x y \widetilde B_2 ( 1/y) dy \ll x^3$.
By Theorem~5.8, we have $\Phi_0 (x) \ll |\lambda|^5 x^3$.
Given \eqref{PhiBoundedUniformly} and
\eqref{SpectrumBound}, it follows from the last two observations that the entire right-hand side
of equation \eqref{Proof-s7D1} is of size
$O\left( |\lambda|^5 \left( \log |\lambda|\right)^2 x^3 \left( 1 + \log\frac{1}{x}\right) \right)$.
Thus we obtain an estimate,
\begin{equation*}
\left(\frac{\phi (x)}{x}
- {\textstyle\frac{1}{2}} \lambda \phi(1)\widetilde B_2 \left( \frac{1}{x}\right)\right)
\cdot {\textstyle\frac{1}{2}} x^2
\ll |\lambda|^5 \left( \log |\lambda|\right)^2 x^3 \left( 1 + \log\frac{1}{x}\right) ,
\end{equation*}
from which the required result follows.
\end{proof}
\begin{remarks}
It follows from Theorem~5.11 that the right-hand derivative $\phi_{+}' (0)$ is equal to $0$
if $\phi(1)=0$, but does not exist if $\phi(1)\neq 0$.
\end{remarks} | 0.0082 |
\section{From omniscience to homogeneous sets}\label{section:3LLPOimpliesRT2n}
Let $k \geq 2$ be a fixed natural number. We modify Erd\H{o}s-Rado's proof of $\rt^2_k$ (see e.g. \cite{KohlKreuz1}) to obtain a proof of $\koenig \implies \rt^2_k(\Sigma^0_0)$ over $\ha$. It is enough to prove that if $\{c_a \mid a \in \NN\}$ is a recursive family of recursive colorings, then a finite number of statements in $\koenig$ imply that there are predicates $C_0(.,c), \dots, C_{k-1}(.,c)$ such that,
\[\forall a ( \bigvee \bp{ C_i(., c_a) \mbox{ is infinite and homogeneous }\mid i < k}).\]
We first sketch Erd\H{o}s-Rado's proof of $\rt^2_k$. It consists in defining a suitable infinite $k$-ary tree $V$. We first remark that $\rt^1_k$ (Ramsey's Theorem for $k$ colors and points of $\NN$) is nothing but the Pigeonhole Principle: indeed, if we have a partition of $\NN$ into $k$-many color classes, then one of these classes is infinite. We now informally prove $\rt^2_k$ from $\rt^1_k$. Fix any coloring $f:[\NN]^2 \rightarrow k$ of all edges of the complete graph having support $\NN$. If $X$ is any subset of $\NN$, we say that $X$ defines a $1$-coloring of $X$ if for all $x \in X$, any two edges from $x$ to some $y$, $z$ in $X$ have the same color. If $X$ is infinite and defines a $1$-coloring, then, by applying $\rt^1_k$ to $X$, we produce an infinite subset $Y$ of $X$ whose points all have the same color $h$. By the way we color points, all edges between points of $Y$ have the color $h$. Thus, a sufficient condition for $\rt^2_k$ is the existence of an infinite set defining a $1$-coloring. In fact we need even less. Assume that $V$ is a graph whose ancestor relation is included in the complete graph $\NN$. We say that $V$ is an Erd\H{o}s' tree in $k$ colors (e.g. \cite[Definition 6.3]{Hclosure}) if for all $x \in V$, all $i=1, \dots, k$, and all descendants $y$, $z$ of the child number $i$ of $x$ in $V$, the edges from $x$ to $y$ and from $x$ to $z$ have the same color $i$. There is some Erd\H{o}s' tree recursively enumerable in the coloring (e.g. \cite{KohlKreuz1, RT22iff3LLPO}). Assume there exists some infinite $k$-ary Erd\H{o}s' tree $V$. Then $V$ has some infinite branch $r$ by K\"{o}nig's Lemma. The branch $r$ is totally ordered in $V$, therefore $r$ is a complete sub-graph of $\NN$. Thus, $r$ defines an infinite $1$-coloring and proves $\rt^2_k$. Therefore a sufficient condition for $\rt^2_k$ is the existence of an infinite $k$-ary Erd\H{o}s' tree $V$.
In \cite{Jockusch} Jockusch presented a modified version of the Erd\H{o}s-Rado proof. The Erd\H{o}s-Rado proof, Jockusch's proof and our proof differ in the definition of $V$, although up to this point they are the same. Erd\H{o}s and Rado introduce an ordering relation $\prec_E$ on $\NN$ which defines the proper ancestor relation of a $k$-ary tree $E$ on $\NN$. The $k$-coloring on edges of $\NN$, restricted to the set of pairs $x \prec_E y$, gives the same color to any two edges $x \prec_E y$ and $x \prec_E z$ with the same origin $x$. This defines an Erd\H{o}s' tree over $\NN$. In both the Erd\H{o}s-Rado and Jockusch proofs, an infinite homogeneous set is obtained from an infinite set of nodes of the same color in an infinite branch of the tree, and the Pigeonhole Principle is applied to a $\Delta^0_3$-branch obtained by K\"{o}nig's Lemma. To formalize these proofs in $\ha$ we would have to use the classical principle $\llpo{4}$: the Pigeonhole Principle for $\Delta^0_3$ predicates requires $\llpo{4}$. Our goal is to prove $\rt^2_k(\Sigma^0_0)$ using the weaker principle $\koenig$.
\begin{proposition}[$\ha + \Em_2$]\label{proposition: Erdossubtree}
For every $k \geq 2$ and for every recursive coloring $c_a: [\NN]^2 \to k$, there exists an Erd\H{o}s' tree $T$ for the coloring $c_a$ which satisfies the following properties:
\begin{itemize}
\item[{\normalfont(a)}] there exists some $\Delta^0_2$ predicate in $\ha$ which represents $x \in T$;
\item[{\normalfont(b)}] $T$ has a unique infinite branch $r$ defined by some predicate of $\ha$;
\item[{\normalfont(c)}] if there exist infinitely many edges with color $h$ in $T$, then there are infinitely many edges with color $h$ in $r$.
\end{itemize}
\end{proposition}
\begin{proof}
Let $k\geq 2$. The standard Erd\H{o}s' tree $(\NN, \prec_E)$ associated to a coloring $c_a : [\NN]^2 \to k$ is defined as a graph: the set of natural numbers equipped with the following relation.
\[
x \prec_E y \iff \forall z < x (z \prec_E x \implies c_a(\bp{z,x})=c_a(\bp{z,y})).
\]
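To illustrate the definition on a small example (ours, not part of the formal development): take $k=2$ and any coloring $c_a$. Then $0 \prec_E y$ for every $y>0$, since the condition on $z<0$ is vacuous, and
\[
1 \prec_E 2 \iff c_a(\bp{0,1}) = c_a(\bp{0,2}) ,
\]
that is, node $2$ becomes a descendant of $1$ exactly when its edge to $0$ has the same color as the edge from $0$ to $1$, and otherwise $2$ is attached below $0$ as the root of a new subtree.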
$(\NN, \prec_E)$ is recursively enumerable in the coloring, and recursive if the coloring is recursive. We would like to apply Theorem \ref{theorem:k+1} to produce an infinite branch $r$ as required, but Theorem \ref{theorem:k+1} requires a tree given as a set of branches. Thus, we have to prove in $\ha$ that given a graph-tree $(\NN,\prec_E)$ we can extract a tree $(\tilde{E}, \prec)$ where $\tilde{E} \subset \List(k)$ which keeps all the information we need. We define $(a_0, \dots, a_j) \in \tilde{E}$ if and only if there are nodes $x_0, \dots, x_{j+1} \in \NN$ such that for every $i \in j+1$, $c_a(\bp{x_i, x_{i+1}})=a_i$ and $x_{i+1}$ is a $\prec_E$-child of $x_i$. $(\NN, \prec_E)$ contains the value of each node while the tree $(\tilde{E}, \prec)$ contains only the color of each edge, but note that given both $(\NN, \prec_E)$ and $(\tilde{E}, \prec)$, we can recursively translate any subtree of $(\tilde{E}, \prec)$ into a subtree of $(\NN, \prec_E)$.
By applying Theorem \ref{theorem:k+1} to the $k$-ary tree $(\tilde{E}, \prec)$, the subtree $T$ of $(\NN, \prec_E)$ which corresponds to $f_C''\NN$ is $\Delta^0_2$ and has exactly one infinite branch, the rightmost.
\end{proof}
Let $T$ be the witness of Proposition \ref{proposition: Erdossubtree}. We may prove that there are infinitely many nodes of the same color in the infinite branch of $T$ using only $\koenig$. Any infinite subset of the infinite branch of $T$ with all nodes of the same color will be a monochromatic set for the original graph. Moreover, our proof recursively defines $k$-many monochromatic $\Delta^0_3$-sets, one for each color, which cannot all be finite, even if we cannot decide which of them is the infinite one.
\begin{theorem}\label{KoenigRamsey}
Let $k \geq 2$. Then $\koenig$ implies $\rt^2_k(\Sigma^0_0)$ in $\ha$.
\end{theorem}
\begin{proof}
Given $T$ the witness of Proposition \ref{proposition: Erdossubtree}, we can prove Ramsey's Theorem for pairs and $k$-many colors in $\llpo{3}$. We have to prove that the infinite branch of $T$ (which exists and is unique by Proposition \ref{proposition: Erdossubtree}.b) has infinitely many pairs $x \prec_T y$ of color $h$. By Proposition \ref{proposition: Erdossubtree}.c, it is enough to prove that $T$ has infinitely many pairs $x \prec_T y$ of color $h$, for some $h$. By Proposition \ref{proposition: Erdossubtree}.a, $x \in T$ is a $\Delta^0_2$ predicate. Thus, if we apply the Pigeonhole Principle for $\Sigma^0_2$ predicates $(k-1)$-many times, we deduce that $T$ has infinitely many edges in color $h$ for some $h \in k$. However, the Pigeonhole Principle for $\Sigma^0_2$ predicates is a classical principle, therefore we have to derive the particular instance we use from $\koenig$.
\begin{claim}\label{principiocassetti}
$\koenig$ implies the Pigeonhole Principle for $\Sigma^0_2$.
\end{claim}
\begin{proof}[Proof of Claim \ref{principiocassetti}]
The Infinite Pigeonhole Principle for $\Sigma^0_2$ predicates can be stated as follows:
\[ \forall x \ \exists z \ [z \geq x \ \wedge \ (P(z,a) \vee Q(z,a))]\]
\[\implies \forall x \ \exists z \ [z\geq x \ \wedge \ P(z,a)] \ \vee \ \forall x \ \exists z [z \geq x \wedge Q(z,a)],\]
with $P$ and $Q$ $\Sigma^0_2$ predicates.
We prove that the formula above is equivalent in $\ha$ to some formula of $\koenig$. Let
\[
\begin{aligned}
H(x,a) &:= \exists z \ [z \geq x \wedge P(z,a)]\\
K(x,a) &:= \exists z \ [z \geq x \wedge Q(z,a)].
\end{aligned}
\]
In fact both $H$ and $K$ are equivalent in $\ha$ to $\Sigma^0_2$ formulas $H', \ K'$. By intuitionistic prenex properties (see \cite{LICS04})
\[\exists z [z \geq x \ \wedge \ (P(z,a) \vee Q(z,a))]\]
is equivalent to
\[\exists z [z \geq x \ \wedge \ P(z,a)] \ \vee \ \exists z [z \geq x \ \wedge \ Q(z,a)].\]
The formula above is equivalent to $H' \vee K'$. Thus, any formula of Pigeonhole Principle with $P$, $Q$ $\Sigma^0_2$ is equivalent in $\ha$ to
\[
\forall x (H'(x,a) \vee K'(x,a)) \implies \forall x H'(x,a) \vee \forall x K'(x,a),
\]
which is the instance of $\koenig$ with $H', \ K'$.
\end{proof}
Thus, there exist infinitely many edges of $r$ in color $h$, for some $h \in k$. The smaller nodes of these edges define a monochromatic set for the original graph, since given an infinite branch $r$ of an Erd\H{o}s' tree and $x \in r$, if there exists $y \in r$ such that $x \prec_T y$ and $\{x,y\}$ has color $h$, then for every $z \in r$ such that $x \prec_T z$, the edge $\{x,z\}$ has color $h$. Thus we can devise a coloring on $r$, giving color $h$ to $x$ if $\{x,y\}$ has color $h$, with $y$ the child of $x$ in $r$. After that, every infinite set of points with the same color in $r$ defines an infinite set with all edges of the same color, and this proves Ramsey's Theorem for pairs in $k$-many colors in $\ha$ from the assumption $\koenig$.
\end{proof} | 0.001469 |
Sunday, July 4th, 2010
Learning how to recognize a fraudulent slip and fall claim can protect your company from a multi-million dollar lawsuit. With a poor economy and high unemployment rates, insurance companies are seeing a dramatic rise in questionable slip and fall claims. In some cases, desperate people see faking an injury as a quick way to make a buck. What scammers don't consider is they could potentially face jail time and thousands of dollars in legal fees if their scheme is uncovered. Fake claims are submitted by all types of people. Grandparents, attorneys, and organized crime rings have all perpetrated slip and fall scams. Fortunately, attorneys for insurance companies have received extensive training on how to recognize a fraudulent slip and fall claim.
It's not uncommon for an unscrupulous person to fake a slip and fall on a damaged sidewalk or any area with a visible defect. Knowing how to recognize a fraudulent slip and fall claim can prevent a scam from happening to you. Fraudulent slip and fall cases frequently occur in areas with limited surveillance coverage. Bathrooms, corridors, entryways, stairways, ramps, and parking lots are high-risk places for fraudulent claims. Surveillance cameras are an excellent deterrent to fake claims. In many cases, surveillance cameras have caught scammers in the act of staging their injury.
Many fraudulent slip and fall claims are homegrown operations. It's common for parents and grandparents to recruit children and grandchildren to act as supporting witnesses to an accident that never happened. For this reason, witnesses within families are unreliable and can be red-flags indicating a fraudulent claim. Organized crime groups and attorneys have also orchestrated fake slip and fall claims by hiring recruits.
Scammers will stop at nothing to pull off a fake slip and fall. These "victims" intentionally pour fluids on the ground or drop grapes, marbles, etc., in order to fake their injuries. Some scammers have gone as far as pretending a genuine injury occurred at a different location. Here are a few more tips for learning how to recognize a fraudulent slip and fall claim: the claimant has extensive knowledge of the insurance claims process, provides false contact information, refuses to provide work history for lost wage compensation, lacks credible witnesses, hires an attorney immediately, makes threatening demands for an immediate cash settlement, has a history of questionable claims, or claims to have soft tissue injuries, dizziness, headaches, and other ailments that are hard to prove.
It happens nationwide. A parent gets out of the car, forgetting the child in the backseat. Hours later, the child is dead.
But what could have happened, had somebody seen the little one?
Currently, no state has legislation allowing people who see the problem to break in. On July 1, that will change, as Tennessee enacts a bill allowing people to break children out of hot cars.
It's backed by Rep. David Hawk. "No other state has legislation similar to this, so this is some groundbreaking Good Samaritan legislation," he says.
The bill outlines several steps if someone does see a child locked in a car, telling them to call law enforcement first and let them know of the intent to break into a vehicle. A person can then break any window and get the child out with limited liability.
Hawk says children die in cars in Tennessee every summer.
Dr. Bruce Gibbon from the Bristol Regional Medical Center says temperatures can reach 160 degrees. "It continues to absorb heat from the sun, depending on the color of the car and how well ventilated it is. The temperature is going to go up very rapidly," he says.
And those temperatures can kill or seriously harm children. "It affects organs like the liver, their kidneys, their brain; they get dehydrated quickly, they get overheated and it causes damage that may not be apparent right away," Dr. Gibbon says.
Hawk hopes that this legislation will fix the problem. "Hopefully this will go a long way to saving a child," he says.
Next for Hawk, he hopes to enact similar legislation for pets within the next two years. | 0.987922 |
Coomatec DVRCam W/R is an outdoor camera which includes an all-in-one security system. It can record without the need for an additional SD Card DVR system. DVRCam WR Specification. Video Camera Basics: Dimensions: 14cm x 12cm x 10cm; Gross Weight: 620g; Power Adapter: Input AC 100~240V, Output DC 5V, 1A; Micro processor: ARM9 3
Coomatec DVRCam Dome is the first camera which includes an all-in-one security system. It can record without the need for an additional DVR system and can be used in a convenience store, office or cabin. DVRCam Dome Specification. Video Camera Basics: Dimensions: 12cm x 12cm x 10cm; Gross Weight: 320g; Power Adapter: Input AC 100~240
World's simplest security system: DVRCam Dome is the first camera which includes an all-in-one security system. It can record without the need for an additional DVR system. Features: Plug and play: there is almost no complex setup. It is so easy that everyone can use it. No wiring, no install. Keep in night: as the ligh
The world's most simple surveillance system: Coomatec DVRCam Dome is the first camera which includes an all-in-one security system. It can record without the need for an additional SD Card DVR system. DVRCam Dome HD Specification. Video Camera Basics: Dimensions: 12cm x 12cm x 10cm; Gross Weight: 320g; Power Adapter: Input AC 100~2
work independently:no need for pc and dvr. dvrcam wr dvr cctv camera works on its own. only you need to do is to provide a good quality mirco sd/tf card.infrared night vision:2pcs 3rd generation ir led array of dvrcam wr dvr cctv camera are very effective in the darkness.note: it is normal that the
Worlds simplest security system: DVRCam dome is the first camera which include all in one security system. It can record without for an additional DVR system.DVRCam WR LR specificationVideo Camera BasicsDimensions: 28*13*9cmGross Weight: 820gPower AdapterInput: C 100~240V Output: DC 12V, 1AMicro pro
DVRCam WR CCD specificationVideo Camera BasicsDimensions:28*13*9cmGross Weight:820gPower AdapterInput: C 100~240V Output: DC 12V, 1AMicro processor:ARM9 32-bit microprocessor coreTF Card Support:Up to 32GBWarranty:Two-years partsVideo Camera DetailsType:WhiteSensor:Sony CCD 700TVLDigital Video Forma
DVRCam dome HD specificationVideo Camera BasicsDimensions:28cmx21cmx14cmGross Weight:800gPower AdapterInput: C 100~240V Output: DC 12V, 1AMicro processor:ARM9 32-bit microprocessor coreTF Card Support:Up to 32GBWarranty:Two-years partsVideo Camera DetailsType:whiteDigital Video Format:H.264HD (1080
baby monitor based on wi-fi peer to peer technology. it helps you to view your baby or pets not only cooking or watching tv at home, but also shopping or running outside. c101 is a day vision wificam, not used in the darkness.what can you do?use the wificam by i , android or computer to watch over r
★ Maximum talking range of 500 meters between two riders; real two-way wireless communication ★ Communication among three riders via the Bluetooth system ★ Works at riding speeds up to 120 km/h; up to 7 hours of talk time ★ Auto-answering of cell phone calls ★ Advanced A2DP & EDR Bluetooth profile ★ Stereo music/audio function (transmits from A2DP…
TITLE: Equivalence of metrics by showing they have the same convergent sequences
QUESTION [0 upvotes]: Let $d(x,y)= |x-y| +|x^2-y^2|$, prove that in $\mathbb{R}$, this metric generates the same topology as the usual topology on the reals.
I thought that one way to prove it would be picking an arbitrary sequence $(x_n)$ and seeing that it converges on $(\mathbb{R},d)$ if and only if it converges in $(\mathbb{R},\tau_u)$. I've managed to prove that if $(x_n)$ converges in $(\mathbb{R},d)$, then it converges in the $(\mathbb{R},\tau_u)$, but I'm stuck in proving the other direction.
Let $(x_n)\to \overline{x}$ in $(\mathbb{R},\tau_u)$, that is, for every $\varepsilon >0$, there exists some $N \in \mathbb{N}$ such that if $n>N$, then
$$|x_n-\overline{x}|<\varepsilon$$
Then
$$d(x_n,\overline{x})=|x_n-\overline{x}|+|x_n^2-\overline{x}^2|=|x_n-\overline{x}|\,(1+|x_n+\overline{x}|)<\varepsilon(1+|x_n+\overline{x}|)$$
but I don't know how to go further from here. Thanks in advance!
REPLY [0 votes]: If $|x-x_n|$ converges to $0$ then $|x^2-x_n^2|$ also converges to $0$ because for all but finitely many $n$ we have $|x-x_n|<1$, while $|x-x_n|<1\implies |x_n|<|x|+1,$ so for all but finitely many $n$ we have $$|x^2-x_n^2|=|x-x_n|\cdot |x+x_n|\leq |x-x_n|\cdot (|x|+|x_n|)\leq |x-x_n|(2|x|+1).$$ If $A_n$ and $B_n$ converge to $0$ then $A_n+B_n$ converges to $0.$ So with $A_n=|x-x_n|$ and $B_n=|x^2-x_n^2|,$ we have : If $|x-x_n|$ converges to $0$ then $d(x,x_n)$ converges to $0.$
If $a_n$ converges to $0$ with $a_n\geq b_n\geq 0$ then $b_n$ converges to $0.$ So let $a_n=d(x,x_n)$ and $b_n=|x-x_n|,$ and we have: If $d(x,x_n)$ converges to $0$ then $|x-x_n|$ converges to $0.$
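As a quick numerical sanity check (not a substitute for the proof), one can verify the squeeze $|x-x_n|\le d(x,x_n)\le(2|x|+2)\,|x-x_n|$ implied by the answer along a sample sequence; the sequence $x_n=\overline{x}+1/n$ and the base point below are illustrative choices:

```python
def d_usual(x, y):
    return abs(x - y)

def d_new(x, y):
    # The metric from the question: d(x, y) = |x - y| + |x^2 - y^2|.
    return abs(x - y) + abs(x**2 - y**2)

x_bar = 3.0
for n in (10, 100, 1000, 10000):
    x_n = x_bar + 1.0 / n
    lo = d_usual(x_n, x_bar)
    # Bound from the answer, valid once |x - x_n| < 1: |x^2 - x_n^2| <= |x - x_n| (2|x| + 1).
    hi = (2 * abs(x_bar) + 2) * d_usual(x_n, x_bar)
    assert lo <= d_new(x_n, x_bar) <= hi
```

Since the two metrics are squeezed against each other by constants (locally), they have exactly the same convergent sequences.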
TITLE: Is my proof that matrices are diagonalizable iff they have a basis of eigenvectors correct?
QUESTION [2 upvotes]: Question: Show that an $n \times n$ matrix $A$ is diagonalizable if and only if $\mathbb R ^n$ has a basis of eigenvectors of $A$.
My try: If $A$ is diagonalizable, there exists a nonsingular matrix $P$ such that $P^{-1}AP=D$, where $D$ is diagonal containing the eigenvalues of $A$ with corresponding eigenvectors being the columns of $P$. Also, $P^{-1}AP=D$ means that $AP=PD$, that is, $A$ acts in the basis formed by columns $P$ as diagonal matrix $(D)$. If, however, $\mathbb R^n$ does not have a basis of eigenvectors of $A$, it would not be possible to have eigenvectors as columns of $P$. Thus, $\mathbb R^n$ needs to have a basis of eigenvectors of $A$. Hence proved.
REPLY [0 votes]: If $\mathbb R ^n$ has a basis $\{p_1,\dots,p_n\}$ of eigenvectors of $A$, you can store them as columns in the matrix $P\in\mathbb{R}^{n\times n}$.
Since $\{p_1,\dots,p_n\}$ is a basis of $\mathbb R^n$, the columns of $P$ are linearly independent, so $P$ is invertible. (Note that a basis of eigenvectors need not be orthonormal, so in general $P^TP\neq I$; invertibility is all that is needed.)
Since $p_i$ is eigenvector for any $i=1,\dots n$ to some eigenvalue $\lambda_i$, it holds $Ap_i=\lambda_ip_i$. This implies $AP=PD$ where $D$ is the diagonalmatrix which stores $\lambda_i$ at the $i$-th diagonal entry by definition of matrix multiplication.
Multiplying $AP=PD$ by $P^{-1}$ from the left yields $P^{-1}AP=D$, which is the assertion.
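To make the converse concrete, here is a small numeric illustration (a hypothetical 2×2 example in plain Python; with NumPy one would simply call `np.linalg.eig`):

```python
# A = [[2, 1], [1, 2]] has eigenpairs (1, [1, -1]) and (3, [1, 1]).
A = [[2.0, 1.0], [1.0, 2.0]]
P = [[1.0, 1.0], [-1.0, 1.0]]   # eigenvectors as columns: a basis, not orthonormal
D = [[1.0, 0.0], [0.0, 3.0]]

def matmul(X, Y):
    # 2x2 matrix product.
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    # Inverse of a 2x2 matrix; only needs a nonzero determinant,
    # which holds exactly because the columns form a basis.
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det, M[0][0] / det]]

assert matmul(inv2(P), matmul(A, P)) == D   # P^{-1} A P = D
```

The key point mirrored by the code: invertibility of $P$ (not orthonormality of its columns) is what turns $AP=PD$ into $P^{-1}AP=D$.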
Map of Zurmat, Paktia Province, Afghanistan
Other names: Zarmal, Zormat, Zurmat, Zuṟmat, زرمت
Zurmat
Zurmat is a populated place.
- Latitude/longitude:
33.43778/69.02774
- Altitude: 0 m
- Population: 0
- Time zone:
Asia/Kabul +4.5
- Local time: 21:37
- Sunrise: 05:38
- Sunset: 17:56
/-
Copyright (c) 2021 Adam Topaz. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Adam Topaz
-/
import category_theory.limits.shapes.products
import category_theory.functor.epi_mono
/-!
# Adjunctions involving evaluation
We show that evaluation of functors have adjoints, given the existence of (co)products.
-/
namespace category_theory
open category_theory.limits
universes v₁ v₂ u₁ u₂
variables {C : Type u₁} [category.{v₁} C] (D : Type u₂) [category.{v₂} D]
noncomputable theory
section
variables [∀ (a b : C), has_coproducts_of_shape (a ⟶ b) D]
/-- The left adjoint of evaluation. -/
@[simps]
def evaluation_left_adjoint (c : C) : D ⥤ C ⥤ D :=
{ obj := λ d,
{ obj := λ t, ∐ (λ i : c ⟶ t, d),
map := λ u v f, sigma.desc $ λ g, sigma.ι (λ _, d) $ g ≫ f,
map_id' := begin
intros, ext ⟨j⟩, simp only [cofan.mk_ι_app, colimit.ι_desc, category.comp_id],
congr' 1, rw category.comp_id,
end,
map_comp' := begin
intros, ext, simp only [cofan.mk_ι_app, colimit.ι_desc_assoc, colimit.ι_desc],
congr' 1, rw category.assoc,
end },
map := λ d₁ d₂ f,
{ app := λ e, sigma.desc $ λ h, f ≫ sigma.ι (λ _, d₂) h,
naturality' := by { intros, ext, dsimp, simp } },
map_id' := by { intros, ext x ⟨j⟩, dsimp, simp },
map_comp' := by { intros, ext, dsimp, simp } }
/-- The adjunction showing that evaluation is a right adjoint. -/
@[simps unit_app counit_app_app]
def evaluation_adjunction_right (c : C) :
evaluation_left_adjoint D c ⊣ (evaluation _ _).obj c :=
adjunction.mk_of_hom_equiv
{ hom_equiv := λ d F,
{ to_fun := λ f, sigma.ι (λ _, d) (𝟙 _) ≫ f.app c,
inv_fun := λ f,
{ app := λ e, sigma.desc $ λ h, f ≫ F.map h,
naturality' := by { intros, ext, dsimp, simp } },
left_inv := begin
intros f,
ext x ⟨g⟩,
dsimp,
simp only [colimit.ι_desc, limits.cofan.mk_ι_app, category.assoc, ← f.naturality,
evaluation_left_adjoint_obj_map, colimit.ι_desc_assoc, cofan.mk_ι_app],
congr' 2,
rw category.id_comp
end,
right_inv := λ f, by { dsimp, simp } },
hom_equiv_naturality_left_symm' := by { intros, ext, dsimp, simp },
hom_equiv_naturality_right' := by { intros, dsimp, simp } }
instance evaluation_is_right_adjoint (c : C) :
is_right_adjoint ((evaluation _ D).obj c) :=
⟨_, evaluation_adjunction_right _ _⟩
lemma nat_trans.mono_iff_app_mono {F G : C ⥤ D} (η : F ⟶ G) :
mono η ↔ (∀ c, mono (η.app c)) :=
begin
split,
{ introsI h c,
exact (infer_instance : mono (((evaluation _ _).obj c).map η)) },
{ introsI _,
apply nat_trans.mono_app_of_mono }
end
end
section
variables [∀ (a b : C), has_products_of_shape (a ⟶ b) D]
/-- The right adjoint of evaluation. -/
@[simps]
def evaluation_right_adjoint (c : C) : D ⥤ C ⥤ D :=
{ obj := λ d,
{ obj := λ t, ∏ (λ i : t ⟶ c, d),
map := λ u v f, pi.lift $ λ g, pi.π _ $ f ≫ g,
map_id' := begin
intros, ext ⟨j⟩, dsimp,
simp only [limit.lift_π, category.id_comp, fan.mk_π_app],
congr, simp,
end,
map_comp' := begin
intros, ext ⟨j⟩, dsimp,
simp only [limit.lift_π, fan.mk_π_app, category.assoc],
congr' 1, simp,
end },
map := λ d₁ d₂ f,
{ app := λ t, pi.lift $ λ g, pi.π _ g ≫ f,
naturality' := by { intros, ext, dsimp, simp } },
map_id' := by { intros, ext x ⟨j⟩, dsimp, simp },
map_comp' := by { intros, ext, dsimp, simp } }
/-- The adjunction showing that evaluation is a left adjoint. -/
@[simps unit_app_app counit_app]
def evaluation_adjunction_left (c : C) :
(evaluation _ _).obj c ⊣ evaluation_right_adjoint D c :=
adjunction.mk_of_hom_equiv
{ hom_equiv := λ F d,
{ to_fun := λ f,
{ app := λ t, pi.lift $ λ g, F.map g ≫ f,
naturality' := by { intros, ext, dsimp, simp } },
inv_fun := λ f, f.app _ ≫ pi.π _ (𝟙 _),
left_inv := λ f, by { dsimp, simp },
right_inv := begin
intros f,
ext x ⟨g⟩,
dsimp,
simp only [limit.lift_π, evaluation_right_adjoint_obj_map,
nat_trans.naturality_assoc, fan.mk_π_app],
congr,
rw category.comp_id
end },
hom_equiv_naturality_left_symm' := by { intros, dsimp, simp },
hom_equiv_naturality_right' := by { intros, ext, dsimp, simp } }
instance evaluation_is_left_adjoint (c : C) :
is_left_adjoint ((evaluation _ D).obj c) :=
⟨_, evaluation_adjunction_left _ _⟩
lemma nat_trans.epi_iff_app_epi {F G : C ⥤ D} (η : F ⟶ G) :
epi η ↔ (∀ c, epi (η.app c)) :=
begin
split,
{ introsI h c,
exact (infer_instance : epi (((evaluation _ _).obj c).map η)) },
{ introsI,
apply nat_trans.epi_app_of_epi }
end
end
end category_theory
TITLE: Simple use of a permutation rule in calculating probability
QUESTION [2 upvotes]: I have the following problem from DeGroot:
A box contains 100 balls, of which 40 are red. Suppose that the balls are drawn from the box one at a time at random, without replacement. Determine (1) the probability that the first ball drawn will be red. (2) the probability that the fifteenth ball drawn will be red, and (3) the probability that the last ball drawn will be red.
For the first question:
the probability that the first ball drawn will be red should be:
$$\frac{40!}{(100-40)!}$$
the probability that the fifteenth ball will be red:
$$\frac{60! \times 40!}{(60-14)! \times (40-1)}$$
the probability that the last ball drawn will be red:
$$\frac{60! \times 40!}{100}$$
Am I on the right track with any of these?
REPLY [2 votes]: I find it instructive to lead students through the horrible, brute
force calculation before teaching them that the end result is best
understood using symmetry.
Let's calculate the probability that the 15th ball is red, taking
into account all the balls drawn previously. For example, one of
the outcomes that make up our event is
$$RNNRNNNRRRNRNRR$$
where $R$ means a red ball and $N$ a non-red ball.
The probability of getting this particular outcome is
$${40\over 100}\cdot{60\over 99}\cdot{59\over 98}\cdots {33\over 86}={(40)_8 \, (60)_7\over (100)_{15}}.$$
We simplify the product of fractions using the Pochhammer symbol. The "7" and "8" are the number of $N$s and $R$s in the outcome.
Adding the probabilities of all such outcomes gives
$$\mathbb{P}(\mbox{15th ball is red})={1\over (100)_{15}}\sum_k {14\choose k} (40)_{15-k}\ (60)_k $$
$$={1\over (100)_{15}}\ 40\ \sum_k{14\choose k} (39)_{14-k}\ (60)_k ={1\over (100)_{15}}\ 40\ (99)_{14} ={40\over 100}.$$ Amazing!
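The closing identity can be checked exactly in a few lines of Python (a verification sketch added here, not part of the original answer), using falling factorials $(x)_m$:

```python
from fractions import Fraction
from math import comb

def falling(x, m):
    """Falling factorial (x)_m = x (x-1) ... (x-m+1)."""
    out = 1
    for i in range(m):
        out *= x - i
    return out

# k = number of non-red balls among the first 14 draws; the 15th draw is red.
total = sum(comb(14, k) * falling(40, 15 - k) * falling(60, k) for k in range(15))
prob_15th_red = Fraction(total, falling(100, 15))
assert prob_15th_red == Fraction(40, 100)   # exactly 2/5, the same as for the first draw
```

Exact rational arithmetic via `Fraction` avoids any floating-point doubt about the brute-force sum collapsing to $40/100$.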
Lexie Elkins First Pro Hit Was A Two Run Blast
Lexie Elkins is the greatest player to ever suit up in catcher's gear for Ragin' Cajuns Softball, and she started her professional career off in classic Lexie fashion by smashing a ball over the wall.
She hit 75 career home runs as a Ragin' Cajun, so it's fitting her first professional hit for the Pennsylvania Rebellion was a two-run bomb.
To sweeten the deal, the Rebellion won the game too. A win and a home run in your first career game at the pro level. Not too shabby, Lexie.
If you want to keep up with Lexie's career, you can watch her games online with a subscription to National Pro Fastpitch TV (NPF TV). If you have a little spare change to throw at it, Elkins and the rest of her teammates would surely love the support from the softball world.
Keep making Cajun Nation proud, Lexie, and keep crushing those softballs. Not everybody gets to live their dream every day, so make the most of every opportunity and remember your Ragin' Cajuns family will always have your back.
(P.S.-We're working on getting footage of the shot too from the Rebellion, so we'll make sure to update you once it finds its way back to Lafayette...)
TITLE: Hilbert space of quantum gravity: bulk $\otimes$ horizon
QUESTION [5 upvotes]: I was reading a paper dealing with the Hilbert space of quantum gravity (or more precisely what should it look like considering what we know from QM and GR) ref: http://arxiv.org/abs/1205.2675 and the author writes the following:
$${\cal{H}_M} = {\cal{H_{M,\,\textrm{bulk}}}}\otimes{\cal{H_{M,\,\textrm{horizon}}}}$$ for a specific manifold $\cal{M}$. I know very little about the holographic principle and the AdS-CFT correspondence but isn't it a redundant description? If there is a duality between the gravitational theory in the bulk and the CFT on the boundary, knowing one means knowing the other, so why can't we restrict ourselves to one of the Hilbert spaces? Moreover, the author writes, a couple of lines after this first element, that the two Hilbert spaces have the same dimension ( $\textrm{exp}({\frac{{\cal{Area}}}{4}})$ ) so they are totally equivalent, as a complex Hilbert space is only defined by its dimension.
REPLY [5 votes]: Yes it is redundant. This is exactly what AdS/CFT is not. The degrees of freedom of the bulk are the degrees of freedom of the horizon. This is also why condensed matter analogs are rare--- the most common idea of identifying AdS/CFT boundary theories with condensed matter boundary theories is wrong, because in traditional condensed matter systems, the boundary degrees of freedom are in addition to the bulk degrees of freedom, they are not dual to these degrees of freedom, as in AdS/CFT. The exception, where the condensed matter analog is right, is where the bulk theory is topological, like the Chern-Simons theory for the quantum hall fluid, where you can consider edge-states as describing interior physics. There might be more analogs of this sort. One has to be careful, because a lot of people have this wrong picture of AdS/CFT in their head, that it's boundary stuff in addition to bulk stuff.
TITLE: Is the sum of all natural numbers $-\frac{1}{12}$?
QUESTION [5 upvotes]: My friend showed me this youtube video in which the speakers present a line of reasoning as to why
$$
\sum_{n=1}^\infty n = -\frac{1}{12}
$$
My reasoning, however, tells me that the previous statement is incorrect:
$$
\sum_{n=1}^\infty n = \lim_{k \to \infty} \sum_{n=1}^k n = \lim_{k \to \infty}\frac{k(k+1)}{2} = \infty
$$
Furthermore, how can it be that the sum of a set of integers is not an integer? Even more, how can the sum of a set of positive numbers be negative? These two ideas lead me to think of inductive proofs as to why the first statement is incorrect.
Which of the two lines of reasoning is correct and why? Are there any proven applications (i.e. non theoretical) which use the first statement?
REPLY [6 votes]: [This is a slightly modified version of a previous answer.]
You are right to be suspicious. We usually define an infinite sum by taking the limit of the partial sums. So
$$1+2+3+4+5+\dots $$
would be what we get as the limit of the partial sums
$$1$$
$$1+2$$
$$1+2+3$$
and so on. Now, it is clear that these partial sums grow without bound, so traditionally we say that the sum either doesn't exist or is infinite.
So, to make the claim in your question title, you must adopt a nontraditional method of summation. There are many such methods available, but the one used in this case is Zeta function regularization. That page might be too advanced, but it is good to at least know the name of method under discussion.
You ask why this nontraditional approach to summation might be useful. The answer is that sometimes this approach gives the correct result in a real world problem. A simple example is the Casimir effect. Suppose we place two metal plates a very short distance apart (in a vacuum, with no gravity, and so on -- we assume idealized conditions). Classical physics predicts they will just be still. However, there is actually a small attractive force between them. This can be explained using quantum physics, and calculation of the magnitude of the force uses the sum you discuss, summed using zeta function regularization.
REPLY [6 votes]: There are many ways how infinite divergent sums can be manipulated to converge. I suggest you look up Euler summation and Ramanujan summation. One example of this is the sum of the powers of $2$:
$$1+2+4+8+16+...=S$$
If we multiply both sides by two and subtract them, we end up with
$$S=-1$$
Hence, the sum of infinitely many positive integers can be negative.
We can also "test" our result to see if it works:
$$-1=1+2+4+8+16+...$$
$$0=1+1+2+4+8+16+...$$
$$0=2+2+4+8+16+...$$
$$0=4+4+8+16+...$$
and so on, cancelling out every power of two.
This convergence property has very important implications in many scientific disciplines, especially physics, where such summation methods underpin renormalization. This cancels out the infinities involved in quantum theory, string theory, etc.
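For the geometric-series example above there is one setting where the manipulation is literally a convergent limit: the 2-adic integers, where powers of 2 shrink to zero. A small Python sketch of that fact (an illustration added here, not part of the original answer):

```python
# Partial sums of 1 + 2 + 4 + ... are 2^n - 1, which is congruent to -1 mod 2^n.
# In the 2-adic metric the tail 2^n goes to 0, so the series really sums to -1 there.
for n in range(1, 65):
    partial = sum(2**k for k in range(n))
    assert partial == 2**n - 1
    assert (partial + 1) % 2**n == 0   # partial ≡ -1 (mod 2^n)
```

In the usual real metric the same partial sums diverge, which is exactly why a nonstandard summation method (or a nonstandard metric) is needed to assign the value $-1$.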
The financial and operational approach is central to the structure, organization and operation of companies in a competitive and ever-changing global environment. As more technology is added to automobiles, auto manufacturers are creating products that are highly advanced and complex. And AWS and Amazon Automobiles are partnering with Cadillac and ZeroLight to exhibit a new, streamlined car-buying concept featuring the Cadillac XT6 at CES 2020.
The auto industry is at a crossroads as it tries to improve its investment attractiveness: fueling the growing human, financial and physical capital investment needed to bring an unprecedented level of new technology to market, while at the same time competitive forces and regulatory requirements challenge the basis of the industry's main competitive covenant, economies of scale.
Ghostwriters like Jeff Haden have created very lucrative careers for themselves by writing for business executives and CEOs, and Jeff also started his ghostwriting career as a side business outside of his full-time job as a factory manager.
The BA (Hons) Business and Finance degree at Lincoln aims to equip students with the tools and knowledge to operate in a range of business environments, and to develop a broad understanding of business and finance from a global perspective.
Once you have some money to invest, you can take advantage of price differences on secondary marketplaces like Swappa, Gazelle, Craigslist, local Facebook groups, and even your local electronics store to sell products for a profit. Vehicle programs, in turn, absorb the cost of these powertrain systems as they are allocated over the total unit volumes incorporated into the vehicle platform business cases.
Learn more about why global companies rely on FAST/TOOLS for their critical infrastructure operations:
- High performance, 400,000 updates per second throughput
- High capacity and broad scalability with up to 16 million points per server and 4096 servers per system
- Truly open architecture with industry-standard best practices applied throughout
- Platform independence through continuous support of Linux, Unix and Windows
- Future-proof lifecycle management and simplified upgrades
- Minimum TCO via our highly simplified deployment and version management
- Out-of-the-box redundancy with 99.999% availability
- Provide a powerful HMI with flexibility and capabilities that surpass standard SCADA HMI’s
- Support mobile devices (Web browser based) with its HTML5 HMI environment
- Enterprise Automation Solution (EAS) and Industrial Internet of Things (IIoT) architecture
- As a Service (XaaS), cloud and virtualization support
Details
Architecture:
Truly Open Software Architectures.
Future Proof Lifecycle Management.
High Performance and High Capacity.
Broad Scalability
Scalability, system expansions and system sizing are also non-issues. FAST/TOOLS supports database capacities up to 16 million points in a single server, 4096 servers in a system and virtually unlimited historical archives.
Optimized Operator Visualization and Advanced Decision Support.
Advance Operating Graphics (AOG)
Advanced Operating Graphics (AOG) is a consulting service provided by Yokogawa to design PCS user interface based on human factors and knowledge engineering in order to improve users’ situation awareness.
- Identify user, task and functional requirements for operation that require HMI support.
- Analyze the requirements in order to determine the needs and conditions to meet for the project.
- Provide guidance on designing and developing user interface including display layout, navigation, hierarchy, color pallet, data visualization, etc.
- Submit final report of achievements in the project.
Figure AOG Consulting Process
Advanced Operating Graphics symbol Library.
Real Time Operations Platform.
Enterprise Automation Solution (EAS).
Engineering
FAST/TOOLS many engineering features and functions make it one of the most extensive SCADA solutions today. It enables flexible and remote engineering capabilities.
Engineering Overview.
Enterprise Engineering.
Information Model (Process Hierarchy).
Symbol Library
FAST/TOOLS provide a large number of symbols. Currently as standard more than 3000 industrial and manufacturing symbols are included. These libraries can be extended with any number of custom symbols.
Object Based Engineering.
Object in Objects.
Efficient Engineering.
Online Modifications.
HMI.
Mobile HTML Clients
FAST/TOOLS supports the deployment of mobile devices (Web browser based) with its HTML5 HMI environment. This enables users to be able to create powerful client applications that bring HMI graphics, diagnostics and alarms to any device in the field with a web browser.
Zero Deployment.
Flexible HMI modeling.
Multi Node Input.
Collaboration Decision Support Solution.
Layers.
Users
Role Based Authorization
User Role Management.
Defining Roles:
- Which screens are available, and what information is presented per role
- Control actions that are allowed are shown
- Available and viewable data and areas
- Reports and trends that can be generated
- Alarm management, annunciation and acknowledgement
- Recording, automated or manual, for specific actions or operational sequences
- Complete logging of actions within the environment
- Engineering options, if at all.
Audit Trail.
Record & Playback.
System Integrity
Data is one of the most valuable assets of any organization. Making sure that data is available and secure throughout an environment will increase its effectiveness. FAST/TOOLS system integrity features will ensure an optimal performing system environment and secure its content and uptime.
Server Redundancy Synchronization
Data is continuously synchronized between FAST/TOOLS servers using integrity checks and updates at fail-over and initial system start-up. The embedded watchdog decides whether the system is sufficiently healthy to carry on and take the least intrusive course of action to recover from any failure.
Server Failback
Natively integrated failover/fallback features in the FAST/TOOLS software architecture secure an optimized and effective redundancy of your control environment. Automated network checks and instant switch-over between servers.
Automated Storage Management.
Automated Archiving.
IT Level Security Compliance.
Active Directory Integration & System Security.
Connectivity & Integration
Using an extensive array of integration, drivers and support for industry communication standards, FAST/TOOLS can be the glue that binds all your information platforms together.
Media Independent.
Field I/O.
Data Acquisition.
DCS Integration.
OPC Certified.
RDBMS.
Reporting & Trending
Utilize FAST/TOOLS comprehensive reporting & trending platform for effective leveraging of information throughout the entire organization.
Table Component.
Extensive Reporting.
Trending.
Correlational Trending.
Offline Trending.
Export.
Alarm Management
FAST/TOOLS features a fully integrated and extensive alarm management module. Not only does this allow for more control and security, but it also enables for a more in-depth analysis of your alarm environment and efficiency.
Advanced Filtering.
Reporting.
Alarm System Performance Analysis & Notifications
Routing of alarms enables quick and efficient alarm handling, with options such as time limits, alarm priorities, and rerouting destinations. Routing can be applied both to escalation procedures and to shift planning.
Acknowledgement.
Sequence Of Alarm
All field I/O signals that are configured for alarming can have a status (High-High, High, Low, etc.).
FAST/TOOLS allows the alarm annunciation to be configured for each status or I/O signal.
Alarm Groups.
Resources
- Integration of the current Bangalore Water Supply and Sewage Board (BWSSB) water and sewage facilities located widely in Bengaluru over VHF and GPRS.
- Stable Centralized SCADA Monitoring Center (CSMC) operation.
The Metropolitan Cebu Water District (MCWD) needed to improve its water supply system to keep up with the growing demand.
Yokogawa installed field instruments at a total of 143 locations in MCWD’s water distribution network and a SCADA system with water leakage management software at the main office to reduce non-revenue water and improve stability of water supply.
RDR operates total 67km long race and supplies irrigation water to vast agricultural land in NZ's South Island.
FAST/TOOLS and STARDOM RTU were selected for management of the race and hydropower systems.
Territory Generation supplies electricity to the NT's grid with their 8 geographically isolated power stations.
Territory Generation selected FAST/TOOLS to integrate new & renewable energy to the power grid.
- Easy set point definition for all plant power generation operations
- Stable electric power generation
- Power plant optimization
- Control of electric power generation based on net performance
- OTS for familiarization with liquefaction processes and operator training.
- Yokogawa's integrated solutions contribute to safe and steady production at a gas liquefaction plant in Norway.
- Australian National University's "Big Dish" controlled by Yokogawa PLC and FAST/TOOLS SCADA
- FAST/TOOLS benefits us in many ways and allows us to clearly see the entire process, giving us the information we need to take immediate action
- Scalable, flexible configuration with functions distributed to multiple controllers on a facility basis
- Redundant architecture optimally designed for requirements of entire plant
- General-purpose communications network used for control bus
- Integrated operation environment through web-based human-machine interface (HMI)
- Cooperation with WISAG introduces building automation solutions and reduces energy costs.
- The particular advantage of FAST/TOOLS is its standardized and open interfaces.
- FAST/TOOLS SCADA system to remotely monitor the optimization of the process and make any required engineering changes to the live system.
- The Yokogawa control system (FAST/TOOLS, STARDOM, HXS10) monitors not only the sunlight but also the weather conditions.
Water Supply Treatment
FAST/TOOLS, STARDOM, Flowmeters, Liquid Analyzers
- Bali's water treatment plant decided to employ the latest reliable control system in order to increase availability and quality of operation.
- Centralized control system using FAST/TOOLS provides the sophisticated and flexible operation.
Granite Power is a geothermal company that has developed GRANEX®, a patented direct supercritical fluid heat transfer technology for the efficient, eco- nomic, and zero carbon emission generation of electricity from low grade geothermal sources using the Organic Rankine Cycle (ORC).
- Uninterrupted Medical Oxygen Production with FAST/TOOLS Monitoring.
- Audits are performed periodically, requiring traceability and the keeping of historical records for all batch production activities.
- Reliable monitoring and control system of the world's first, 16" seawater reverse osmosis (SWRO) membrane desalination plant.
- Accurate measurements of the SWRO parameters with Yokogawa conductivity meters, pH meters, and magnetic flow meters contribute to a longer SWRO membrane life and reduce Total Cost of Ownership.
India's flagship natural gas company GAIL Limited integrates all India's Gas Pipelines by a Yokogawa FAST/TOOLS SCADA system.
- FAST/TOOLS SCADA system centralized monitoring & control of India's gas pipelines.
- All operation data can be directly utilized for gas allocation and billing.
- Yokogawa's HXS10 solar controller optimizes conversion efficiency at Australian solar cooling plant
- Accurate sun tracking and visualization of all process data.
Downloads
Videos Aug 26, 2018
Yokogawa Addresses Major Threat to Flow Assurance with On-demand Hydrate Risk Management Solution for Oil & Gas Production
- Industry-first (according to Yokogawa research, as of August 27, 2018 ) solution to be demonstrated at ONS 2018 -
Press Release Jul 26, 2018
Yokogawa to Deliver Statewide Centralized Water Information Management System in India
Nov 20, 2017
Yokogawa Releases Enhanced Version of the STARDOM Network-based Control System
- Highly reliable and scalable -
Looking for more information on our people, technology and solutions? Contact Us
As a professional kickboxer, you expect to take a nasty cut every now and then. But you probably don't expect your doctor and cornermen to joke around with your cuts. But that's exactly what happened to John Wayne Parr, an Australian middleweight kickboxer. After losing to Cosmo Alexandre in the main event of the Lion Fight 25 title fight in California, Parr uploaded a video to YouTube on October 24 showing his doctor and cornermen having fun with one of his wounds, a gaping hole in the side of his skull. The hole required 15 stitches, but his doctor thought it would be funny to first make it "talk." Credit: YouTube/John Wayne Parr
\begin{document}
\maketitle
\begin{abstract}
We study stable immersed capillary hypersurfaces $\Sigma$ in domains $\mathcal B$ of $\Bbb R^{n+1}$ bounded by hyperplanes. When $\mathcal B$ is a half-space, we show $\Sigma$ is a spherical cap. When $\mathcal B$ is a domain bounded by
$k$ hyperplanes $P_1,\ldots,P_k,\, 2\leq k\leq n+1,$ having independent normals, and $\Sigma$ has contact angle $\theta_i$ with $P_i$ and does not touch the edges of $\mathcal B,$ we prove there exists $\delta>0,$ depending only on $P_1,\dots,P_k,$ so that if
$\theta_i\in (\frac{\pi}{2}-\delta, \frac{\pi}{2}+\delta)$ for each $i,$ then $\Sigma$ has to be a piece of a sphere.
\end{abstract}
\section{Introduction}\label{sec:introduction}
A capillary hypersurface in a domain $\mathcal B$ of a Riemannian manifold is a constant mean curvature (CMC) hypersurface in $\mathcal B$
meeting $\partial\mathcal B$ at a constant angle. Capillary hypersurfaces are critical points for an energy functional for variations which preserve the {\it enclosed volume}.
More precisely, given an angle $\theta\in(0,\pi),$ for a compact hypersurface $\Sigma$ inside $\mathcal B$ such that $\partial \Sigma \subset \partial \mathcal B$ and $\partial \Sigma$ bounds a compact domain $W$ in
$\partial \mathcal B,$ the energy of $\Sigma$ is by definition the quantity
\[ E(\Sigma):= |\Sigma| -\cos\theta \, |W|.
\]
Here and throughout this paper, we use the notation $|M|$ to denote the volume of a Riemannian manifold $M.$
The energy functional $E$ can be defined for general immersed hypersurfaces, not only those with boundary bounding a compact domain in $\partial\mathcal B.$ Details can be found in \cite{A-S}.
The stationary hypersurfaces of $E$ for variations preserving the {\it enclosed volume} are precisely the
CMC hypersurfaces making a constant angle $\theta$ with $\partial \mathcal B.$
Existence and uniqueness problems of capillary hypersurfaces are interesting in their own right. In the minimal case, when $\mathcal B$ is the unit ball in $\Bbb R^3$ and the angle of contact is $\frac{\pi}{2},$ these questions proved to be related to deep problems in Geometric Analysis, see for instance the work of Fraser-Schoen \cite{fraser-schoen}. When $\mathcal B$ is a domain in $\Bbb R^3,$ capillary surfaces correspond to models of incompressible liquids inside a container $\mathcal B$ in the absence of gravity. The free interface of the liquid (locally) minimizes the energy functional $E.$ It is then a natural problem to study the stable ones, that is, those for which the second variation of the energy functional is nonnegative for all volume preserving variations. Stability issues for capillary hypersurfaces have recently been actively addressed in different ambient manifolds and domains, see for instance \cite{A-S, choe-koiso, li-xiong, marinov, nunes, souam, wang-xia}.
One of the achievements in this direction is the proof by Wang-Xia \cite{wang-xia} that in a unit ball in a space form, totally umbilical hypersurfaces are the only stable capillary hypersurfaces (Nunes \cite{nunes} previously solved the free boundary case in a unit ball in $\Bbb R^3$).
In this paper, we are interested in domains $\mathcal B$ in $\Bbb R^{n+1}$ bounded by a finite family of hyperplanes. The first case we consider is when $\mathcal B$ is a half-space in $\Bbb R^{n+1}.$ In this case, it was shown by Ainouz-Souam \cite{A-S} that a stable capillary hypersurface with contact angle $\theta< \frac{\pi}{2}$ is a spherical cap provided each of its boundary components is embedded. Choe-Koiso \cite{choe-koiso} obtained the same result when $\theta > \frac{\pi}{2}$ under the assumption that the boundary is convex and Marinov \cite{marinov} treated the case $n=2$ assuming the boundary is embedded. In our main result (Theorem \ref{thm:half-space}) we remove the boundary embeddedness assumption thus characterizing the spherical caps as the only stable immersed capillary hypersurfaces in half-spaces in $\Bbb R^{n+1}, n\geq 2.$ The second case we study is when $\mathcal B$ is a domain in $\Bbb R^{n+1}$ bounded by a finite number of hyperplanes $P_1,\ldots, P_k$ with linearly independent normals. Li-Xiong \cite{li-xiong} showed that, in this situation, a stable capillary hypersurface meeting each $P_i$ with a constant angle $\theta_i\in [\pi/2,\pi]$ and not touching the edges of $\mathcal B$ is a piece of a sphere under the assumption that its boundary is embedded for $n=2$ and that each of its boundary components is convex for $n\geq 3.$ The case $k=2,$ that is, when $\mathcal B$ is a wedge, was previously obtained by Choe-Koiso \cite{choe-koiso}. We here prove (Theorem \ref{thm:planar boundaries}) the existence of
a positive number $\delta>0$ such that if $\Sigma$ is a stable immersed capillary hypersurface in $\mathcal B$ not touching the edges of $\mathcal B$ and making an angle
$\theta_i\in (\frac{\pi}{2} -\delta, \frac{\pi}{2}+\delta)$ with each $P_i,$ then $\Sigma$ is a piece of a sphere. We emphasize that in our results we do not assume the boundary of the hypersurface is embedded.
\section{Preliminaries}
We here recall briefly some basic facts about capillary hypersurfaces and refer
to \cite{A-S} for more details.
Let $n\geq 2$ be an integer and $\mathcal B$ a domain in $\Bbb R^{n+1}$ bounded by a finite number $P_1,\ldots, P_k$ of hyperplanes, $k\geq 1.$
A capillary hypersurface in $\mathcal B$ is a compact and constant mean curvature (CMC) hypersurface contained in $\mathcal B$ with boundary on $\partial \mathcal B$ and meeting each $P_i$ with a constant angle $\theta_i\in(0,\pi).$
Consider a capillary hypersurface defined by a smooth immersion $\psi: \Sigma\longrightarrow \mathcal B$ and let $N$ be a global unit normal to $\Sigma$ along $\psi$ chosen so that its (constant) mean curvature satisfies $H\geq 0.$ Write $\partial\Sigma=\cup_{i=1}^k \Gamma_i,$ with $\psi(\Gamma_i)\subset P_i.$ By an edge of $\mathcal B$ we mean a nonempty intersection $P_i\cap P_j$ with $i\neq j.$
In the sequel we will suppose that the hypersurface does not touch the edges of $\mathcal B$ so that $\psi(\partial\Sigma)$ lies on the smooth parts of $\partial\mathcal B$ where the unit normal is well defined. This condition can be weakened slightly but we will restrict ourselves to this case for simplicity.
Capillary immersions in $\mathcal B$ are critical points of an energy functional for deformations through immersions in
$\mathcal B,$ mapping $\partial\Sigma$ into $\partial\mathcal B$ and preserving the {\it enclosed volume}. When $\psi$ is an embedding with each $\psi(\Gamma_i)$ bounding a domain $W_i\subset P_i,$ the energy is given by:
$$E(\psi)= |\psi(\Sigma)| -\sum_{i=1}^k \cos\theta_i |W_i|$$
and the enclosed volume is the volume of the domain in $\Bbb R^{n+1}$ bounded by $\psi(\Sigma)$ and $\cup_{i=1}^k W_i.$ This energy functional and the notion of enclosed volume can be extended in a suitable way to include general immersions, details can be found in \cite{A-S}.
We denote by $\bf n_i$ the exterior unit normal to $P_i.$ The angle of contact along $\Gamma_i$ is
the angle $\theta_i \in (0,\pi)$ between $N$ and $\bf n_i$. We let $\nu_i$ be the exterior unit normal to $\Gamma_i$ in $\Sigma$ and $\bar\nu_i$ be the unit normal to $\Gamma_i$ in $\partial\mathcal B$ chosen so that
$\{N,\nu_i\}$ and $\{{\bf n_i }, \bar\nu_i\}$ have the same orientation in $(T\partial\Sigma)^{\perp}.$ The angle between $\nu_i$ and $\bar\nu_i$ is also equal to $\theta_i.$
The index form $\mathcal I$ of $\psi$ is the symmetric bilinear form defined on
$\mathcal C^\infty(\Sigma)$ by
\[ \mathcal I (f,g)=-\int_{\Sigma} f\left(\Delta g+ |\sigma|^2\, g\right) dA+
\sum_{i=1}^k \int_{\Gamma_i}f\left(\frac{\partial g}{\partial \nu_i}- q_ig\right)\,ds,
\]
where $\Delta$ stands for the Laplacian of the metric induced by $\psi,$ $\sigma$ is the second fundamental form of $\psi$ and
\[ q_i= \cot\theta_i \,\sigma(\nu_i,\nu_i).
\]
A capillary immersion is said to be stable if the second variation of the energy $E$ is nonnegative for all volume preserving deformations. This condition is equivalent to
$$\mathcal I(f,f)\geq 0\quad {\text {for all }}\quad f\in \mathcal C^\infty(\Sigma)\quad \text{with} \quad \int_{\Sigma} f\,dA =0,$$
see \cite{A-S}.
Let $D$ denote the standard covariant differentiation on $\Bbb R^{n+1}.$ A basic well-known fact we will use is the following (see for instance \cite{A-S}):
\begin{proposition}\label{lem:umbilicity}
Let $\mathcal B$ be as above and $\psi:\Sigma\longrightarrow \mathcal B$ a capillary immersion into $\mathcal B.$ Then, the unit conormal $\nu_i$ along $\Gamma_i$ is a principal direction of $\psi,$ that is, $D_{\nu_i} N =-\sigma(\nu_i,\nu_i)\,\nu_i.$
\end{proposition}
\section{The results}
A key ingredient we will use is the following fact obtained in \cite{A-S}; we include the proof for completeness.
\begin{proposition}\label{prop:normalintegral}
Let $\psi:\Sigma\to \Bbb R^{n+1}$ be a smooth immersion in the Euclidean space $\Bbb R^{n+1}$ of a smooth compact orientable $n-$dimensional manifold $\Sigma$ with or without boundary. Let $N:\Sigma \to \Bbb S^n\subset\Bbb R^{n+1}$ be a global unit normal of $\psi$ and $\nu$ the exterior unit conormal to $\partial\Sigma$ in $\Sigma$. Then
\begin{equation}\label{eq:normal}
n\int_{\Sigma} N \, dA= \int_{\partial\Sigma} \{ \langle \psi,\nu\rangle N - \langle \psi,N\rangle \nu\} \,ds
\end{equation}
where $dA$ and $ds$ denote the volume elements of $\Sigma$ and $\partial\Sigma,$ respectively.
In particular, if $\Sigma$ has no boundary, then
\begin{equation*}
\int_{\Sigma} N \, dA= {0}.
\end{equation*}
\end{proposition}
\begin{proof}
Let $a$ be a constant vector field on $\Bbb R^{n+1}.$ Consider the following vector field on $\Sigma $
$$ X= \langle a, N\rangle \psi^T - \langle \psi,N\rangle { a}^T, $$
where, $\psi^{T}={\psi}-\langle \psi,N\rangle N$ (resp. ${ a}^T= {a}-\langle a, N\rangle N$ ) is the projection of $\psi$ (resp. of $ a$) on $T\Sigma$.
Using the following well-known formulas, which can be checked directly:
$$ {\text {div}} \,\psi^{T} = n +nH\langle \psi,N\rangle,\qquad {\text {div}}\,
{ a}^T= n H \langle a, N\rangle,$$
we compute the divergence of $X$:
\begin{align*} {\text {div}}\, X&= \langle a, N\rangle \,{\text {div}}\, \psi^T + \langle a, D_{\psi^T} N\rangle - \langle \psi, N\rangle \text{div}\, {a}^T - \langle {a}^T, N\rangle -\langle \psi, D_{{a}^T} N\rangle \\
&= n \langle a, N\rangle + nH\langle \psi,N\rangle \langle a, N\rangle + \langle {a}^T, D_{\psi^T} N\rangle-n H
\langle {a}, N\rangle \langle \psi,N\rangle -\langle \psi^T, D_{{a}^T} N\rangle \\
&= n\langle a, N\rangle,
\end{align*}
where we used the symmetry of the Weingarten map, that is, $\langle {a}^T, D_{\psi^T} N\rangle=\langle \psi^T, D_{{a}^T} N\rangle.$
Integrating on $\Sigma$ and using the divergence theorem we get
\begin{equation*}
n\int_{\Sigma}\langle a, N\rangle \, dA= \int_{\partial\Sigma} \{ \langle a, N\rangle\langle \psi,\nu\rangle - \langle \psi,N\rangle \langle a, \nu \rangle\} ds
\end{equation*}
Since this is true for any $a$, (\ref{eq:normal}) follows.
\end{proof}
We now prove our first result characterizing the spherical caps as the only stable immersed capillary hypersurfaces in half-spaces in $\Bbb R^{n+1}.$ This was proved under additional assumptions on the boundary and the angle of contact in \cite{A-S}, \cite{choe-koiso} and \cite{marinov}. As will be clear from the proof, it is not necessary to assume that the hypersurface is contained in a half-space; we only use the fact that it meets a hyperplane at a constant angle.
\begin{theorem}
\label{thm:half-space}
Let $\psi:\Sigma\to \Bbb R^{n+1},$ be a capillary immersion of a compact and connected orientable manifold $\Sigma$ of dimension $n$ in a half-space in $\Bbb R^{n+1},\, n\geq 2.$ If the immersion is stable then
$\psi(\Sigma)$
is a spherical cap.
\end{theorem}
\begin{proof}
Without loss of generality, we may suppose the half-space is the upper half-space $ x_{n+1}\geq 0.$ The starting point is to derive a suitable test function inspired by the work of Barbosa-do Carmo \cite{barbosa-do carmo} proving that round spheres are the only closed stable CMC hypersurfaces in $\Bbb R^{n+1}.$
Integrating the equation
$ {\text {div}} \,\psi^{T} = n +nH\langle \psi,N\rangle,$
we get
\begin{equation}\label{eq1}
\int_{\partial \Sigma} \langle \psi,\nu\rangle ds = n\int_{\Sigma} \{1+H\langle \psi,N\rangle\}dA.
\end{equation}
We denote, as usual, by $e_1,\ldots,e_{n+1}$ the vectors of the canonical basis of $\Bbb R^{n+1}$.
On $\partial\Sigma$, we have $\, \cos\theta\, N+\sin\theta\, \nu=-e_{n+1}, \,\langle \psi,\nu\rangle=\cos\theta \langle \psi,\bar\nu\rangle$ and
$\langle \psi,N\rangle=-\sin\theta \langle \psi,\bar\nu\rangle.$ Proposition \ref {prop:normalintegral} gives in this case
\begin{equation}\label{eq:normalintegral}
n \cos\theta\int_{\Sigma} N\, dA = -\left( \int_{\partial\Sigma} \langle \psi,\nu\rangle\, \,ds\right)\, e_{n+1}.
\end{equation}
From (\ref{eq1}) and (\ref{eq:normalintegral}), we conclude that:
\begin{equation*}
\int_{\Sigma} \{1+H\langle \psi, N\rangle +\cos\theta \langle N,e_{n+1}\rangle\} \,dA =0.
\end{equation*}
So we may use $\phi:=1+H\langle \psi, N\rangle +\cos\theta \langle N,e_{n+1}\rangle$ as a test function in the stability inequality.
Set $u=\langle \psi,N\rangle$ and $v=\langle N,e_{n+1}\rangle.$ These functions satisfy the following well-known formulas
\begin{equation}\label{eq3}
\Delta u +|\sigma|^2 u =-nH,
\end{equation}
and
\begin{equation}\label{eq4}
\Delta v+ |\sigma|^2 v =0.
\end{equation}
Using these equations, we compute:
\begin{align*}
\Delta \phi&= H(-nH-|\sigma|^2 u)-\cos\theta \, |\sigma|^2 v\\
&=-nH^2-|\sigma|^2(H u+\cos\theta \, v)\\
&=-nH^2-|\sigma|^2(\phi-1).
\end{align*}
Therefore
\begin{equation*}
\phi\Delta\phi+ |\sigma|^2\phi^2 =(|\sigma|^2-nH^2)\phi.
\end{equation*}
Moreover:
\begin{equation*}
\frac{\partial u}{\partial \nu}= \langle \nu, N\rangle + \langle \psi, D_{\nu} N\rangle=-\sigma(\nu,\nu)\langle \psi,\nu\rangle
\end{equation*}
and
\begin{equation*}
\frac{\partial v}{\partial \nu}= \langle D_{\nu}N, e_{n+1}\rangle=-\sigma(\nu,\nu)\langle\nu,e_{n+1}\rangle=\sigma(\nu,\nu)\sin\theta.
\end{equation*}
Now, taking into account that, along $\partial\Sigma: \,\langle \psi,\nu\rangle=\cos\theta \langle \psi,\bar\nu\rangle$ and
$\langle \psi,N\rangle=-\sin\theta \langle \psi,\bar\nu\rangle,$ one can check after direct computations that:
\begin{equation}\label{eq:normalderivative}
\frac{\partial \phi}{\partial \nu}= \cot\theta\, \sigma(\nu,\nu) \,\phi.
\end{equation}
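For the reader's convenience, here is a sketch of this computation. Along $\partial\Sigma$ we have $u=-\sin\theta\,\langle\psi,\bar\nu\rangle$ and $v=\langle N,e_{n+1}\rangle=-\cos\theta,$ so that $\phi=\sin^2\theta-H\sin\theta\,\langle\psi,\bar\nu\rangle$ there. Hence
\begin{align*}
\frac{\partial \phi}{\partial \nu}&= H\,\frac{\partial u}{\partial \nu}+\cos\theta\,\frac{\partial v}{\partial \nu}
=\sigma(\nu,\nu)\left(\sin\theta\cos\theta-H\cos\theta\,\langle\psi,\bar\nu\rangle\right)\\
&=\cot\theta\,\sigma(\nu,\nu)\,\sin\theta\left(\sin\theta-H\,\langle\psi,\bar\nu\rangle\right)= \cot\theta\, \sigma(\nu,\nu) \,\phi.
\end{align*}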
It follows that
\begin{equation}\label{eq7}
\mathcal I(\phi,\phi)= -\int_{\Sigma} (|\sigma|^2-nH^2)\phi\, dA= -\int_{\Sigma} (|\sigma|^2-nH^2)(1+H\langle \psi,N\rangle
+\cos\theta \langle N,e_{n+1}\rangle) \, dA.
\end{equation}
We now introduce the following function:
$$ F=|H\psi+N|^2= H^2|\psi|^2+2H\langle \psi,N\rangle+1.$$
One has $\Delta|\psi|^2= 2n(1+H\langle\psi,N\rangle)$ and $\Delta \langle\psi,N\rangle=-|\sigma|^2\langle\psi,N\rangle -nH,$ so that
\begin{align*} \Delta F &= 2nH^2(1+H\langle \psi,N\rangle)+2H(-|\sigma|^2\langle \psi,N\rangle -nH)\\
&=2H(nH^2-|\sigma|^2)\langle\psi,N\rangle
\end{align*}
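The first of the two formulas used above can be checked directly from the relation $\Delta\psi=nHN$:
\begin{equation*}
\Delta|\psi|^2=2\langle\psi,\Delta\psi\rangle+2|\nabla\psi|^2=2nH\langle\psi,N\rangle+2n=2n(1+H\langle\psi,N\rangle).
\end{equation*}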
Integrating we get
\begin{equation*} 2H\int_{\Sigma} (nH^2-|\sigma|^2)\langle \psi,N\rangle \,dA = \int_{\Sigma} \Delta F\, dA
=\int_{\partial\Sigma}\frac{\partial F}{\partial\nu}\, ds,
\end{equation*}
that is,
\begin{equation}\label{special function}
H\int_{\Sigma} (nH^2-|\sigma|^2)\langle \psi,N\rangle \,dA = H\int_{\partial\Sigma} (H-\sigma(\nu,\nu))\langle\psi,\nu\rangle\, ds.
\end{equation}
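Here we used the boundary derivative of $F$ along $\partial\Sigma$:
\begin{equation*}
\frac{\partial F}{\partial\nu}=2H^2\langle\psi,\nu\rangle+2H\,\frac{\partial u}{\partial\nu}=2H\left(H-\sigma(\nu,\nu)\right)\langle\psi,\nu\rangle.
\end{equation*}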
Denote by $H_{\partial\Sigma}$ the mean curvature of the immersion $\psi |_{\partial \Sigma}:\partial \Sigma\longrightarrow \Bbb R^n\times\{0\}$ computed with respect to the unit normal
$\bar\nu.$ We claim that, on $\partial\Sigma:$
\begin{equation}\label{mean curv} H-\sigma(\nu,\nu)=-(n-1)(H+\sin\theta\, H_{\partial\Sigma}).
\end{equation}
Indeed,
let $i\in\{1,\ldots ,k\}$ and $\{v_1,\ldots,v_{n-1}\}$ be a local orthonormal frame on $\partial\Sigma.$ Then:
\begin{equation*}
\sigma(\nu,\nu)=nH-\sum_{j=1}^{n-1} \sigma(v_j,v_j).
\end{equation*}
Now, considering the unit normal $\bar\nu$ to $\partial\Sigma$ in $\Bbb R^n\times\{0\}$ along $\psi$, as chosen in Sec.\ 2, we have $ N=-\sin\theta\,\bar\nu - \cos\theta \, e_{n+1}.$ We can thus write
\begin{equation*}\sigma(v_j,v_j)=-\langle \nabla_{v_j} N, v_j\rangle=\sin\theta\, \langle \nabla_{v_j} \bar\nu, v_j\rangle.
\end{equation*}
Therefore the following relation holds on $\partial\Sigma$
\begin{equation*}\label{eq:sigma}
\sigma(\nu,\nu)=nH+(n-1)\sin\theta\, H_{\partial\Sigma}.
\end{equation*}
This proves (\ref{mean curv}).
Using (\ref{mean curv}) and the fact that $\langle \psi,\nu\rangle =\cos\theta \langle\psi,\bar\nu\rangle, $ we can write (\ref{special function}) as follows:
\begin{equation}\label{middle intergal}H\int_{\Sigma} (nH^2-|\sigma|^2)\langle \psi,N\rangle \,dA= -(n-1)H\cos\theta \int_{\partial\Sigma}(H+\sin\theta \, H_{\partial\Sigma})\langle\psi,\bar\nu\rangle \,ds.
\end{equation}
We claim that
\begin{equation}\label{claim} \int_{\partial\Sigma}(H+\sin\theta \, H_{\partial\Sigma})\langle\psi,\bar\nu\rangle\,ds=0.
\end{equation}
To see this, we first apply the Minkowski formula to the immersion $\psi |_{\partial \Sigma}:\partial \Sigma\longrightarrow \Bbb R^n\times\{0\},$ obtaining:
$\int_{\partial\Sigma} H_{\partial\Sigma}\langle\psi,\bar\nu\rangle ds=-|\partial\Sigma|.$ Thus
$$\int_{\partial\Sigma}(H+\sin\theta \, H_{\partial\Sigma})\langle\psi,\bar\nu\rangle\,ds= H\int_{\partial\Sigma}\langle\psi,\bar\nu\rangle\,ds -\sin\theta\, |\partial\Sigma|.$$
Integrating the relation $\Delta \psi =nH N,$ we obtain
\begin{equation*}
\int_{\partial\Sigma} \nu\, ds= nH\int_{\Sigma} N\,dA.
\end{equation*}
Therefore
\begin{equation*}
\int_{\partial\Sigma} \langle \nu,e_{n+1}\rangle \, ds= nH\int_{\Sigma}\langle N,e_{n+1}\rangle\,dA,
\end{equation*}
that is,
\begin{equation}\label{eq5}
-\sin\theta\,|\partial\Sigma|= nH\int_{\Sigma} \langle N,e_{n+1}\rangle\,dA.
\end{equation}
Along
$\partial\Sigma,$ we have $\langle \psi,\nu\rangle=\cos\theta\langle \psi, \bar\nu\rangle$ and $\langle \psi, N\rangle =-\sin\theta \langle \psi, \bar\nu\rangle.$ So
(\ref{eq:normal}) gives
\begin{equation}\label{equation 6}
n\int_{\Sigma} \langle N,e_{n+1}\rangle\,dA=-\int_{\partial\Sigma} \langle \psi, \bar\nu\rangle\,ds.
\end{equation}
(\ref{eq5}) and (\ref{equation 6}) show that
$$H\int_{\partial\Sigma}\langle\psi,\bar\nu\rangle \,ds -\sin\theta |\partial\Sigma|=0.$$
This proves (\ref{claim}). So (\ref{middle intergal}) gives
$$H\int_{\Sigma} (nH^2-|\sigma|^2)\langle \psi,N\rangle\,dA=0.$$
Going back to (\ref{eq7}), we get
$$\mathcal I(\phi,\phi)= -\int_{\Sigma} (|\sigma|^2-nH^2) (1+\cos\theta\langle N,e_{n+1} \rangle)\,dA.$$
As $\theta\in(0,\pi),$ we have $1+\cos\theta\langle N,e_{n+1} \rangle > 0.$ Also
$|\sigma|^2-nH^2\geq 0$ with equality only at umbilics. Hence $\mathcal I(\phi,\phi)\leq 0.$ However, by the stability assumption, $\mathcal I(\phi,\phi)\geq 0.$ We conclude that necessarily $|\sigma|^2-nH^2\equiv 0$ and so
$\psi(\Sigma)$ is totally umbilical, that is, a spherical cap.
\end{proof}
In the above proof we made essential use of the fact that the origin lies on the bounding hyperplane. This explains the hypothesis
in our second result, which deals with a domain bounded by a finite family of hyperplanes and was inspired by the work of Li-Xiong \cite{li-xiong}.
\begin{theorem}
\label{thm:planar boundaries}
Let $\mathcal B$ be a domain in $\Bbb R^{n+1}$ bounded by $k$ hyperplanes $P_1,\ldots, P_k,\, 2\leq k\leq n+1,$ having linearly independent normals. Then there exists a constant $\delta>0$ so that if $\psi:\Sigma\to \mathcal B$ is a stable compact and connected immersed capillary hypersurface in $\mathcal B$ not touching the edges of $\mathcal B$ and having a contact angle $\theta_i\in (\frac{\pi}{2}-\delta
,\frac{\pi}{2}+ \delta)$ with
$P_i,$ then $\psi(\Sigma)$ is a part of a sphere.
\end{theorem}
\begin{proof}
Let $\bf{n_i}$ denote, as above, the unit normal to $P_i$ pointing out of $\mathcal B.$ The independence of $\bf{n_1},\ldots, \bf{n_k}$ ensures that $\cap_{i=1}^k P_i\neq \emptyset$ and so we may assume, without loss of generality, that the origin lies in $\cap_{i=1}^k P_i.$ Following \cite{li-xiong}, we consider the linear combination $a:=\sum_{i=1}^k c_i{\bf n_i}$
verifying $\langle a,{\bf{n}}_i \rangle= -\cos\theta_i, \, i=1,\ldots ,k.$ The vector $a$ exists and is unique by the independence of
${\bf{n}}_1,\ldots, {\bf{n}}_k.$ We consider the function
$$\phi= 1+H\langle \psi,N\rangle +\langle a,N\rangle.$$
Proceeding as in the proof of Theorem \ref{thm:half-space}, one shows that
$\int_{\Sigma} \phi \,dA=0,$ so that $\phi$ can be used as a test function, and that
$$\mathcal I(\phi,\phi) = -\int_{\Sigma} (|\sigma|^2-nH^2) \phi\, dA= -\int_{\Sigma} (|\sigma|^2-nH^2) (1+H\langle \psi, N\rangle + \langle a, N\rangle) \, dA.$$
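In more detail, the first claim follows by integrating ${\text {div}} \,\psi^{T} = n +nH\langle \psi,N\rangle$ and applying Proposition \ref{prop:normalintegral} together with the relations $\langle\psi,\nu_i\rangle=\cos\theta_i\,\langle\psi,\bar\nu_i\rangle$ and $\langle\psi,N\rangle=-\sin\theta_i\,\langle\psi,\bar\nu_i\rangle$ on each $\Gamma_i$:
\begin{align*}
n\int_{\Sigma} \{1+H\langle \psi,N\rangle\}\,dA&=\sum_{i=1}^k\cos\theta_i\int_{\Gamma_i}\langle\psi,\bar\nu_i\rangle\,ds,\\
n\int_{\Sigma}\langle a,N\rangle\,dA&=\sum_{i=1}^k\langle a,{\bf n_i}\rangle\int_{\Gamma_i}\langle\psi,\bar\nu_i\rangle\,ds=-\sum_{i=1}^k\cos\theta_i\int_{\Gamma_i}\langle\psi,\bar\nu_i\rangle\,ds.
\end{align*}
Adding the two identities gives $\int_{\Sigma}\phi\,dA=0.$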
We claim that
\begin{equation}\label{middle intergal2} \int_{\Sigma} (|\sigma|^2-nH^2) H\langle \psi, N\rangle\,dA=0.
\end{equation}
To prove this, write $\partial\Sigma=\cup_{i=1}^k \Gamma_i,$ with $\psi(\Gamma_i)\subset P_i.$ As in the proof of Theorem \ref{thm:half-space}, considering the function $F=|H\psi+N|^2,$ one shows that
\begin{equation}\label{integral 0}
H\int_{\Sigma} (nH^2-|\sigma|^2)\langle\psi,N\rangle \,dA= -(n-1)H \sum_{i=1}^k \cos\theta_i\int_{\Gamma_i}(H+\sin\theta_i\, H_{\Gamma_i})\langle\psi,\bar\nu_i\rangle\, ds,
\end{equation}
where $H_{\Gamma_i}$ is the mean curvature of the immersion $\psi |_{\Gamma_i}:\Gamma_i \longrightarrow P_i$ with respect to the unit normal $\bar\nu_i.$ As before, Proposition \ref{prop:normalintegral} together with the relation $\langle \psi,\nu_i\rangle=\cos\theta_i\langle \psi, \bar\nu_i\rangle$ on $\Gamma_i,$ gives
\begin{equation}\label{integral 1}
n\int_{\Sigma} N\,dA= \sum_{i=1}^k \left(\int_{\Gamma_i}\langle\psi,\bar\nu_i\rangle\,ds\right){\bf n_i}.
\end{equation}
Integrating the relation $\Delta\psi=nHN,$ we obtain
\begin{equation*} nH\int_{\Sigma} N\,dA= \sum_{i=1}^k \int_{\Gamma_i} \nu_i\,ds=
\sum_{i=1}^k \int_{\Gamma_i}(\cos\theta_i\,\bar\nu_i+\sin\theta_i\,{\bf n_i})\,ds.
\end{equation*}
By Proposition \ref{prop:normalintegral} applied to the immersion $\psi |_{\Gamma_i}: \Gamma_i\longrightarrow P_i,$ one has $\int_{\Gamma_i}\bar\nu_i\,ds=0.$ So,
\begin{equation}\label{integral 2}
nH\int_{\Sigma} N\,dA= \sum_{i=1}^k \sin\theta_i |\Gamma_i|{\bf n_i}.
\end{equation}
From (\ref{integral 1}), (\ref{integral 2}) and the independence of the ${\bf n_i}$'s, we infer that
\begin{equation}\label{integral 3} \int_{\Gamma_i}H\langle\psi,\bar\nu_i\rangle\,ds =\sin\theta_i |\Gamma_i|.
\end{equation}
Applying the Minkowski formula to the immersion $\psi |_{\Gamma_i}: \Gamma_i\longrightarrow P_i,$ we get
\begin{equation}\label{integral 4}
|\Gamma_i|= -\int_{\Gamma_i} H_{\Gamma_i} \langle \psi,\bar\nu_i\rangle\,ds.
\end{equation}
(\ref{integral 3}) and (\ref{integral 4}) give
$$\int_{\Gamma_i}(H+\sin\theta_iH_{\Gamma_i})\langle\psi,\bar\nu_i\rangle\, ds=0.$$
Together with (\ref{integral 0}) this proves (\ref{middle intergal2}).
Summarizing, we have proved that
$$\mathcal I(\phi,\phi) = -\int_{\Sigma} (|\sigma|^2-nH^2) (1 + \langle a, N\rangle) \, dA.$$
As in the proof of Theorem \ref{thm:half-space}, we can conclude that $\psi(\Sigma)$ is totally umbilical provided $|a|<1.$
Now, recall that $a$ is the unique solution, in the linear space spanned by ${\bf n_1,\dots,n_k},$ to the system of linear equations $\langle a,{\bf n_i}\rangle=-\cos\theta_i, \,i=1,\ldots,k,$
and depends continuously on the $\theta_i$'s. When $\theta_i=\frac{\pi}{2},$ for $ i=1,\ldots,k,$ the solution to the system is $a=0.$ Therefore
there exists $\delta>0$ such that if $\theta_i\in (\frac{\pi}{2}-\delta,\frac{\pi}{2}+\delta), i=1,\ldots,k,$ then $|a|<1$ and $\psi(\Sigma)$ is totally umbilical. Note that $\psi(\Sigma)$ cannot be planar because otherwise it would not be disjoint from the edges of $\mathcal B.$ Therefore $\psi(\Sigma)$ is a piece of a sphere.
\end{proof}
Conversely, a part of a sphere as above is stable and even a minimizer of the energy functional $E$ among all embedded hypersurfaces in
$\mathcal B$ enclosing the same volume, see \cite{zia et al.}.
The free boundary case, that is, when $\theta_i=\frac{\pi}{2},$ for each $i=1,\ldots,k,$ was treated by Lopez \cite{lopez}. The only possibility for a piece of a sphere to meet orthogonally the hyperplanes $P_i$ is to be centered at a point in the intersection $\cap_{i=1}^k P_i.$
The conclusion of Theorem \ref{thm:planar boundaries} is therefore that a stable hypersurface with free boundary has to touch some of the edges of $\mathcal B.$
\begin{document}
\title{Essential Domains and Two Conjectures in Dimension Theory}
\author[M. Fontana]{M. Fontana}
\address{Dipartimento di Matematica, Universit\`a degli Studi ``Roma Tre'', 00146 Roma, Italy}
\email{fontana@mat.uniroma3.it}
\author[S. Kabbaj]{S. Kabbaj}
\address{Department of Mathematics, P.O. Box 5046, KFUPM, Dhahran 31261, Saudi Arabia}
\email{kabbaj@kfupm.edu.sa}
\subjclass[2000]{Primary 13C15, 13F20, 13F05, 13G05, 13B02, 13B30}
\thanks{The first author was partially supported by a research grant
MIUR 2001/2002 (Cofin 2000-MM01192794). The second author was supported by the Arab Fund for Economic
and Social Development}
\thanks{This work was done while both authors were visiting Harvard University}
\keywords{Krull dimension, valuative dimension, Jaffard domain, integer-valued polynomial
ring, essential domain, Krull domain, UFD, PVMD, Kronecker function ring, star operation}
\begin{abstract} This note investigates two long-standing conjectures on the Krull dimension
of integer-valued polynomial rings and of polynomial rings, respectively, in the context of
(locally) essential domains.
\end{abstract}
\maketitle
\section{Introduction}
Let $R$ be an integral domain with quotient field $K$ and let
$\Int(R):=\{f \in K[X]: f(R) \subseteq R\}$ be the ring of
integer-valued polynomials over $R$. Considerable work, part of it
summarized in Cahen-Chabert's book \cite{CC2}, has been concerned
with various aspects of integer-valued polynomial rings. A central
question concerning $\Int(R)$ is to describe its prime spectrum
and, hence, to evaluate its Krull dimension. Several authors
tackled this problem and many satisfactory results were obtained
for various classes of rings such as Dedekind domains
\cite{Ch1,Ch2}, Noetherian domains \cite{Ch2}, valuation and
pseudo-valuation domains \cite{CH}, and pseudo-valuation domains
of type $n$ \cite{T}. A well-known feature is that $\dim(R[X])-1
\leq \dim(\Int(R))$ for any integral domain $R$ \cite{Ca1}.
However, the problem of improving the upper bound
$\dim(\Int(R))\leq \dim_{v}(R[X])=\dim_{v}(R)+1$ \cite{FIKT1},
where $\dim_{v}(R)$ denotes the valuative dimension of $R$, is
still elusively open in general. It is due, in part, to the fact
that the fiber in $\Int(R)$ of a maximal ideal of $R$ may have
any dimension \cite[Example 4.3]{Ca1} (this stands for the main
difference between polynomial rings and integer-valued polynomial
rings). Noteworthy is that all examples conceived in the
literature for testing $\dim(\Int(R))$ satisfy the inequality
$\dim(\Int(R)) \leq \dim(R[X])$. In \cite{FIKT1,FIKT2}, we
undertook an extensive study, under two different approaches, in
order to grasp this phenomenon. We got then further evidence for
the validity of the conjecture:
\begin{center}
($\mathcal{C}_{1}$)\quad {\em $\dim(\Int(R))\leq \dim(R[X])$ for any integral domain $R$}.
\end{center}
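For context, the estimates recalled above combine, for any finite-dimensional integral domain $R,$ into the chain
\begin{equation*}
\dim(R[X])-1\ \leq\ \dim(\Int(R))\ \leq\ \dim_{v}(R)+1\ =\ \dim_{v}(R[X]),
\end{equation*}
so that ($\mathcal{C}_{1}$) amounts to lowering the upper bound to $\dim(R[X]).$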
The current situation can be described as follows: ($\mathcal{C}_{1}$) turned out to be
true in three large (presumably different) classes of commutative rings, namely: (a)
Krull-type domains, e.g., unique factorization domains (UFDs) or Krull domains \cite{Gr2,FIKT2};
(b) pseudo-valuation domains of type $n$ \cite{FIKT2}; and (c) Jaffard domains \cite{FIKT1,Ca1}.
A finite-dimensional domain $R$ is said to be Jaffard if
$\dim(R[X_{1}, ..., X_{n}]) = n + \dim(R)$ for all $n \geq 1$;
equivalently, if $\dim(R) = \dim_{v}(R)$ \cite{ABDFK,BK,DFK,G,J}. The
class of Jaffard domains contains most of the well-known classes of
finite-dimensional rings involved in dimension theory of commutative rings such as
Noetherian domains \cite{K}, Pr\"ufer domains \cite{G}, universally catenarian domains
\cite{BDF}, stably strong S-domains \cite{Kab,MM}. However, the question of establishing
or denying a possible connection to the family of Krull-like domains is still unsolved. In
this vein, Bouvier's conjecture (initially, announced during a 1985 graduate course at the
University of Lyon I) sustains that:
\begin{center}
($\mathcal{C}_{2}$)\quad {\em finite-dimensional Krull domains, or
more particularly UFDs, need not be Jaffard domains}.
\end{center}
As the Krull property is stable under adjunction of indeterminates,
the problem merely deflates to the existence of a Krull domain $R$
with \(1+\dim(R)\lneqq \dim(R[X])\). It is notable that the rare
non-Noetherian finite-dimensional UFDs or Krull domains existing in
the literature do defeat ($\mathcal{C}_{2}$), since all are Jaffard
\cite{AM,Da1,Da2,DFK,Fu,G2}. So do the examples of non-Pr\"ufer
finite-dimensional Pr\"ufer $v$-multiplication domains (PVMDs)
\cite{Gr1,HMM,MZ,Z}; as a matter of fact, these, mainly, arise as
polynomial rings over Pr\"ufer domains or as pullbacks, and both
settings either yield Jaffard domains or turn out to be inconclusive
(in terms of allowing the construction of counterexamples) \cite{ABDFK,FG}. In order to
find the missing link, one has then to dig beyond the context of PVMDs.
Essential domains happen to offer such a suitable context for ($\mathcal{C}_{2}$) as
well as a common environment for both conjectures ($\mathcal{C}_{1}$) and ($\mathcal{C}_{2}$),
though these have developed in two dissimilar milieus. An integral domain $R$ is said to be
{\em essential} if $R$ is an intersection of valuation rings that are localizations of
$R$ \cite{Gr3}. As this notion does not carry over to localizations, $R$
is said to be
{\em locally essential} if $R_{p}$ is essential for each $p\in
\Spec(R)$. Notice that the locally essential domains correspond
to the $P$-domains in the sense of Mott and Zafrullah \cite{MZ}. PVMDs and almost Krull
domains
\cite[p. 538]{G} are perhaps the
most important examples of locally essential domains. Recall that Heinzer
constructed in \cite{H2} an example of an essential domain
that is not locally essential. Also, it is worth noticing that Heinzer-Ohm's example
\cite{HO} of an essential domain which is not a PVMD is, in fact, locally
essential (cf. \cite[Example 2.1]{MZ}). Finally recall that a Krull-type
domain is a PVMD in which no non-zero
element belongs to an infinite number of maximal $t$-ideals
\cite{Gr1}. We have thus the following implications within the family of
Krull-like domains:\medskip
\begin{center}
\begin{tabular}{ccc}
 & UFD & \\
 & $\downarrow$ & \\
 & Krull & \\
$\swarrow$ & & $\searrow$ \\
Krull-type & & Almost Krull \\
$\downarrow$ & & \\
PVMD & & \\
$\searrow$ & & $\swarrow$ \\
 & Locally Essential & \\
 & $\downarrow$ & \\
 & Essential &
\end{tabular}
\end{center}
The purpose of this note is twofold. First, we state a result
that widens the domain of validity of ($\mathcal{C}_{1}$) to the
class of locally essential domains. It is well-known that
($\mathcal{C}_{1}$) holds for Jaffard domains too
\cite{FIKT1,Ca1}. So one may enlarge the scope of study of
($\mathcal{C}_{2}$), discussed above, and legitimately raise the
following problem:
\begin{center}
($\mathcal{C'}_2$)\quad {\em Is any finite-dimensional (locally)
essential domain Jaffard?}
\end{center}
Clearly, an affirmative answer to ($\mathcal{C'}_2$) will
definitely defeat ($\mathcal{C}_2$); while a negative answer will
partially resolve ($\mathcal{C}_2$) for the class of (locally)
essential domains. Our second aim is to show that the rare
constructions of non-trivial (locally) essential domains (i.e.,
non-PVMD) existing in the literature yield Jaffard domains,
leaving therefore ($\mathcal{C'}_2$) with the status of an open problem.
Consequently, a settlement of ($\mathcal{C}_2$) seems, at present,
out of reach.
\section{Result and example}
In the first part of this section, we establish the following result.
\begin{theorem}
For any locally essential domain $R$, $\dim(\Int(R))=\dim(R[X])$.
\end{theorem}
\begin{proof} Assume that $R$ is finite-dimensional and $R\not=K$,
where $K$ denotes the quotient field of $R$.
Let \(R=\bigcap_{p\in\Delta}R_{p}\) be a locally essential domain,
where $\Delta\subseteq\Spec(R)$. Set:
\(\begin{array}{ll}
\Delta_{1}:= &\{p\in\Delta: R_{p}\ \textup{is a DVR}\}\\
\Delta_{2}:= &\{p\in\Delta: R_{p}\ \textup{is a valuation domain but
not a DVR}\}.
\end{array}\)\\
We wish to show first that $\dim(\Int(R))\leq\dim(R[X])$. Let $M$ be a
maximal ideal of $\Int(R)$ such that $\dim(\Int(R))=\htt(M)$ and let
$\mathcal{M}:= M\cap R$. Without loss of generality, we may assume
that $\mathcal{M}$ is maximal in $R$ with a finite residue field. We
always have $R_{\mathcal{M}}[X]\subseteq (\Int(R))_{\mathcal{M}}\subseteq
\Int(R_{\mathcal{M}})$ \cite[Corollaires (4), p. 303]{CC1}. If
$\mathcal{M}\in\Delta_{1}$, then $R_{\mathcal{M}}[X]$ is a two-dimensional
Jaffard domain \cite[Theorem 4]{S2} and \cite[Proposition 1.2]{ABDFK}. So the
inclusion $R_{\mathcal{M}}[X]\subseteq(\Int(R))_{\mathcal{M}}$ yields
$\dim((\Int(R))_{\mathcal{M}})\leq \dim_{v}(R_{\mathcal{M}}[X])=\dim(R_{\mathcal{M}}[X])$.
Thus, $\dim((\Int(R))_{\mathcal{M}})=\dim(R_{\mathcal{M}}[X])=2$. If
$\mathcal{M}\in\Delta_{2}$, then $\Int(R_{\mathcal{M}})=R_{\mathcal{M}}[X]=
(\Int(R))_{\mathcal{M}}$ \cite[Exemples (5), p. 302]{CC1}. If $\mathcal{M}\notin\Delta$,
then \(R_{\mathcal{M}}=\bigcap_{p\in\Delta, p\subsetneqq\mathcal{M}}R_{p}\) since $R$ is
a locally essential domain. So that by \cite[Corollaires (3), p. 303]{CC1}
$\Int(R_{\mathcal{M}})=\bigcap_{p\in\Delta, p\subsetneqq\mathcal{M}}\Int(R_{p})
=\bigcap_{p\in\Delta, p\subsetneqq\mathcal{M}}R_{p}[X]=R_{\mathcal{M}}[X]=
(\Int(R))_{\mathcal{M}}$. In all cases, we have $\dim(\Int(R))=
\dim((\Int(R))_{\mathcal{M}})=\dim(R_{\mathcal{M}}[X])\leq\dim(R[X])$, as desired.
We now establish the reverse inequality
$\dim(R[X])\leq\dim(\Int(R))$. Let $M$ be a maximal ideal of $R[X]$ such that $\dim(R[X])=\htt(M)$, and
$\mathcal{M}:= M\cap R$. Necessarily, $\mathcal{M}$ is maximal in
$R$. Further, we may assume that $\mathcal{M}$ has a finite residue
field. If $\mathcal{M}\in\Delta_{2}$ or $\mathcal{M}\notin\Delta$,
similar arguments as above lead to
$R_{\mathcal{M}}[X]=(\Int(R))_{\mathcal{M}}$ and hence to the
conclusion. Next, suppose $\mathcal{M}\in\Delta_{1}$. Then,
$\Int(R_{\mathcal{M}})$ is a two-dimensional (Pr\"ufer) domain \cite{Ch3}.
Let $(0)\subsetneqq P_{1}\subsetneqq P_{2}$ be a maximal chain of prime ideals
in $\Spec(\Int(R_{\mathcal{M}}))$. Clearly, it contracts to
$(0)\subsetneqq \mathcal{M}R_{\mathcal{M}}$ in $\Spec(R_{\mathcal{M}})$.
Further, by \cite[Corollaire 5.4]{Ca1}, $P_{1}$ contracts to $(0)$. Therefore,
by \cite[Proposition 1.3]{Ch2}, $P_{1}=fK[X]\cap\Int(R_{\mathcal{M}})$, for some
irreducible polynomial $f\in K[X]$. This yields in $\Spec(R_{\mathcal{M}}[X])$ the maximal chain:
\[(0)\subsetneqq P_{1}\cap R_{\mathcal{M}}[X]=fK[X]\cap
R_{\mathcal{M}}[X]\subsetneqq P_{2}\cap R_{\mathcal{M}}[X],\]
which induces in $\Spec((\Int(R))_{\mathcal{M}})$ the following maximal chain:
\[(0)\subsetneqq P_{1}\cap (\Int(R))_{\mathcal{M}}=
fK[X]\cap (\Int(R))_{\mathcal{M}}\subsetneqq P_{2}\cap (\Int(R))_{\mathcal{M}}.\]
Consequently, in all cases, we obtain
$\dim(R[X])=\dim(R_{\mathcal{M}}[X])=\dim((\Int(R))_{\mathcal{M}})\leq\dim(\Int(R))$,
to complete the proof of the theorem.
\end{proof}
From \cite[Proposition 1.8]{HO} and \cite[Exercise 11, p. 539]{G} we
obtain the following.
\begin{corollary}
Let $R$ be a PVMD or an almost Krull domain. Then $\dim(\Int(R))=\dim(R[X])$.
\end{corollary}
In the second part of this section, we test the problem
($\mathcal{C'}_{2}$) -set up and discussed in the introduction-
for the class of non-PVMD (locally) essential domains. These occur
exclusively in Heinzer-Ohm's example \cite{HO} and Heinzer's
example \cite{H2} both mentioned above. The first of which is a
2-dimensional Jaffard domain \cite[Example 2.1]{MZ}. Heinzer's
example \cite{H2}, too, is a 2-dimensional Jaffard domain by
\cite[Theorem 2.3]{DFK}. Our next example shows that an
enlargement of the scope of this construction still generates a
large family of essential domains with nonessential localizations
of arbitrary dimension $\geq 2$ which are -unfortunately-
Jaffard domains.
\begin{example}\label{3.1} For any integer $r\geq2$, there
exists an $r$-dimensional essential Jaffard domain $D$ that is not locally essential.
\end{example}
Notice first that the case $r=2$ corresponds to Heinzer's example
mentioned above. In order to increase the dimension, we modify
Heinzer's original setting by considering Kronecker function rings
via the $b$-operation. For the sake of completeness, we give below
the details of this construction.\medskip
Let $R$ be an integral domain, $K$ its quotient field, $n$ a
positive integer (or $n=\infty$), and $X,X_{1}, ..., X_{n}$
indeterminates over $K$. The $b$-operation on $R$ is the a.b. star
operation defined by $I^{b}:=\bigcap\{IW: W\ \textup{is a
valuation overring of}\ R\}$, for every
fractional ideal $I$ of $R$. Throughout, we shall use $\Kr_{K(X)}(R, b)$ to denote the Kronecker
function ring of $R$ defined in $K(X)$ with respect to the $b$-operation on $R$; and $R(X_{1}, ..., X_{n})$
to denote the Nagata ring associated to the polynomial ring $R[X_{1}, ..., X_{n}]$, obtained by
localizing the latter with respect to the multiplicative system consisting of polynomials
whose coefficients generate the unit ideal of $R$.
Let $r$ be an integer $\geq 2$. Let $k_{0}$ be a field and
$\{X_{n}: n\geq 1\}$, $Y$, $\{Z_{1}, ..., Z_{r-1}\}$ be
indeterminates over $k_{0}$. Let $n$ be a positive integer. Set:
\[\begin{array}{lll}
k_{n}:=k_{0}(X_{1}, ..., X_{n}) &; & k:=\bigcup_{n\geq 1}k_{n}\\
F_{n}:=k_{n}(Z_{1}, ..., Z_{r-1}) &; & F:=\bigcup_{n\geq 1}F_{n}=k(Z_{1}, ..., Z_{r-1})\\
K_{n}:=F_{n}(Y) &; & K:=\bigcup_{n\geq 1}K_{n}=F(Y)\\
M_{n}:=YF_{n}[Y]_{(Y)} &; & M:=\bigcup_{n\geq 1}M_{n}=YF[Y]_{(Y)} \\
A_{n}:= k_{n}+M_{n} &; & A:=\bigcup_{n\geq 1}A_{n}=k+M\\
V_{n}:=F_{n}[Y]_{(Y)} &; & V:=\bigcup_{n\geq
1}V_{n}=F[Y]_{(Y)}.
\end{array}\]
Note that, for each $n\geq 1$, $V$ and $V_{n}$ (resp., $A$ and
$A_{n}$) are one-dimensional discrete valuation domains (resp.,
pseudo-valuation domains) and $\dim_{v}(A)=\dim_{v}(A_{n})=r$.
For each $n\geq 1$, set $X'_{n}:=\frac{1+YX_{n}}{Y}$. Clearly,
$K_{n}=K_{n-1}(X_{n})=K_{n-1}(X'_{n})$. Next, we define
inductively two sequences of integral domains
$\big(B_{n}\big)_{n\geq 2}$ and $\big(D_{n}\big)_{n\geq 1}$ as
follows:
\[\begin{array}{lll}
&;& D_{1}:= A_{1}\\
B_{2}:=\Kr_{K_{1}(X'_{2})}(D_{1},b) &;& D_{2}:=B_{2}\cap A_{2}\\
B_{n}:=\Kr_{K_{n-1}(X'_{n})}(D_{n-1},b) &;& D_{n}:=B_{n}\cap
A_{n},\textup{ for } n\geq 3.
\end{array}\]
For $n\geq 2$, let ${\mathcal M}_{n} := M_{n} \cap D_{n}\ (\subset
D_{n} = B_{n}\cap A_{n} \subseteq A_{n})$, where $M_{n}$
is the maximal ideal of $A_{n}$.
\begin{claim} \label{3.4}
(1) $B_{n}$ is an $r$-dimensional Bezout domain.\\
(2) \(B_{n}\cap K_{n-1}=D_{n-1}\subseteq A_{n-1}=A_{n}\cap K_{n-1}\). \\
(3) \(D_{n}\cap K_{n-1}=D_{n-1}\).\\
(4) \(D_{n}[X'_{n}]=B_{n}\) and \((D_{n})_{{\mathcal
M}_{n}}=D_{n}[\frac{1}{YX'_{n}}]=A_{n}\),
with $\frac{1}{X'_{n}}$ and $YX'_{n}\in D_{n}$. \\
(5) ${\mathcal M}_{n}$ is a height-one maximal ideal of $D_{n}$
with
${\mathcal M}_{n}\cap K_{n-1}={\mathcal M}_{n-1}$.\\
(6) For each $q\in \Spec(D_{n})$ with $q\not={\mathcal M}_{n}$
there exists a unique
prime ideal $Q\in \Spec(B_{n})$ contracting to $q$ in $D_{n}$ and \((D_{n})_{q}=(B_{n})_{Q}\).\\
(7) \(B_{n}=\bigcap\{(D_{n})_{q}: q\in \Spec(D_{n})\ \textup{and}\ q\not={\mathcal M}_{n}\}\).\\
(8) For each $q'\in \Spec(D_{n-1})$ with $q'\not={\mathcal
M}_{n-1}$ there exists
a unique prime ideal $q\ (\not={\mathcal M}_{n})\in \Spec(D_{n})$ contracting to
$q'$ in $D_{n-1}$ such that \((D_{n})_{q}=(D_{n-1})_{q'}(X'_{n})\).
\end{claim}
\begin{proof} Similar arguments as in \cite{H2} lead to (1)-(7).\\
(8) By (7), $B_{n-1}\subseteq(D_{n-1})_{q'}$, and hence
$(D_{n-1})_{q'}$ is a valuation domain in $K_{n-1}$ of dimension
$\leq r$ containing $D_{n-1}$. Since $B_{n}$ is the Kronecker
function ring of $D_{n-1}$ defined in $K_{n-1}(X'_{n})$ by all
valuation overrings of $D_{n-1}$, then $(D_{n-1})_{q'}$ has a
unique extension in $K_{n-1}(X'_{n})$, which is a valuation
overring of $B_{n}$, that is, $(D_{n-1})_{q'}(X'_{n})$. By (7),
the center $q$ of $(D_{n-1})_{q'}(X'_{n})$ in $D_{n}$ is the
unique prime ideal of $D_{n}$ lying over $q'$ and
\((D_{n})_{q}=(D_{n-1})_{q'}(X'_{n})\).
\end{proof}
Set $D:=\bigcup_{n\geq 1}D_{n}$ and ${\mathcal M}:=\bigcup_{n\geq
2}{\mathcal M}_{n}$. It is obvious that $D \subseteq
A=\bigcup_{n\geq 1}A_{n}$.
\begin{claim} \label{3.5} $D_{\mathcal M}=A$ and ${\mathcal M}$ is a height-one maximal
ideal in $D$.
\end{claim}
\begin{proof}
It is an easy consequence of Claim \ref{3.4}(4), since
${\mathcal M}_{n} ={\mathcal M}\cap D_{n}$, for each $n$.
\end{proof}
Let $q\in \Spec(D)$ with $q\not={\mathcal M}$. Then, for some
$m\geq 2$, we have in $D_{m}$, $q_{m}:=q\cap D_{m}\not={\mathcal
M}_{m} ={\mathcal M}\cap D_{m}$. So, by Claim \ref{3.4}(6),
$B_{m}\subseteq(D_{m})_{q_{m}}$, hence $(D_{m})_{q_{m}}$ is a
valuation overring of $B_{m}$ of dimension $\leq r$, whence, by
Claim~\ref{3.4}(8), $D_{q}=\bigcup_{n\geq
1}(D_{n})_{q_{n}}=(D_{m})_{q_{m}}(X'_{m+1}, ...)$ is still a
valuation domain of dimension $\leq r$.
\begin{claim} \label{3.6}
$D=\bigcap\{D_{q}: q\in \Spec(D)\ \textup{and}\ q\not={\mathcal
M}\}$.
\end{claim}
\begin{proof}
Similar to \cite{H2}.
\end{proof}
From Claims \ref{3.5} and \ref{3.6} we obtain:
\begin{conclusion}
$D$ is an essential domain with a nonessential localization and
$\dim(D)=\dim_{v}(D)=r$.
\end{conclusion} | 0.001165 |
If you're looking to reduce waste and save some money on your Thanksgiving decor, what better way to pull it off than by decorating with trash? To help you deck your halls the eco way, Earth911 rounded up these 11 creative decor ideas made from waste items you already have.
Homepage Image: Laura Russell/Make Life Lovely | 0.638921 |
The Lessons Learned in Watching a Loved-One Die (July 24, 2018) via The Lessons Learned in Watching a Loved-One Die
This is beautiful, sad, and complex. In the few experiences I have had with dying loved ones, rarely is the decision about how to proceed as simple or straightforward as we wish it would be, even with advanced directives and previous discussions in place. As with this lady, it’s very challenging to appropriately meet the needs of someone whose cognition is changing radically, and not on all levels at once. It’s a humbling thing. | 0.003821 |
TITLE: collapsing a subspace is the same thing as attaching a cone of this subspace and then collapsing this cone
QUESTION [1 upvotes]: Let $(X,A)$ be a "good" CW pair. Let $*\in A \subset X$ be the base point of $X$,
and let $CA$ be the cone on $A$. I want to show that
$(X\cup CA)/CA$ is homeomorphic to $X/A$. I can see it geometrically, but I want to prove it. Consider
the composite
$$f:X\cup CA \to X \to X/A $$ where the first map sends $x$ to $x$ and $(a,t)\in CA$ to $a$, and the second map is the quotient map. $f$ is a surjective map sending $CA$ to the base point of $X/A$, which is the class of $*$ identified with $A$, so $f$ factors through $(X\cup CA)/CA$ and the induced map
$${\tilde f}:(X\cup CA)/CA\to X/A$$ is a homeomorphism. Is this correct?
REPLY [1 votes]: Following your argument, you only get a continuous bijection; you still have to prove that its inverse is continuous.
You can see this by doing the same thing of what you did from the other direction with the map
$$g: X \rightarrow X \cup CA \rightarrow (X \cup CA) /CA $$
which is inclusion followed by the quotient map, hence continuous. Obviously it is surjective. As in your argument, $g(A)$ is the class of $*$, so you get an induced map $$\tilde{g}: X/A \rightarrow (X \cup CA) /CA$$
and that is your continuous inverse.
It seems to me that $(X,A)$ does not have to be a CW pair for this to work. | 0.008618 |
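For completeness, here is the standard universal property of quotient maps that both arguments use, stated in general (this is a well-known fact, added here for reference rather than taken from either post): if $q:Y\to Y/{\sim}$ is a quotient map and $h:Y\to Z$ is continuous and constant on each equivalence class, then there is a unique continuous map $\tilde h$ with
$$h=\tilde h\circ q, \qquad \tilde h([y]):=h(y).$$
Applying this twice, to $f$ and to $g$, produces $\tilde f$ and $\tilde g$; since $\tilde g\circ\tilde f$ and $\tilde f\circ\tilde g$ are identities on the respective quotients, both are homeomorphisms.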
TITLE: complex analysis differentiation and existence of a point?
QUESTION [2 upvotes]: If $f(z) = z^3$
prove that there is no point $c$ on line segment $[1,i]$
s.t. $(f(i)-f(1)) / (i-1) = f'(c)$.
So differentiating:
$$f'(c) = 3c^2$$
$$3c^2 = (f(i)-f(1))/(i-1) = (-i-1)/(i-1) = i$$
Hence $c = \sqrt{i/3}$.
Am I doing this right?
Could anyone also clarify what the line segment $[1,i]$ means?
Is it the diagonal line from the point $1$ on the real axis to the point $i$ on the imaginary axis?
REPLY [1 votes]: You need to be aware that $3c^2=i$ has two solutions, and you need to show that neither one is on the line segment from $[1,i]$.
Lines in the complex plane are usually described in terms of a parametric equation. For any two distinct complex numbers $a$ and $b$, the line through both is parameterized as
$$a+(b-a)t=(1-t)a+bt, \qquad t\in\Bbb R.$$
Note that for $t=0$, we get $a$, and for $t=1$ we get $b$. So, do this for $a=1$ and $b=i$.
As you say, this is the diagonal line from the point $1$ on the real axis to $i$ in the imaginary axis. | 0.035905 |
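To make the non-existence concrete, here is a quick numerical sketch (my own addition, not part of the original answer; the parameterization $c(t)=(1-t)+ti$ follows the answer's recipe with $a=1$, $b=i$): it checks that $3c^2$ stays bounded away from $i$ everywhere on the segment from $1$ to $i$.

```python
# Check that f'(c) = 3c^2 never equals i for c on the segment [1, i].
# Parameterize the segment as c(t) = (1-t)*1 + t*i for t in [0, 1].

def gap(t: float) -> float:
    """Distance |3*c(t)**2 - i| at parameter t."""
    c = complex(1 - t, t)
    return abs(3 * c**2 - 1j)

# Sample the segment densely; the minimum gap stays bounded away from 0.
smallest = min(gap(k / 10_000) for k in range(10_001))
print(round(smallest, 3))  # minimum occurs at t = 1/2, where |3c^2 - i| = 0.5
```

Algebraically this matches the answer: the real part of $3c(t)^2$ is $3(1-2t)$, which vanishes only at $t=1/2$, where the imaginary part is $6t(1-t)=3/2\neq 1$.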
Need To Improve Your Search Engine Optimization? Employ These Ideas For Better Rankings!
SEO is the process where a webmaster tweaks his or her site around to get the highest search engine ranking. It’s one of the best ways to assure your website’s success. Some people treat SEO as rarefied knowledge that no ordinary person can learn. Do not listen to these people!
Pay-per-click is an effective way to utilize affiliate marketing tactics on your page. Although the profits start small, they can rapidly increase based on affiliate referrals.
After you carefully determine which key-phrases you will “sprinkle” throughout your website, make sure to include them in your web page.
Use a site map to help boost traffic to your website. This will make all of your pages accessible from each other. Visitors will utilize the links you have created and this will effectively increase traffic to your site.
Put those keywords into your URLs! If that URL has symbols and numbers that people probably won’t search for, then that page may not rank as highly in the search engines. Including relevant keywords improves a page’s traffic.
Blogging on your own website will increase traffic because it will be more visible to search engines. There will be more people visiting your site than ever.
Search Engines
Proofread content moves products, but many webmasters forget this critical step. Take the time to see that visitors and the search engines can comprehend the information on your site. If your content is poorly written and is full of spelling and grammatical errors, your website will not rank well by search engines, if at all.
Always try and generate new content as often as you can. Set a weekly goal for yourself, and make sure to stick to it. Websites that show the ability to generate an ever changing supply of unique content receive higher marks from search engines than sites with static material. Sites with fresh content tend to have higher search engine rankings.
Don’t duplicate any content on your pages. Know that it is quite possible to use duplicate content without even knowing it. Using the same description is easy, but could be flagged as spam.
One of the latest ways of getting information out is through podcasts. A podcast consists of informational content delivered in audio or video form, sometimes live, and they always should contain relevant information based on the topic of your show. Podcasts are skyrocketing in popularity and are remarkably easy to create. Doing this will allow the description of your podcast to appear.
Invest in adbrite, adwords or other advertising options. SEO is helpful, but bringing in traffic may require a financial investment. These advertisements will increase your views. Using products such as those from Google can make a huge difference.
Javascript can be used, but some search engines overlook it. In other words, Java is something you can choose, but because of the uniqueness of the script. It is possible that web crawlers will not pick up on the site the way you might expect.
You can make your site more visible to search engines by getting local listings on Google and Yahoo. Utilizing free listing services will help you increase traffic and search engine rankings. You should never neglect to use a free or low cost opportunity to advertise your website.
When working on improving search engine optimization, it is crucial to take advantage of social media sites. Both Facebook and Twitter are great for interacting with customers, while YouTube is perfect for product demonstrations and other videos.
Search Engine
When writing for search engine results, you can profit from using keywords. When you are setting goals for search engine optimization, be mindful that you are not just writing for search engine spiders, you are writing for humans as well.
Search engine optimization requires patience and consistency. You want to get rewarded and see positive results for your hard work. Building an online presence will take time and effort.
Search engine optimization, or SEO for short, is a type of marketing that can boost your business to the next level. The key is to use words that will generate a higher ranking for your site. This will guide people, searching for your products or services, directly to your virtual doorstep.
One tool that you must have is Google Analytics. This program will be instrumental in helping you to see how your SEO is progressing and help you learn how to improve your search engine ranking. You can find out exactly which keywords are helping to bring traffic by using this tool. You can then modify your website to give more focus to these keywords.
While a lot of SEO is done by professional marketers, even beginners can get involved. Some of the aforementioned tools can get you started with SEO. Excellent website traffic is just a hop, skip and a jump away! | 0.995465 |
- The special hard coating on the rivet holes and the drive links gives the chains low chain elongation.
- Re-tensioning of chains is required less regularly due to the rivet holes and drive links.
- The chains’ special layer structure gives them a harder coating on the outside than on the inside.
- This makes the surface layer harder than the cut material particles, reducing wear during the cutting process.
- The chain’s material characteristics maintain elasticity. | 0.475211 |
For some reason they call this coupon a "gift certificate" but it's really just a coupon for $10 off of any purchase over $50. It's good only at the Portland/Clackamas location on SE 82nd and must be used by September 3rd. Sadly, I'm broke at the moment but I want to see someone put this to good use. If you want it, let me know when you can pick it up. I live near SE 82nd and Duke so I'm on the way. | 0.002571 |
Call 020 3551 4525 sales@designcolour.com
Designcolour provides the highest quality of embroidery and printing onto polo-shirts, t-shirts, sweatshirts, fleeces, jackets, caps and beanies as well as many other uniform clothing items. We are one of London’s premier embroidery companies producing hundreds of quality embroidery garments every day.
Designcolour employs the latest computer-controlled embroidery machines coupled with the highest quality threads to produce a highly professional finish, with all work being checked for quality before being despatched to our customers.
By using embroidered uniforms for your staff you are sending a true image of quality to your customers ensuring that your staff appear smart and presentable. Also by choosing embroidery you are guaranteeing the decoration will last the life of the garment.
Designcolour is an established family run embroidery business which provides customers with a vast range of garments and promotional accessories, at the most competitive rates, whatever the occasion. We offer the fastest lead times and a personal service to each and every one of our valued customers.
Our customer base includes a wide range of promotional and marketing companies, sports and leisure clubs, schools and school shops for school badge embroidery, colleges and universities as well as a wide range of corporate companies which are both local and national.
We also carry out numerous commission-based jobs for a variety of specialised uniform providers on ready-made garments as well as specially treated fabric panels which are to be used for customised corporate clothing.
A complete embroidered badge and epaulette making service is also offered from sew-on badges to Velcro and iron-on badges supplied to your size requirements.
Embroidery can be executed on a comprehensive range of garments including leisure wear, work wear as well as formal clothing, all of which can be supplied from our extensive range of stock, and can be viewed from our on-line brochure. Requirements for customised garments can also be sourced from various national and global contacts.
Your designs can be produced accurately and quickly. Custom designs can also be created at your request. Individual quotes can be offered within 24 hours, with samples delivered within 72 hours and typical lead times being 7-10 working days.
For promotional or corporate events your requirements can be serviced quickly and efficiently by our friendly and helpful staff, who are ready to discuss your ideas with you. Please call us for FREE using our Click to call button or call 020 8838 6611. Alternatively you can email us using the form on the Contact Us page or simply email sales@designcolour.com. We look forward to hearing from you. | 0.408456 |
Rising Seas Will Swamp Homes, Says Harrabin
By Paul Homewood
More overhyped nonsense from Roger Harrabin:
It estimates that by the 2080s, up to 1.2 million homes may be at increased risk from coastal floods.
Let’s look at the CCC report itself:
It starts by saying the obvious, that coastal communities already face threats from coastal flooding and erosion. They quantify this:
520,000 properties sounds a lot, but these are in areas with a risk of only 0.5%, ie once every 200 years. There also does not appear to be any quantification of how severe such flooding might be.
As the report does point out, coastal flooding only occurs when the “sea water level is extremely high”, mainly during storm surges. By definition, these are very short-lived, low-impact events, certainly not in the same league as inland floods.
The BBC’s headline about “swamping homes” is irresponsible alarmism, that has more in common with “The Day After Tomorrow” than fact. Having a few inches of sea water outside your front door for a couple of hours is hardly Armageddon.
What is also notable is the the extremely low number of properties at risk from coastal erosion, despite the apocalyptic stories we often hear. Losing your home over a cliff edge is something nobody would want, but all 8900 households could be re-homed at a tiny annual cost, if spread over a decade or two.
But then we get to the nitty gritty:
In evidence, the CCC begin by showing the Woodworth et al graph.
Woodworth states that since 1901, the rate of sea level rise has been 1.4mm/yr, after allowing for land movement.
The rate of rise has not accelerated over the period as a whole, and sea levels have actually dropped in recent years (something not uncommon).
The 1m of sea level rise predicted by the CCC would take 714 years at this rate. Of course, some parts of England are sinking, but there is nothing the CCC can do about this anyway.
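As a back-of-the-envelope check of the arithmetic above (my own sketch, using the rounded 1.4mm/yr trend quoted from Woodworth and the CCC's "1m by the 2080s" framing):

```python
# Observed trend quoted above: ~1.4 mm/yr since 1901 (Woodworth et al.).
observed_rate_mm_per_yr = 1.4
target_rise_mm = 1000          # the CCC's 1 m scenario

# Years needed for 1 m of rise at the observed historical rate.
years_at_observed_rate = target_rise_mm / observed_rate_mm_per_yr
print(round(years_at_observed_rate))   # ~714 years

# Rate implied by "1 m in 80 years".
implied_rate_mm_per_yr = target_rise_mm / 80
print(implied_rate_mm_per_yr)          # 12.5 mm/yr, roughly 9x the observed trend
```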
So where does the CCC get its “1m in 80 years” from, something that would mean sea level rise immediately increasing from 1.4mm to 12.5mm/yr?
It is in fact based on the much criticised RCP8.5 scenario, which assumes high-end emissions of GHGs.
RCP 8.5 has long been dismissed by scientists as being unrealistically high, even if there is no international agreement to cut emissions. The central temperature rise for RCP 8.5 has likewise been dismissed as unrealistic.
The more alarming projections also involve one outlier study by DeConto & Pollard. In contrast, most of the IPCC AR5 projections only represent a rise of about half a meter by the end of the century, something that is still well above the current rate of rise. Even IPCC’s RCP 8.5 only estimates sea level rise of 0.74m by 2100.
Even under the worst scenario of a 4C rise in global temperatures, the CCC only reckon that the number of properties at risk will triple. But this assumes that there will be no adaptation.
Given that we have a century or more to prepare for such an eventuality, it is hard to see that we would just sit back and watch the tide come in.
In practice, flood defences will be strengthened, areas close to the sea will be not be redeveloped, and instead new homes will be built on higher ground, maybe just a few hundred yards inland. And infrastructure will be made more resilient.
None of this is rocket science, and is the sort of thing we have been doing for the last century or so when sea levels have been rising.
None of this calls for the sort of blind panic recommended by the CCC, and it would be much better handled by local communities in years to come, who will be in a much better position to assess the risks and plan solutions.
Shhhhh.
Don’t tell potential buyers of properties at Sandbanks, Dorset – where properties for sale today include one bargain basement house at only £16m.
Heck, £6m houses are two a penny.
Some even have wonderful views of Wytch Farm.
The catastrophic, unprecedented etc coastal erosion story was covered by Sky News this lunchtime. The female news lead at one point stated “let’s now talk to two communities affected by climate change”. So the story switched to Norfolk and north Yorkshire where “climate change” is supposedly eroding cliffs back to houses that, of course, shouldn’t have been built there in the first place.
It was apparently beyond the intelligence of Sky to point out that both locations feature cliffs largely comprised of mud (glacial deposits in the case of Norfolk and Yorkshire’s own Jurassic Coast in the other). It could have been worked out that muddy sedimentary rock + high tidal range + windy conditions to whip up the waves + rain to liquify the mud = slumping sediments + coastal erosion – processes that have continued unabated for thousands of years, as they have in many parts of the British Isles and beyond. But no. Such GCSE physical geography is beyond Sky who have to invoke “climate change”.
My wife’s family used to have a holiday cottage at Barmston south of Bridlington. Not only does the cottage no longer exist neither, in any meaningful sense, does Barmston.
The Holderness coast is almost completely sand right the way back to Beverley, which is where the well-established and well-known erosion is likely to come to a halt. Spurn Point is where the sand ends up as the currents drive it south.
Since I know personally that this erosion has been happening for at the very least the last 55+ years I feel well qualified to say that, yet again, the media are feeding us prime tripe!
Interesting reading from BBC Countryfile magazine:
“Coastal flooding and erosion had destroyed most of the Norfolk village of Eccles by the turn of the 17th century. After a serious flood in 1604, the village was left with just fourteen houses, and flooding in 1895 destroyed the church tower.
The Norfolk coast is infamous for being one of the fastest eroding coastlines in Europe, because of a mix of soft clay and the battering waves of the North Sea. Along the coast, a number of villages have been abandoned or lost due to the power of the sea. Local records suggest the villages of Clare and Foulness succumbed to erosion in the 15th century, while other villages lost include Ness, Keswick, Newton, Shipden and Waxham Parva”.
RCP8.5…. report belongs in the bin.
Yes indeed; but unfortunately hysteria is very noisy, wherever you put it.
The BBC chose this morning to highlight on Radio4 the threat to Brighton in Sussex.
I haven’t been to Brighton in quite a while but if I recall correctly the beach has a fairly steep drop over the shingle to the sand. The scenario that Roger Harrabin outlined, a one metre rise by 2100, would push the dead and dying seaweed up another metre but still have room before it over-topped the shingle, let alone up onto the promenade and over the road to the properties on the other side.
Laughable.
The more they bring this into total disrepute, the better.
Or is it?
Are there wider implications for the whole of science and scientific endeavour?
Living near Brighton, you are quite right. However, what would be a blessing is for the Green Council to disappear under the waves.
“BBC Environment Analyst ” ! Analyst, surely ?
There is another possible explanation for this B.S. Namely that Harridan is looking to buy a house near the sea. In Australia it has been noticed that Tim Flannery, Kevin Rudd and Julia Bishop all purchased houses (or adjoining land to existing residence) despite hysterical claims about sea rise and “greatest moral challenge of our times”.
Perhaps Harridan is trying to lower prices?
I stayed once at a pub in the Wirral, on the banks of the Dee estuary. A once vibrant waterway, now just a mere trickle of water down the middle. This criminal lack of dredging of a major waterway, if reflected across Britain, Europe and even the rest of the world, MUST have a massive effect on sea levels.
On the Dee estuary, there is the ‘seaside’ village of Parkgate, with its row of Victorian seaside houses, some with a traditional ‘widow’s walk’ on the first floor. Looking over the sturdy sea wall, one can see – miles and miles of salt marsh!
Yes, that is where I stayed!
The railway along a cliff at Dawlish was also brought out by the 1pm BBC News today as another example of the damage likely to become more prevalent as a consequence of future sea level rise. Strange that, I don’t know another rail route along a UK cliff line. BBC has got good storm footage at Dawlish that they like to trot out at regular intervals.
“I don’t know another rail route along a UK cliff line.”
here’s a few starters –
North Wales railway at Llanfairfechan
+ Shakespeare Cliff near Dover
+ 1 in Northern Ireland
+ a few in Scotland
I stand corrected. What I should have done was comment that the threat to the Dawlish stretch of railway is from storm damage, not flooding from higher sea levels. Unless it is argued that a higher sea level will be accompanied by more intense and damaging storms, there should be no link.
RCP8.5 scenario
This nonsense is attributable (100% confidence) to the United Nation’s climate experts.
“Science is the belief in the ignorance of experts. ” Richard Feynman, 1966
Your CCC should be sued for wasting money and falsely inciting distress.
When I use any of the accepted rates and ask when we are going to see the multi-fold increase in rates, the responses are crickets or “scientists say and you can’t argue with scientists.” Even from a world known Ph. D. computer scientist. He just assumes that they are making honest, scientific appraisals.
“You can’t argue with scientists.”
That is because they have spent 99.9% of their organized time, since the age of two, simply believing whatever they are told – ‘chalk and talk’ from the teacher, ‘read and parrot’ from the youth, with an occasional catechism.
It is called ‘education.’ Which is a misnomer as ‘education’ literally means ‘drawing out what is already inside,’ not ‘filling up an empty vessel’ !
Mentally we are closer to chimps than gorillas. Chimps learn quickly because they believe one another as a matter of course. Gorillas do not believe anything unless they can test it for themselves. This does not always work well for them. I watched a hilarious film of gorillas “learning” from each other to use a rock to smash up nuts. The initial method involved holding the nut down with a foot and swinging a rock down hard. The first gorilla had figured out that it was better to put a rock on top of the nut than a foot. The other gorillas watched everything and then decided to smash nuts. Each one in turn first used his foot! Only personal experience of sore toes made them emulate the first gorilla.
Personally, I try to cultivate my inner gorilla, even at risk to my toes. The alternative is to be more prey to superstitions (ungrounded beliefs) and phobias (fears, often communicated from others) – which chimps do sometimes seem to suffer with.
If you want to see a sudden display of egoism, hate, and nastiness, intimate to a ‘scientist’ that he knows little or nothing of scientific facts from personal engagement and thought.
On the US side of the pond, alarmists are playing the same game, as shown by this graph of New York City.
A study by Union of Super Concerned Scientists is aimed to frighten property owners on both US coastlines.
The usual bo****** from the clowns in the CCC, and of course fake news from BBC.
There is a paradox here. The CCC is saying that the Government should spend lots of money to plan for future flooding in a scenario that assumes action to constrain greenhouse gas emissions is a complete and utter failure. So why is the British Government ploughing ahead with extremely costly policies to reduce UK emissions by 80% by 2050?
Argh now, how do you expect to get an intelligent answer from the UK Government when it is devoid of any intelligence or common sense?
“An increase of at least 1m is almost certain at some point in the future”
“The 1m of sea level rise predicted by the CCC would take 714 years at this rate”
714 years is “some point in the future”, so they are correct!
I’ll get my coat…
It was 1304. Edward Longshanks was in the process of ruining his country's finances by building castles in Wales, and England's economy was dependent on cloth exports. Coins had real gold in them, and we paid tax with sticks. The peasant only worked 53 days per year and the rest were his own. The country was half owned by 800 monasteries. I know, therefore, one monarch who would have put all the people complaining to work making polders to stop the greensurgency and fix the problem in one swoop, regardless of costs. . .
In the projections of harm from climate change there is a common assumption: that of zero adaptation. In the case of sea level rise, there is no response from those experiencing more frequent flooding. Relative house prices or insurance costs are unaffected. People in at-risk properties do not modify their homes.
I call it the “dumb economic actor assumption”.
Two other examples.
Last week there was a new paper projecting that beer prices could double by the end of the century due to global warming. Under RCP8.5, barley yields were projected to fall by 17%. For this to cause beer prices to double requires two sorts of dumb economic actors.
First are farmers who do not alter their crops despite changing conditions. The 17% projected yield fall was a global average with wide variations. One of the most extreme was along the US/Canada border. Around Calgary and Edmonton, projected yield falls were around the global average. A few hundred miles south, in parts of Montana and North Dakota, yields are set to double due to climate change. So why would the farmers in Montana and North Dakota not expand output and take advantage of effortless windfall profits?
The second was the projected cost of a 500ml bottle of beer. The economic model projected that the cost of the same bottle would rise by about £1.70 more in Ireland than in the UK. Nobody would have the bright idea of shipping more beer to Ireland to take advantage of the extra profits, thus making the prices more equal.
There was also the Environmental Audit Committee warning of 7,000 heat-related deaths by the 2050s. Most of the excess deaths were of the over-75s in hospitals and care homes. The dumb people who do not respond are the medical professionals and those running care homes, who do not take extra measures in heatwaves, such as providing air conditioning and making sure patients drink plenty of water.
Get elderly patients to drink plenty of water? Isn’t that against the “Liverpool Pathway”?
Is the “Liverpool Care Pathway for the Dying Patient” a recognized failed procedure for those identified as terminally ill? By definition, the poor person’s passing would not be an excess death due to heat stress.
When Climate Change has been converted into a religion, you can expect to be preached to from the high altar that is the BBC. Harrabin is but one of many Apostles.
After I posted about this BBC story on the other thread, I realized the same story more or less is in the Daily Mail today (and just about every other paper I expect). This is clearly all closely coordinated propaganda from a single source.
The Daily Mail is now absolutely ridiculous, there are multiple stories every week quoting the 40,000 deaths a year statistic, in air pollution scare nonsense with a different twist every time.
I wonder how Ofcom will deal with this type of article and the like when it reports on BBC bias, particularly when an explanatory response such as this one is never aired. To me it is mindless and irresponsible fear mongering; but to Ofcom it may seem otherwise. Who knows? The CO2 virus is ubiquitous.
I have formally complained to the BBC about their Climate Change Cult bias, but they just said that the majority of scientists agree (I'm a scientist and don't agree with the CAGW BS, but wasn't asked in any survey; since when did surveys determine any science, I could ask!).
The whole Science Politics And Money, i.e. SPAM, may not be a SCAM (Science Corruption And Money) but just a way to ensure jobs in this area are protected.
Adrian Chiles on 5 Live gave this a 20 minute propaganda slot from 11am today. Spoke to two ‘experts’, both 100% onside for the impending catastrophe. Hardly asked a probing question throughout. Laughable hysteria yet again.
Is this going to be a whitewash?
Ofcom to review depth of analysis and impartiality of BBC news and current affairs output
So we should cut emissions and moderate SLR?
The future look of the chart will change as time passes and sea level changes very little. The red line will start at a later date and become more vertical. In 2090 it would have to be near vertical. Before then the end date will be moved to 2150 or 2175.
Oh no, a sea level hockey stick!
As you will know from my previous postings, I am no scientist. Yet it occurs to me that if the planet is going to warm to the level predicted in ‘models’, will there not be more water in suspension? This, to my simplistic understanding, would suggest amelioration of sea level rises and more clouds, a broader albedo effect. Everything seems to have balance.
“Everything seems to have [dynamic] balance.”
Or, at least, move off the balance point slowly in human time-scale.
O/T but have you seen this story?
Note the ASA reached an agreement so Smart Energy doesn’t have to put a notice on their website!
News
Iraqi Says Visit by Two Diplomats Backfired
By Kirk Semple
The New York Times
Thursday 06 April 2006
Baghdad - A top adviser to Prime Minister Ibrahim al-Jaafari said Wednesday that the visit this week by Secretary of State Condoleezza Rice and Foreign Secretary Jack Straw of Britain had backfired, prolonging a deadlock over a new government and strengthening Mr. Jaafari's resolve to keep his post.
"Pressure from outside is not helping to speed up any solution," said the adviser, Haider al-Abadi. "All it's doing is hardening the position of people who are supporting Jaafari."
He added, "They shouldn't have come to Baghdad."
His comments were echoed by several political leaders on Wednesday, including Kurds and Sunni Arabs.
Mr. Jaafari was nominated by the main Shiite political bloc in February to be prime minister in a new government. But the selection has faced fierce public resistance by a coalition of Sunni Arabs, Kurds, independents and some Shiite leaders.
The visit by Ms. Rice and Mr. Straw appeared to grate even on politicians who oppose Mr. Jaafari. "They complicated the thing, and now it's more difficult to solve," said Mahmoud Osman, an independent member of the Kurdistan Alliance, speaking Wednesday about Ms. Rice and Mr. Straw. "They shouldn't have come, and they shouldn't have interfered."
Also on Wednesday, an Iraqi cameraman employed by CBS and detained by American forces a year ago on suspicions of abetting the insurgency was acquitted by a three-judge panel at the Central Criminal Court of Iraq in Baghdad.
On April 5, 2005, the cameraman, Abdul Ameer Younis Hussein, was shot in the hip by an American sniper while he was filming the wreckage of a car bomb that had wounded several American soldiers in Mosul.
He was taken to a hospital where he was detained by the Americans. They said that he had tested positive for explosive residue and that images in his camera linked him to the insurgents.
After a year in detention, his ordeal ended Wednesday just as suddenly as it had begun when a prosecutor requested that the case be dismissed for lack of evidence.
He awaits final approval of his release by the American military, his lawyers said.
"The mystery is why this case got referred to the court in the first place," Scott Horton, a lawyer from New York who flew to Baghdad to help defend Mr. Hussein, said after the ruling. "It's intimidation and the potential use of lethal force against journalists."
Lt. Col. Keir-Kevin Curry, a spokesman for the military's detainee operations, said in an e-mail message that the Central Criminal Court was "still an evolving process."
"At times cases may take longer than a year," he said. "Our goal is to have everyone appear before an investigative judge within six months of detention."
In Baghdad, two car bombs detonated Wednesday afternoon within 20 minutes, killing 3 people and wounding at least 16, an Interior Ministry official said.
Gunmen wearing the uniforms of Interior Ministry commandos and driving ministry vehicles opened fire on guards outside the Baghdad headquarters of the Iraqna cellular phone company, wounding a guard and then abducting him.
An insurgent group posted a video on the Internet, claiming that it showed people dragging the burned body of an American pilot from an Apache helicopter that crashed southwest of Baghdad on Saturday.
The military said Wednesday that it could not confirm the authenticity of the video.
Vacancy

Several times in my life, I have felt that there was nothing past my skin. This is just an outline of a body with all the trappings of form and texture but inside, it's hollow.
A spectre in human dressings.
These bones, nothing more than an illusion, a finely crafted scheme.
Inside my mouth and my eyes, continuations of this sick lie.
A vacant space for questions of "why" to float aimlessly.
Pain and Its Purpose

In thin delirium we waste away.
The nights becoming days melting into weeks.
Time slides by.
In our haunting laughter and choked back tears,
we soldier on into a new age
where thoughts are drained away in favor of primal lust.
Reveling in our maladaptations,
we come to know the meaning of pain and its purpose.
The mediocre superstitions of our ancestors cease to comfort.
We are the age of the aware.
Chilly in Many Ways

I stretch against the back of the plastic chair. My bones creak and crack beneath my skin and I settle back to type. My fingers sit lightly against the keyboard, content for a moment.

The giant windows to my left are streaked with rain. Lines distorting and twisting, beautifully running in and through each other. The green and yellow leaves flap and flutter in the wind. A chill runs through my spine, not out of fear or unpleasantness but for some reason, the college has decided that it's a great day to run the air conditioner. It's freezing outside. Tall decorative grass hangs listless, shuddering as if trying to hide from the cold.

Alone on this side of the common area, a newswoman babbles incessantly behind me on the tv. Her voice is vaguely familiar but I'm uninterested in politics, which is what I'm fairly sure she's talking about. I don't care enough to check. I pull my sweatshirt sleeves down over my arms. Slightly warmer now.

I watch cars slide by the parking lot, splashing pudd
The World Isn't Just Pink and Blue
By Cameron :3 A FTM boy :3

The world isn’t just pink and blue
There are deeper things
To both them and you
So what if a boy uses products?
Or a girl doesn’t wear make-up?
You don’t have the right
To kick up a fuss
Just because a boy loves pink
Doesn’t mean “there’s something wrong”
Just because a girl can skateboard
Doesn’t mean she sings a different song
Our parents always tell us
“Be different”
“Be yourself”
But as soon as we do
They make our lives a living hell
Just because, we don’t fit in the box
That your mind tells you is “normal”
Doesn’t mean, you can’t love us
The world isn’t just pink and blue
There are deeper things
To both them and you
NIGHT WHEN YOU ARE A WOMAN. ***
She opens through insomnias
in the pissing of the stars,
sees the moon when her flesh is in solitude,
turns on the television to fill the night,
I have already abdicated.
-Sophie
CT, 4 Dec 2013
ARE WE HUMAN. SPOKEN WORD

Are we even human and are we really real.
Or are we so confused that we forgot what it feels like to feel.
I mean As the days go by I just grow Cold hearted
A Senseless soul as I say my good byes to another sister who was martyred
Paint a picture of a perfect world which is nothing short of distorted
Nothing short of a disguise To try to hide all the lies from behind of all the children dying
And all the lost lives
As a little girl cries
I'm alive I'm alive I can't believe I'm breathing
See we're alive too sweetie but our hearts aren't really beating
See every time I hear you scream on the screen I'ma quickly get to leaving
Hit the X please, I mean I really need to sleep this evening
I mean how can I eat while I watch her bleeding
I mean how can I turn the heat on my heaters while I watch her freezing
So if I hear her screaming one more time I'ma hold my breath and close my eyes
and just pretend to be dumb, deaf and blind And hope to death she doesn't die.
Failing to realise that our s
Misery Comes With Love

I love how you think I care
and with every memory you will tear.
You should know by now that I am no angel
and your every heartbeat will be ruined by danger.
Your assumptions are nothing but my lies
As I stand here underneath a crimson sky.
Don’t say you care with only a stare,
I will bring you pain more than you can bear.
I’m immune to your stymie
and I have no wish for you to free me.
You may think I’m lying
but who’s the one crying?
What you may think is me doubting is only my paradox
so don’t fill me with your useless flout.
You don’t know me so don’t say you do;
I would love to watch you come undone.
To leave you behind is my ultimate prize
so stop saying arbitrary things that will kill you,
Because I’m only waiting on the cue
While you're being blind and unwise.
I love how you think I care
and with every memory you will tear.
You should know by now that I am no angel
and your every heartbeat will be ruined by danger.
A Different Ending

Put on the makeup and mask the pain
in the end you knew there was nothing to gain
so they left you to take all the blame
with their vain smiles they wrapped you in chains.
Through these twisting days
you keep running down a twisting maze
but you're tired of living life on delay
so you finally are able to look through this haze.
So run and run until you find freedom
and never look back at your past demons
never caring if what you're doing is legal,
run and run until you find your freedom.
You're never going down so never give up
while looking at that future close up.
Never let those lies eat you and corrupt
run down those hills never letting anyone interrupt.
PLEASE ''READ'' THE MEANING OF LIFE.

What are we doing here and where are we going to go
And don't think too often just do exactly as you're told
i didn't even pick a shape
i didn't get hypnotised
i went and kicked in the gate
Sorrow

I've only ever been inside of churches for funerals.
With women at my sides
That are like windows on the night of broken glass.
Women burying lovers.
Women burying fathers.
Women burying husbands.
Women bearing their souls.
Drowning in the weight of their sobs.
Stories that will never be told again
weighing heavy on their hearts;
slow, beating in their chests.
Breaking under their breast bones.
AUSTRALIAN PRAYER NETWORK NEWSLETTER
AUSTRALIAN MUSLIM PARTY TO CONTEST FEDERAL AND STATE ELECTIONS
The newly formed Australian Muslim Party (AMP) will never support military action in a Muslim-majority country, founder Diaa Mohamed says. Australia’s first Islamic faith political party intends to field Senate candidates in all states and territories at next year’s federal election and also contest upper house seats at state level. Founder Diaa Mohamed defended the timing of the announcement just weeks after the Paris terrorist atrocities, insisting there had never been a more critical time for the Muslim community to have a political voice in Australia. As a devout Muslim, he said he would never condone the killing of innocents as seen on the streets of Paris and Beirut.
Mr Mohamed, a 34 year-old businessman from western Sydney, founded a group called “MyPeace” aimed at improving relations between Muslims and mainstream Australia. He was also behind billboards erected in Sydney in 2011 that claimed “Jesus: a prophet of Islam”. An unmarried father of a 9 year-old son, he formerly worshipped at Lakemba Mosque but now attends the Parramatta Mosque. He said the establishment of the AMP was in part a reaction to the six anti-Islamic parties intending to stand for election, including the Australian Liberty Alliance, launched recently by controversial Dutch politician Geert Wilders, Rise Up and Nick Folkes’ Party for Freedom. About 20 Party for Freedom supporters protested outside the Parramatta Mosque recently after the murder of NSW Police accountant Curtis Cheng. website would go live shortly.ifi, a respected voice on moderate Islam, said he would encourage young Muslims to get involved with established parties like Labor Party, the Liberal Party and the Greens but understood the compulsion to directly organise on behalf of Muslims. “We live in a democratic society and people are entitled to form anti-Muslim parties just as people are entitled to form the Muslim Party,” he said. Labor’s Ed Husic was the first Muslim MP elected to the federal parliament in 2010. Mr Mohamed said some Muslim commentators used regularly by the media showed too much “appeasement” of the mainstream community.
He described as “stupid” comments by Tasmanian senator Jacqui Lambie in support of banning sharia law, halal certification and the wearing of the burqa – although face coverings should not be allowed in police matters, banks and driver’s licence issues, he said. He said he was “living” sharia by not drinking, not eating pork and trying to pray five times a day but said fears about the imposition of any official sharia in a country with a Muslim population of 1.6 per cent was a “non-issue”. “People should be free to wear as little as they want but also free to wear as much as they want,” he said. The party supports Australia accepting 12,000 Syrian refugees as the “most humane thing to do” in response to the crisis in that country.
Source: Compiled by APN from media reports
KINGS CROSS A BASKET-CASE AS COMMERICAL PROPERTIES LOSE VALUE
Kings Cross will never again function as Sydney’s late-night entertainment precinct, property owners say, with residential developers snapping up promising land, while less desirable sites decline in value. In the wake of the gentrification of the suburb, licensing restrictions, and lock out laws, the Valuer General has in recent weeks offered to reduce the land value of at least eight commercial properties in the area by up to $1.25 million, in response to land tax objections lodged by owners. The Land and Property office has agreed to reductions of between 5 and 20 per cent on the properties, which include the Kings Cross Hotel, Carlisle House and Iguana Bar. Separately, another property, the heritage-listed Minton House, was earlier in the year devalued by almost 35 per cent, from $10 million to $6.58 million.
Owners are cashing in on offers from residential developers. Andrew Lazarus, owner of the now closed Soho nightclub, confirmed he was exchanging contracts to sell the site to an apartment developer. “The Cross will never be an entertainment area again,” he told a parliamentary inquiry, unless governments deliberately intervene to stop its gentrification. Liberal Democrat Senator David Leyonhjelm is conducting his own inquiry into measures that limit personal choice “for the individual’s own good”. The inquiry is.
Property valuer Phil Rennie lodged land tax objections on behalf of 12 property owners in May, due to a collapse in commercial rents. Of the eight that were agreed by the Valuer General (VG), the average reduction in land value was about 14 per cent. Mr Rennie said the owners would consider the VG’s offers within the next 60 days. Valuer General Simon Gilkes would not comment on the individual cases but said all objections were independently assessed. Commercial properties conducive to residential redevelopment, have been selling fast in the old red light district. Earlier this year, Chinese developer Greenland Group bought the Crest Hotel site, along with a pub in Parramatta, for $170 million. as to what’s taking place,” he said. “Today, that strip is a basket-case.”.
Source: Compiled by APN from media reports
WOOLWORTHS URGED TO STOP PROFITING FROM POKIES
The Australian Christian Lobby (ACL) and the Alliance for Gambling Reform are urging the retail chain Woolworths to exit their $1 billion-plus pokie empire. ACL managing director Lyle Shelton has urged shareholders to pressure the company directors to exit the family-damaging gambling industry. “It is time for Woolworths to exit the industry or support reforms proposed by the Australian Christian Lobby and the Alliance for Gambling Reform,” Mr Shelton said. “Woolworths, which owns 96% of the gambling business AHL Group, is making an estimated $1.2 billion dollars a year off the backs of the poorest Australians,” Mr Shelton said. “The average Australian would be shocked to realise that the grocer who puts food on the table for some families, takes it off others through their gambling profits.”
He said Woolworths owns or operates approximately 12,000 electronic gaming machines through the ALH Group’s 323 poker-machine pubs, in which it is the major shareholder. “If Woolworths were serious about their ‘family friendly’ direction, they should not involve themselves in the socially harmful pokie business.” Mr Shelton said. “Woolworths should use its powerful position to lead by example and minimise the harm posed by poker machines.” “The gambling machines are designed to use deceptive tricks to entice people to continue spending money on them. The damage that this is causing to families and in communities is now well documented.” Mr Shelton said the Australian Christian Lobby was open to working with Woolworths if it is serious about introducing reforms.
Source: Compiled by APN from media reports
Have you visited our Web site? Australian Prayer Network
FearFighter™
FearFighter™ from €119
Overcome panic, anxiety and phobia.
What is FearFighter?™
FearFighter™ is a structured, easy to use, Cognitive Behavioural Therapy (CBT) programme.
Is it right for me?
FearFighter™ is designed to tackle your thoughts and challenge avoidance behaviours that characterise panic and phobia.
How does it work?
You access the material on-line via your PC, tablet or phone. Working through structured sessions, you can print out worksheets for activities, monitor your own progress, and receive emails with further tips at the end of each step.
How long does it take?
There are 9 sessions in all, 30 minutes each, and we recommend you do one session a week. However, as the material is available on-line, 24/7, you can work through it at your own pace, in the privacy of your own home or wherever you are connected to the internet.
How can I be sure that it is effective?
FearFighter™ has undergone extensive testing via randomised controlled trials, including one involving 700 patients from real-world clinics. It has received an endorsement from the National Institute of Clinical Excellence (N.I.C.E.) for being clinically effective, as well as cost effective, for panic and phobia. In the N.I.C.E. Final Appraisal Determination (FAD), FearFighter™ was recommended for the management of both panic and phobia. The programme is being adopted by other countries as a recognised standard form of treatment.
Features
The only product endorsed by N.I.C.E.* as an anxiety treatment.
Evidence based treatment, helping thousands to overcome their fears.
9 x 30 minutes sessions, to help challenge behaviours that characterise panic and phobia.
Activities scheduled between certain sessions to build on your progress.
Proven to be successful in a series of independent trials.
Available 24/7, so you can work at your own pace.
*National Institute for Health and Care Excellence, UK.
1.7 Oz EDP Perfume Classic Spray by Prada for $51.99
Retail Price $75.00
Save 31%. Notes: Musk, Sandalwood Oil.
@wootstheone: No offence, but when I first saw the title, my train of thought went from Prada to Versace, to Gucci, to spammers... :D It's just the recent string of spam-happy crazies that did it, but I had to smile...
Really not a bad deal here though.
@arosiriak: Thanks, none taken. I don't like the spammers either.
\section{The path space} \label{sec:pathspace}
As mentioned in Sections \ref{sec:inductive_intro} and
\ref{sec:groupoids}, the type theoretic \emph{identity type} is
interpreted homotopy theoretically as the path space. The path space,
and the corresponding type in Coq, is so important that we will now
carefully describe several basic constructions involving it in the
setting of Coq.
\begin{figure}[H]
\centering
\begin{tikzpicture}[color=mydark, fill=mylight, line width=1pt]
\draw[fill=mylight] plot [smooth cycle,tension=.75] coordinates
{(-2.5,0) (-2,1.5) (0,1.25) (1.75,1.5) (2.75,0) (1.75,-1) (-1,-1)};
\draw[fill=white] plot [smooth cycle,tension=.75] coordinates
{(0,.2) (.5,.45) (1,.2) (.5,-.05) };
\node[circle,fill=mydark,inner sep=0pt,outer sep=4pt,minimum size=.75mm] (aa)
at (.2,-.4) {};
\node at (.15,-.6) {$\scriptstyle a$};
\node[circle,fill=mydark,inner sep=0pt,outer sep=4pt,minimum size=.75mm] (bb)
at (-1,.75) {};
\node at (-1.2,.9) {$\scriptstyle b$};
\draw[color=mydark,->-=.5,line width=.5pt] plot [smooth,tension=.75]
coordinates { (aa) (1.2,.2) (.4,.8) (bb) };
\node[font=\tiny] at (1.3,.2) {$k$};
\setcounter{myi}{0}
\foreach \i in {1,...,9}
{
\pgfmathsetcounter{myi}{\themyi+9.9}
\setcounter{myi}{\themyi}
\draw [mydark,line width=.1pt] plot [smooth,tension=.5]
coordinates { (aa) (0-.1*\i,0+.01*\i) (bb) };
}
\draw [mydark,line width=.5pt,->-=.7] plot [smooth,tension=.5]
coordinates { (aa) (-1,.1) (bb) };
\draw[mydark,line width=.5pt,->-=.4] plot [smooth,tension=.5] coordinates { (aa) (-.1,.01) (bb) };
\node[circle,inner sep=.2mm,fill=mylight,fill opacity=.4,text opacity=1] at (-.7,.2) {$\scriptstyle p$};
\node[font=\tiny] at (-.25,.4) {$f$};
\node[font=\tiny] at (-.8,-.2) {$g$};
\draw[dotted,line width=.6pt] (-1,3) to (bb);
\draw[fill=mylight!50,line width=.8pt] plot [smooth cycle,tension=-.15] coordinates
{(-2.2,2.5) (-2.2,4.5) (.2,4.5) (.2,2.5) };
\node[circle,fill=mydark,inner sep=0pt,outer sep=2pt,minimum
size=.75mm] (ee) at (-.5,3.4) {};
\node[circle,fill=mydark,inner sep=0pt,outer sep=2pt,minimum
size=.75mm] (ff) at (-1.2,3.9) {};
\node[circle,fill=mydark,inner sep=0pt,outer sep=2pt,minimum
size=.75mm] (gg) at (-1.4,3.1) {};
\draw[mydark,line width=.4pt,->-=.5,inner sep=0pt, outer sep=0pt] (-1.2,3.9) to (-1.4,3.1);
\node[font=\tiny] at (-.38,3.5) {$\scriptstyle k$};
\node[font=\tiny] at (-1.05,4) {$\scriptstyle f$};
\node[font=\tiny] at (-1.5,3) {$\scriptstyle g$};
\node[font=\tiny] at (-1.45,3.5) {$\scriptstyle p$};
\node[font=\tiny] at (.8,4.3) {$\texttt{paths }a\;b$};
\node[font=\footnotesize] at (-2.55,1.45) {$A$};
\end{tikzpicture}
\caption{The path space fibration $\texttt{paths }a$ with the fiber
over a point $b$. Here $p$ is a (path) homotopy from $f$ to $g$.}
\label{fig:paths}
\end{figure}
In Coq, the path space \verb|paths| is defined as follows.
\begin{center}
\begin{coqcode}
Notation paths := identity.
\end{coqcode}
\end{center}
Here \verb|identity|, like \verb|nat|, is a built-in
inductive type in the Coq system. We can see how it is defined
inductively using \verb|Print| to find
\begin{center}
\begin{coqcode}
Inductive identity (A : Type) (a : A) : A -> Type :=
identity_refl : identity a a.
\end{coqcode}
\end{center}
That is, for each \verb|a : A|, \verb|identity a| is the
fibration freely generated by a term \verb|identity_refl a| in
the fiber over \verb|a|.
We add the following line in order to introduce a
slightly shorter notation for the terms \verb|identity_refl|:
\begin{center}
\begin{coqcode}
Notation idpath := identity_refl.
\end{coqcode}
\end{center}
That is, for \verb|a : A|, \verb|idpath a : paths a a| is
the \emph{identity path} based at \verb|a|.
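To see these notations in action, we can ask Coq for the type of an
identity path. (This snippet is purely illustrative; it assumes the
notations above, together with the universe notation \verb|UU| used
in the definitions below, are in scope.)
\begin{center}
\begin{coqcode}
Check ( fun ( A : UU ) ( a : A ) => idpath a ).
(* reported type: forall ( A : UU ) ( a : A ), paths a a *)
\end{coqcode}
\end{center}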
Recall that a \emph{path} in a space $A$ is a continuous function
$\varphi:I\to A$ where $I=[0,1]$ is the unit interval. We say that
$\varphi$ \emph{is a path from a point $a$ of $A$ to a point $b$ of $A$} when
$\varphi(0)=a$ and $\varphi(1)=b$. Then, the \emph{path space} $A^{I}$ is
the space of paths in $A$ and it comes equipped with two maps
$\partial_{0},\partial_{1}:A^{I}\to A$ given by
$\partial_{i}(\varphi):=\varphi(i)$ for $i=0,1$. The induced map
$\langle\partial_{0},\partial_{1}\rangle: A^{I}\to A\times A$ is a
fibration which gives a factorization of the diagonal $\Delta:A\to
A\times A$ as
\begin{center}
\begin{tikzpicture}[auto]
\node (UL) at (0,1.25) {$A$};
\node (UR) at (2.5,1.25) {$A^{I}$};
\node (B) at (1.25,0) {$A\times A$};
\draw[->] (UL) to (UR);
\draw[->,bend right=10pt] (UL) to node[mylabel,swap] {$\Delta$} (B);
\draw[->,bend left=10pt] (UR) to (B);
\end{tikzpicture}
\end{center}
where the first map $A\to A^{I}$ is a weak equivalence and the second is the
fibration mentioned above. Here the first map $A\to A^{I}$ sends a
point $a$ to the constant loop based at $a$. (That is, this first map
is precisely \verb|idpath|.) One of the many important contributions of Quillen in
\cite{Quillen:1967uz} was to demonstrate that it is in fact possible
to do homotopy theory without the unit interval provided that one has
the structure of path spaces, weak equivalences, fibrations, and a few
other ingredients. This is part of the reason that, even though type
theory does not (without adding higher-inductive types or something
similar) provide us with a unit interval, it is still possible to work
with homotopy theoretic structures type theoretically.
\subsection{Groupoid structure of the path space}
We will now describe the groupoid structure which the path space
endows on each type. These constructions are well-known and their
connection with higher-dimensional groupoids was first noticed by
Hofmann and Streicher \cite{Hofmann:1998ty}.
First, given a path $f$ from $a$ to $b$ in $A$ we would like to be
able to reverse this path to obtain a path from $b$ to $a$. For
topological spaces this is easy because a path $\varphi:I\to A$ gives
rise to an inverse path $\varphi'$ given by
$\varphi'(t):=\varphi(1-t)$, for $0\leq t\leq 1$.
\begin{center}
\begin{coqcode}
Definition pathsinv { A : UU } { a b : A } ( f : paths a b )
: paths b a.
Proof.
destruct f. apply idpath.
Defined.
\end{coqcode}
\end{center}
Recall that \verb|destruct| allows us to argue by cases about terms
of inductive types. Since \verb|f| is of type \verb|paths a b|,
which is inductive, the tactic applies, and there is only one case
to consider: \verb|f| is the identity path
\verb|idpath a : paths a a|. Because the inverse of the identity is
the identity, we then use \verb|apply idpath| to complete the proof.
(Note that we could also have used \verb|exact ( idpath a )| instead
of \verb|apply idpath| here to obtain the same term.)
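Since \verb|pathsinv| was defined by case analysis, it computes
definitionally on identity paths: the inverse of \verb|idpath a| is
again \verb|idpath a|. The following lemma (the name
\verb|pathsinv_idpath| is ours, chosen for illustration) records
this; note that no \verb|destruct| is needed, as both sides are
definitionally equal.
\begin{center}
\begin{coqcode}
Lemma pathsinv_idpath { A : UU } ( a : A )
  : paths ( pathsinv ( idpath a ) ) ( idpath a ).
Proof.
  apply idpath.
Defined.
\end{coqcode}
\end{center}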
Next, given a path $f$ as above together with another path $g$ from
$b$ to $c$, we would like to define the composite path from $a$ to $c$
obtained by first traveling along $f$ and then traveling along $g$.
This operation of \emph{path composition} is defined as follows:
\begin{center}
\begin{coqcode}
Definition pathscomp { A : UU } { a b c : A } ( f : paths a b ) ( g : paths b c ) : paths a c.
Proof.
destruct f. assumption.
Defined.
\end{coqcode}
\end{center}
Once again, the proof begins with \verb|destruct f| which
effectively collapses \verb|f| to a constant loop. In
particular, the result of this is to change the ambient hypotheses so
that \verb|g| is now of type \verb|paths a c| (see Figure
\ref{figure:pathscomp}). At this stage, the goal matches the type of
\verb|g| and we use \verb|assumption| to let the Coq system
choose \verb|g| as the result of composing \verb|g| with the
identity path.
\begin{figure}[H]
\begin{tikzpicture}
\node[smallcoqbox] (zero) at (0,0) {
\begin{minipage}{4.25cm}
\footnotesize
\noindent\verb|A : UU|
\noindent\verb|a : A|
\noindent\verb|b : A|
\noindent\verb|c : A|
\noindent\verb|f : paths a b|
\noindent\verb|g : paths b c|
\noindent\verb|============================|
\noindent\verb| paths a c|
\end{minipage}
};
\node[anchor=north east, inner sep=2pt] (titlezero) at
(zero.north east) {\emph{Start of proof}};
\node[smallcoqbox] (one) at (5.25,0) {
\begin{minipage}{4.25cm}
\footnotesize
\vphantom{\texttt{b : A}}
\vphantom{\texttt{f : paths a b}}
\noindent\verb|A : UU|
\noindent\verb|a : A|
\noindent\verb|c : A|
\noindent\verb|g : paths a c|
\noindent\verb|============================|
\noindent\verb| paths a c|
\end{minipage}
};
\node[anchor=north east, inner sep=2pt] (titleone) at
(one.north east) {\emph{after} \verb|destruct f.|};
\end{tikzpicture}
\caption{Coq output during the definition of path composition.}
\label{figure:pathscomp}
\end{figure}
One immediate consequence of this definition is that the unit law
$f\circ 1_{a}=f$ for $f:a\to b$ holds \emph{on the nose} in the sense
that these two terms (\verb|pathscomp ( idpath a ) f| and
\verb|f|) are identical in the strong $=$ sense. On the other
hand, the unit law $1_{b}\circ f=f$ does not hold on the nose.
Instead, it only holds up to the existence of a higher-dimensional
path as described in the following Lemma:
\begin{center}
\begin{coqcode}
Lemma isrunitalpathscomp { A : UU } { a b : A } ( f : paths a b ) : paths ( pathscomp f ( idpath b ) ) f.
Proof.
destruct f. apply idpath.
Defined.
\end{coqcode}
\end{center}
The proof requires little comment (when $f$ becomes itself an
identity path, the composite becomes, by the left-unit law mentioned
above, an identity path). The one thing to note is that
instead of \verb|Definition| we have written \verb|Lemma|.
Although there are some technical differences between these two ways
of defining terms they are for us entirely interchangeable and
therefore we use the appellation ``Lemma'' in keeping with the
traditional mathematical distinction between definitions and lemmas.
The facts that, up to the existence of higher-dimensional paths,
composition of paths is associative and that the inverses given
by \verb|pathsinv| are inverses for composition are recorded as
the terms \verb|isassocpathscomp|, \verb|islinvpathsinv| and
\verb|isrinvpathsinv|. However, the descriptions of
these terms are omitted in light of the fact that they all
follow the same pattern as the proof of \verb|isrunitalpathscomp|.
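For the curious reader, the associativity lemma might be sketched as
follows (this is our own rendering, and the actual signature of
\verb|isassocpathscomp| may differ in details):
\begin{center}
\begin{coqcode}
Lemma isassocpathscomp { A : UU } { a b c d : A }
  ( f : paths a b ) ( g : paths b c ) ( h : paths c d )
  : paths ( pathscomp ( pathscomp f g ) h )
          ( pathscomp f ( pathscomp g h ) ).
Proof.
  destruct f. apply idpath.
Defined.
\end{coqcode}
\end{center}
Once \verb|f| is collapsed to an identity path, both composites
reduce to \verb|pathscomp g h| and the goal is closed by
\verb|apply idpath|.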
\subsection{The functorial action of a continuous map on a path}
Classically, given a continuous map $f:A\to B$ and a path
$\varphi:I\to A$ in $A$, we obtain a corresponding path in $B$ by
composition of continuous functions. Thinking of spaces as
$\infty$-groupoids, this operation of going from the path $\varphi$ in
$A$ to the path $f\circ\varphi$ in $B$ is the functorial action of $f$
on $1$-cells of the $\infty$-groupoid $A$. In Coq, this action of
transporting a path in $A$ to a path in $B$ along a continuous map is
given as follows:
\begin{center}
\begin{coqcode}
Definition maponpaths { A B : UU } ( f : A -> B ) { a a' : A } ( p : paths a a' ) : paths ( f a ) ( f a' ).
Proof.
destruct p. apply idpath.
Defined.
\end{coqcode}
\end{center}
The proof again follows the familiar pattern: when the path $p$ is the
identity path on $a$, the result of applying $f$ should be the
identity path on $f(a)$. We introduce the following notation for
\verb|maponpaths|:
\begin{center}
\begin{coqcode}
Notation "f ` p" := ( maponpaths f p ) (at level 30 ).
\end{coqcode}
\end{center}
This is an example of a general mechanism in Coq for defining
notations, but discussion of this mechanism is outside of the scope of
this article (the crucial point here is that the value 30 tells Coq how
tightly the operation $`$ should bind).
\begin{figure}[H]
\centering
\begin{tikzpicture}
\draw[fill=mylight] plot [smooth cycle,tension=.75] coordinates
{(-4,0) (-3.5,1.5) (-2,1) (-.5,1.5) (0,0) (-2,-1) };
\draw[fill=mylight] plot [smooth cycle,tension=1] coordinates
{(2.5,0) (4,1) (5.5,0) (4,-1) };
\node[circle,fill=mydark,inner sep=0pt,outer sep=4pt,minimum
size=.75mm] (aa) at (-3.5,.5) {};
\node[circle,fill=mydark,inner sep=0pt,outer sep=4pt,minimum
size=.75mm] (aaprime) at (-.5,.5) {};
\draw[color=mydark,->-=.5,line width=.5pt] plot [smooth,tension=.75]
coordinates { (aa) (-2,-.25) (aaprime) };
\node at (-3.6,.65) {\footnotesize $a$};
\node at (-.35,.7) {\footnotesize $a'$};
\node at (-2.25,-.35) {\footnotesize $p$};
\node[circle,fill=mydark,inner sep=0pt,outer sep=0pt,minimum
size=.75mm] (faa) at (3.25,.25) {};
\node[circle,fill=mydark,inner sep=0pt,outer sep=0pt,minimum
size=.75mm] (faaprime) at (4.75,-.25) {};
\draw[color=mydark,->-=.5,line width=.5pt] (faa) to
node[mylabel,auto,swap] {$f`p$} (faaprime);
\draw[->,line width=.75,bend left] (.25,.75) to node[auto,mylabel]
{$f$} (2.25,.75);
\node at (3,.4) {\footnotesize $f a$};
\node at (5,-.05) {\footnotesize $f a'$};
\node at (-4.25,1.35) {$A$};
\node at (5.25,1.25) {$B$};
\end{tikzpicture}
\caption{Representation of \texttt{maponpaths}.}
\end{figure}
We leave it as an exercise for the reader to verify that the
operation \verb|maponpaths| respects identity paths, as well as
composition and inverses of paths.
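To partially spoil this exercise, here is one possible solution for
the case of composition (a sketch of our own; the lemma name
\verb|maponpathscomp| is chosen for illustration):
\begin{center}
\begin{coqcode}
Lemma maponpathscomp { A B : UU } ( f : A -> B ) { a b c : A }
  ( p : paths a b ) ( q : paths b c )
  : paths ( f ` ( pathscomp p q ) )
          ( pathscomp ( f ` p ) ( f ` q ) ).
Proof.
  destruct p. apply idpath.
Defined.
\end{coqcode}
\end{center}
After \verb|destruct p|, both sides compute to \verb|f ` q|, so the
familiar pattern applies. The cases of identity paths and inverses
are entirely analogous.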
\section{Transport}\label{sec:transport}
Given a fibration $\pi:E\to B$ together with a path $f$ from $b$ to $b'$
in the base $B$, there is a continuous function $f_{!}:E_{b}\to
E_{b'}$ from the fiber $E_{b}$ of $\pi$ over $b$ to the fiber $E_{b'}$
over $b'$. This operation $f_{!}$ of \emph{forward transport} along a
path is described in Coq as follows:
\begin{center}
\begin{coqcode}
Definition transportf { B : UU } ( E : B -> UU ) { b b' : B }
( f : paths b b' ) : E b -> E b'.
Proof.
intros e. destruct f. assumption.
Defined.
\end{coqcode}
\end{center}
\begin{figure}[H]
\centering
\begin{tikzpicture}[color=mydark, fill=mylight, line width=1pt,scale=.75]
\draw[fill=mylight] plot [smooth cycle,tension=.75] coordinates
{(-2.5,0) (-2,1.5) (0,1.25) (1.75,1.5) (2.75,0) (1.75,-1) (-.5,0)};
\draw[color=mydark,->-=.5,line width=.7pt] plot [smooth,tension=.5]
coordinates { (-1.75,.5) (-1,.8)(1,0) (1.5,.25) };
\node[circle,fill=mydark,inner sep=0pt,outer sep=4pt,minimum size=.75mm] (aa)
at (-1.75,.5) {};
\node[circle,fill=mydark,inner sep=0pt,outer sep=2pt,minimum size=.75mm] (bb)
at (1.5,.25) {};
\draw[dotted,line width=.6pt] (-1.75,3) to (aa);
\draw[dotted,line width=.6pt] (1.5,3) to (bb);
\draw[fill=mylight!50,line width=.8pt] plot [smooth cycle,tension=-.15] coordinates
{(-2.5,2.5) (-2.5,4.5) (-1,4.5) (-1,2.5) };
\draw[fill=mylight!50,line width=.8pt] plot [smooth cycle,tension=-.15] coordinates
{(.75,2) (.75,4) (2.25,4) (2.25,2) };
\node[circle,fill=mydark,inner sep=0pt,outer sep=2pt,minimum
size=.75mm] (ee)
at (-2,3.8) {};
\node[circle,fill=mydark,inner sep=0pt,outer sep=2pt,minimum
size=.75mm] (eef) at (1.5,3.5) {};
\draw[->,dotted,line width=.6pt] (ee) to (eef);
\node[font=\tiny] at (-2.15,3.9) {$\scriptstyle e$};
\node[font=\tiny] at (1.75,3.7) {$\scriptstyle f_{!}(\!e)$};
\node at (-1.95,.6) {$\scriptstyle b$};
\node at (1.7,.4) {$\scriptstyle b'$};
\node[font=\tiny] at (.25,.5) {$f$};
\node[font=\tiny] at (-2.95,4.5) {$E(b)$};
\node[font=\tiny] at (2.75,4) {$E(b')$};
\node[font=\footnotesize] at (-2.55,1.45) {$B$};
\end{tikzpicture} \label{fig:transportf}
\caption{Forward transport.}
\end{figure}
For a path $f$ as above, there is a corresponding operation
$f^{*}:E_{b'}\to E_{b}$ of \emph{backward transport} and it turns out
that $f_{!}$ and $f^{*}$ constitute a homotopy equivalence.
We turn now to a brief discussion of homotopy and homotopy equivalence in
the setting of Coq before returning to forward and backward transport.
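Before doing so, we record one possible definition of backward
transport, the term \verb|transportb| used below. This is a sketch
mirroring \verb|transportf| with the fibers swapped, and the actual
library definition may differ in details:
\begin{center}
\begin{coqcode}
Definition transportb { B : UU } ( E : B -> UU ) { b b' : B }
  ( f : paths b b' ) : E b' -> E b.
Proof.
  intros e. destruct f. assumption.
Defined.
\end{coqcode}
\end{center}
Here \verb|destruct f| identifies \verb|b'| with \verb|b|, after
which the hypothesis \verb|e| itself has the goal type.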
\subsection{Homotopy and homotopy equivalence}
Recall that for continuous functions $f,g:A\to B$, \emph{a homotopy from $f$
to $g$} is given by a continuous map $h:A\to B^{I}$ such that
\begin{center}
\begin{tikzpicture}[auto]
\node (UL) at (0,1.25) {$A$};
\node (UR) at (2.5,1.25) {$B^{I}$};
\node (B) at (1.25,0) {$B\times B$};
\draw[->] (UL) to node[mylabel] {$h$} (UR);
\draw[->,bend right=10pt] (UL) to node[mylabel,swap] {$\langle f,g\rangle$} (B);
\draw[->,bend left=10pt] (UR) to (B);
\end{tikzpicture}
\end{center}
commutes.
In Coq, the type of homotopies between functions $f,g: A\to B$ is given by
\begin{center}
\begin{coqcode}
Definition homot { A B : UU } ( f g : A -> B ) := forall x :A, paths ( f x ) ( g x ).
\end{coqcode}
\end{center}
Here we encounter a new ingredient in Coq: the universal quantifier
\verb|forall|. From the homotopical point of view, this
operation takes a fibration \verb|E : B -> UU| and gives back the
space \verb|forall x : B, E x| of all continuous sections of the
fibration. That is, we should think of a point $s$ of this type as
corresponding to a continuous section
\begin{align*}
\begin{tikzpicture}[auto]
\node (UL) at (0,1.25) {$B$};
\node (UR) at (2.5,1.25) {$E$};
\node (B) at (1.25,0) {$B.$};
\draw[->] (UL) to node[mylabel] {$s$} (UR);
\draw[->,bend right=10pt] (UL) to node[mylabel,swap] {$1_{B}$} (B);
\draw[->,bend left=10pt] (UR) to (B);
\end{tikzpicture}
\end{align*}
One particular consequence of this is that if we are given a term
\begin{center}
\begin{coqcode}
s : ( forall x : B, E x )
\end{coqcode}
\end{center}
and another term \verb|b : B|, then the term \verb|s| can be
\emph{applied} to the term \verb|b : B| to obtain a term of type
\verb|E b|. The result of applying \verb|s| to
\verb|b| is denoted by
\begin{center}
\begin{coqcode}
s b : E b.
\end{coqcode}
\end{center}
We will have more to say about \verb|forall| below.
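As a first example of constructing a term of a \verb|forall| type,
one can show that every map is homotopic to itself (this lemma and
its name are our own illustration):
\begin{center}
\begin{coqcode}
Lemma homotrefl { A B : UU } ( f : A -> B ) : homot f f.
Proof.
  intros x. apply idpath.
Defined.
\end{coqcode}
\end{center}
After \verb|intros x| the goal is \verb|paths ( f x ) ( f x )|,
which is closed by the identity path.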
Now, a map $f:A\to B$ is a \emph{homotopy equivalence} when there exists a
map $f':B\to A$ together with homotopies from $f'\circ f$ to $1_{A}$
and from $f\circ f'$ to $1_{B}$. In this case, we say that $f'$ is a
\emph{homotopy inverse} of $f$. Two spaces $A$ and $B$ are said to
have the same \emph{homotopy type} when there exists a homotopy
equivalence $f:A\to B$.
In Coq, we define the type of proofs that a map \verb|f : A -> B|
is a homotopy equivalence as follows:
\begin{center}
\begin{coqcode}
Definition isheq { A B : UU } ( f : A -> B ) := total (fun f' : B -> A => dirprod (homot (funcomp f' f) (idfun _)) (homot (funcomp f f') (idfun _)) ).
\end{coqcode}
\end{center}
Here it is worth pausing for a moment to consider the meaning of the
type \verb|isheq|. Intuitively, \verb|isheq f| is the type
consisting of the data which one must provide in order to prove that
\verb|f| is a homotopy equivalence. That is, a term of type
\verb|isheq f| consists of:
\begin{itemize}
\item a continuous map \verb|f' : B -> A|;
\item a homotopy from \verb|funcomp f' f| to the identity on \verb|B|;
\item a homotopy from \verb|funcomp f f'| to the identity on
\verb|A|.
\end{itemize}
Indeed, by the definitions of \verb|total| and
\verb|dirprod| the terms of \verb|isheq f| can be regarded
as a tuple of such data.
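As a sanity check (our own example, using the \verb|split with| and
\verb|split| tactics explained in the next subsection), the identity
function is a homotopy equivalence, with itself as homotopy inverse:
\begin{center}
\begin{coqcode}
Lemma isheqidfun { A : UU } : isheq ( idfun A ).
Proof.
  split with ( idfun A ). split.
  intros x. apply idpath.
  intros x. apply idpath.
Defined.
\end{coqcode}
\end{center}
Both required homotopies are between maps that compute to the
identity on \verb|A|, so each subgoal is closed by an identity path.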
\subsection{Forward and backward transport}
It turns out that, as mentioned above, the backward transport map
$f^{*}:E_{b'}\to E_{b}$ is a homotopy inverse of forward transport
$f_{!}$. Denote by \verb|transportb| the backward transport
term. It is often convenient to break the proofs of larger facts up
into smaller lemmas and we will do just this in order to show that
\verb|transportf E f| is a homotopy equivalence. In particular,
we begin by proving that $f_{!}\circ f^{*}$ is homotopic to the
identity $1_{E_{b'}}$:
\begin{center}
\begin{coqcode}
Lemma backandforth { B : UU } { E : B -> UU } { b b' : B } ( f : paths b b' ) ( e : E b' ) : homot ( funcomp ( transportb E f ) ( transportf E f ) ) ( idfun _ ).
Proof.
intros x. destruct f. apply idpath.
Defined.
\end{coqcode}
\end{center}
Next, we prove that $f^{*}\circ f_{!}$ is homotopic to the identity
$1_{E_{b}}$ as \verb|forthandback| (we omit the proof because it
is identical to the proof of \verb|backandforth|):
\begin{center}
\begin{coqcode}
Lemma forthandback { B : UU } { E : B -> UU } { b b' : B } ( f : paths b b' ) ( e : E b' ) : homot ( funcomp ( transportf E f ) ( transportb E f ) ) ( idfun _ ).
\end{coqcode}
\end{center}
Using these lemmas we can finally prove that
\verb|transportf E f| is a homotopy equivalence.
\begin{center}
\begin{coqcode}
Lemma isheqtransportf { B : UU } ( E : B -> UU ) { b b' : B } ( f : paths b b' ) : isheq ( transportf E f ).
Proof.
split with ( transportb E f ). split.
apply backandforth. apply forthandback.
Defined.
\end{coqcode}
\end{center}
\begin{figure}[H]
\begin{tikzpicture}
\node[smallcoqbox] (zero) at (0,0) {
\begin{minipage}{5.2cm}
\footnotesize
\noindent\verb|1 subgoals, subgoal 1|
~
\noindent\verb|B : UU|
\noindent\verb|E : B -> UU|
\noindent\verb|b : B|
\noindent\verb|b' : B|
\noindent\verb|f : paths b b'|
\noindent\verb|============================|
\noindent\verb| isheq (transportf E f)|
\vphantom{\texttt{(transportf E f )) ( idfun ( E b' ) )}}
\end{minipage}
};
\node[anchor=north east, inner sep=2pt] (titlezero) at
(zero.north east) {\footnotesize\emph{Start of proof}};
\node[smallcoqbox] (one) at (6,0) {
\begin{minipage}{5.2cm}
\footnotesize
\noindent\verb|2 subgoals, subgoal 1|
~
\noindent\verb|B : UU|
\noindent\verb|E : B -> UU|
\noindent\verb|b : B|
\noindent\verb|b' : B|
\noindent\verb|f : paths b b'|
\noindent\verb|============================|
\noindent\verb| homot (funcomp (transportb E f)|
\noindent\verb| (transportf E f)) (idfun (E b'))|
\end{minipage}
};
\node[anchor=north east, inner sep=2pt] (titleone) at
(one.north east) {\footnotesize\emph{after}
\verb|split with; split|.};
\node[smallcoqbox] (two) at (0,-5.15) {
\begin{minipage}{5.2cm}
\footnotesize
\noindent\verb|1 subgoals, subgoal 1|
~
\noindent\verb|B : UU|
\noindent\verb|E : B -> UU|
\noindent\verb|b : B|
\noindent\verb|b' : B|
\noindent\verb|f : paths b b'|
\noindent\verb|============================|
\noindent\verb| homot (funcomp (transportf E f)|
\noindent\verb| (transportb E f)) (idfun (E b))|
\end{minipage}
};
\node[anchor=north east, inner sep=2pt] (titletwo) at
(two.north east) {\footnotesize\emph{after} \verb|apply backandforth|};
\end{tikzpicture}
\caption{Coq output during the proof that forward transport is a
homotopy equivalence.}
\label{figure:isheqtransportf}
\end{figure}
There are several points to make about this proof. The initial goal is
to supply a term of type \verb|isheq ( transportf E f )|. Now,
this type is itself really of the form (you can see this in the proof
by entering \verb|unfold isheq|):
\begin{center}
\begin{coqcode}
total (fun f' : E b' -> E b => dirprod (homot (funcomp f' (transportf
E f)) (idfun (E b'))) (homot (funcomp (transportf E f) f') (idfun (E b))))
\end{coqcode}
\end{center}
and in general to construct a term of type \verb|total E|, for
\verb|E : B -> UU|, it suffices (by virtue of the definition of
\verb|total|) to give a term \verb|b| of type
\verb|B| together with a term of type \verb|E b|. This is
captured in Coq by the command \verb|split with| and one should
think of \verb|split with b| as telling Coq that you will
construct the required term using \verb|b| as the term of type
\verb|B|. Upon entering this command, the goal will
automatically be updated to \verb|E b|. In this case,
entering \verb|split with ( transportb E f)| is the way to tell
Coq that we take \verb|transportb E f| to be the homotopy inverse
of \verb|transportf E f|. So, after entering this
command the new goal becomes
\begin{center}
\begin{coqcode}
dirprod
(homot (funcomp (transportb E f) (transportf E f)) (idfun (E b')))
(homot (funcomp (transportf E f) (transportb E f)) (idfun (E b)))
\end{coqcode}
\end{center}
As with \verb|total E|, in order to construct a term of type
\verb|dirprod A B| it suffices to supply terms of both types
\verb|A| and \verb|B|. When given a goal of
the form \verb|dirprod A B|, we use the \verb|split| tactic
to tell Coq that we will supply the
terms of type \verb|A| and \verb|B| separately (as opposed
to providing a term by some other means). (See Figure \ref{figure:isheqtransportf}
for the result of applying both \verb|split with| and
\verb|split| in the particular proof we are considering.)
The final new ingredient from the proof of \verb|isheqtransportf|
is the appearance of the tactic \verb|apply|. When you have
proved a result in Coq and you are later
given a goal which is a (more or less direct) consequence of that
the result, then the tactic \verb|apply| will allow
you to apply the result. In this case, the lemmas
\verb|backandforth| and \verb|forthandback| are exactly the
lemmas required in order to prove the remaining subgoals.
\subsection{Paths in the total space}
Using transport it is possible to give a complete characterization of
paths in the total space of a fibration \verb|E : B -> UU|.
Along these lines, the following lemma gives sufficient conditions for
the existence of a path in the total space:
\begin{center}
\begin{coqcode}
Lemma pathintotalfiber { B : UU } { E : B -> UU } { x y : total E } ( f : paths ( pr1 x ) ( pr1 y ) ) ( g : paths ( transportf E f ( pr2 x ) ) ( pr2 y ) ) : paths x y.
Proof.
intros. destruct x as [ x0 x1 ]. destruct y as [ y0 y1 ].
simpl in *. destruct f. destruct g. apply idpath.
Defined.
\end{coqcode}
\end{center}
This lemma shows that, given points \verb|x| and \verb|y| of
the total space, in order to construct a path from \verb|x| to
\verb|y| it suffices to provide the following data:
\begin{itemize}
\item a path \verb|f| from \verb|pr1 x| to \verb|pr1 y|; and
\item a path \verb|g| from the result of transporting
\verb|pr2 x| along \verb|f| to \verb|pr2 y|.
\end{itemize}
This is illustrated in Figure \ref{fig:pathintotalfiber} in the
special case where \verb|x| is the pair \verb|pair b e| and
\verb|y| is the pair \verb|pair b' e'|.
Regarding the proof of \verb|pathintotalfiber|, it is worth
mentioning that the effect of applying
\verb|destruct x as [ x0 x1 ]| is to tell Coq that we would like to consider the case where
\verb|x| is really of the form \verb|pair x0 x1|. The only new
tactic here is \verb|simpl in *| which tells Coq to make any possible
simplifications to the terms appearing in the goal or hypotheses.
For example, in this case, Coq will simplify
\verb|(pr1 (pair x0 x1))| to \verb|x0|.
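For instance, assuming the pairing constructor \verb|pair| described
above, this simplification holds on the nose (the lemma name
\verb|pr1ofpair| is our own):
\begin{center}
\begin{coqcode}
Lemma pr1ofpair { B : UU } { E : B -> UU } ( b : B ) ( e : E b )
  : paths ( pr1 ( pair b e ) ) b.
Proof.
  apply idpath.
Defined.
\end{coqcode}
\end{center}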
\begin{figure}[H]
\centering
\begin{tikzpicture}[color=mydark, fill=mylight, line width=1pt,scale=.75]
\draw[fill=mylight] plot [smooth cycle,tension=.75] coordinates
{(-2.5,0) (-2,1.5) (0,1.25) (1.75,1.5) (2.75,0) (1.75,-1) (-.5,0)};
\draw[->-=.5,line width=.7pt] plot [smooth,tension=.5]
coordinates { (-1.75,.5) (-1,.8)(1,0) (1.5,.25) };
\node[circle,fill=mydark,inner sep=0pt,outer sep=4pt,minimum size=.75mm] (aa)
at (-1.75,.5) {};
\node[circle,fill=mydark,inner sep=0pt,outer sep=2pt,minimum size=.75mm] (bb)
at (1.5,.25) {};
\draw[dotted,line width=.6pt] (-1.75,3) to (aa);
\draw[dotted,line width=.6pt] (1.5,3) to (bb);
\draw[fill=mylight!50,line width=.8pt] plot [smooth cycle,tension=-.15] coordinates
{(-2.5,2.5) (-2.5,4.5) (-1,4.5) (-1,2.5) };
\draw[fill=mylight!50,line width=.8pt] plot [smooth cycle,tension=-.15] coordinates
{(.75,2) (.75,4) (2.25,4) (2.25,2) };
\draw[->-=.5,line width=.6pt] plot
[smooth,tension=1] coordinates { (1.5,3.5) (1.3,3.25) (1.4,2.75) (1.25,2.5) };
\node[circle,fill=mydark,inner sep=0pt,outer sep=2pt,minimum
size=.75mm] (ee)
at (-2,3.8) {};
\node[circle,fill=mydark,inner sep=0pt,outer sep=2pt,minimum
size=.75mm] (eef) at (1.5,3.5) {};
\node[circle,fill=mydark,inner sep=0pt,outer sep=2pt,minimum
size=.75mm] (eeprime) at (1.25,2.5) {};
\draw[->,dotted,line width=.6pt] (ee) to (eef);
\node[font=\tiny] at (-2.15,3.9) {$\scriptstyle e$};
\node[font=\tiny] at (1.75,3.7) {$\scriptstyle f_{!}(\!e)$};
\node[font=\tiny] at (1.55,2.8) {$\scriptstyle g$};
\node[font=\tiny] at (1.1,2.55) {$\scriptstyle e'$};
\node at (-1.95,.6) {$\scriptstyle b$};
\node at (1.7,.4) {$\scriptstyle b'$};
\node[font=\tiny] at (.25,.5) {$f$};
\node[font=\tiny] at (-2.95,4.5) {$E(b)$};
\node[font=\tiny] at (2.75,4) {$E(b')$};
\node[font=\footnotesize] at (-2.55,1.45) {$B$};
\end{tikzpicture}
\caption{Paths in the total space.}
\label{fig:pathintotalfiber}
\end{figure}
On the other hand, if we are given a path \verb|f| from
\verb|x| to \verb|y| in the total space, there is an induced
path in the base given by
\begin{center}
\begin{coqcode}
Definition pathintotalfiberpr1 { B : UU } { E : B -> UU } { x y : total E } ( f : paths x y ) : paths ( pr1 x ) ( pr1 y ) := pr1 ` f.
\end{coqcode}
\end{center}
Furthermore, we may transport \verb|pr2 x| along
\verb|pathintotalfiberpr1 f| and there is a path from the
resulting term to \verb|pr2 y|:
\begin{center}
\begin{coqcode}
Definition pathintotalfiberpr2 { B : UU } { E : B -> UU } { x y : total E } ( f : paths x y ) : paths (transportf E ( pathintotalfiberpr1 f ) ( pr2 x )) ( pr2 y ).
Proof.
intros. destruct f. apply idpath.
Defined.
\end{coqcode}
\end{center}
Finally, we prove that every path in the total space is homotopic to
one obtained using \verb|pathintotalfiber|:
\begin{center}
\begin{coqcode}
Lemma pathintotalfibercharacterization { B : UU } { E : B -> UU } { x
y : total E } ( f : paths x y ) : paths f (pathintotalfiber (pathintotalfiberpr1 f) (pathintotalfiberpr2 f) ).
Proof.
intros. destruct f. destruct x as [ x0 x1 ]. apply idpath.
Defined.
\end{coqcode}
\end{center} | 0.002213 |
It’s not often that you have a meal while dining out and, after consideration, declare it to be the best meal of its type that you have ever tasted. But that’s what my decision was after lunch at Harare’s Centurion restaurant this week: quite the best oxtail I have ever enjoyed!
I had not been to this restaurant for three years, so was pleased to be there this past Monday. It’s situated in the Harare Sports Club, overlooking the cricket ground that is framed by the tall blocks of apartments along Josiah Tongogara Avenue.
We started off on the veranda, but winds rose up and clouds darkened, and soon enough there was a huge downpour.
We moved inside and our table was on the wonderful original sprung dance floor that the Harare Sports Club once used for Saturday night dances. One of the misperceptions about the place needs to be dispelled: you do not have to be a member of the club to dine there; it is open to the public and when we were there, all of the diners were casual visitors like my guest and me.
It was a relatively quiet day, a contrast to the last time we went, when the place was buzzing; my choice of Monday was deliberate, as this seems to be the best day on which to have a chat with Lance Nettleton, who has run the restaurant since it opened almost a decade ago.
Last time it was December 31, and Lance was gearing up for that night's new year celebration, so we had no opportunity to catch up. It seems fairly busy most of the time and I hope this remains the case at a time when business levels are down for most dining venues. There are specials on offer some lunchtimes, listed on a board. We had excellent service from our waitress, Tabu Chilongo, and we managed our meal within a time frame that would have allowed us to go back to work well in time for the afternoon whistle.
We went straight for mains: my guest selected the eisbein and I the oxtail. My guest had previously enjoyed eisbein at Garfunkel's Grill in Borrowdale, so we were keen to see how it compared.
It was a huge portion and she declared it every bit as good as her previous experience. My oxtail was simply outstanding: tender, tasty and very satisfying, with a delightful gravy. Both meals were served with excellent mashed potato and pumpkin.
No room for dessert after that, but Tabu persuaded us to have Irish coffees, which were a great end to a superb lunch.
I perused the menu and noted breakfasts from $8 to $32, starters from $9 to $26, main courses from $18 (quarter chicken and bangers and mash) to $56 (prawns), with a great many tantalising options in between. Grills of beef fillet, t-bone and sirloin, or pork chops, as well as steak, egg and chips, were priced between $28 and $43.
The outside area adjacent to the cricket pitch
Burgers ranged from $16 to $23, while baskets and platters were $19 to $55 and a range of seven different pizzas were $16 to $26, with an additional charge for extra toppings.
Rolls and toasties were $8 to $32 and a set of traditional dishes were $17 to $26. Desserts were priced between $8 and $14. It’s not cheap, but one thing about this restaurant is its reputation for good portion sizes.
The venue is open daily for breakfast, lunch and dinner, from 8.30am and the kitchen closes at 11pm, which is certainly later than most restaurant kitchens in Harare and is a welcome option for folk who like to eat a wee bit later than the standard 7pm or 8pm here.
Lance told us evening activities provided variety: Tuesday is quiz night, while Thursday is karaoke night. Wednesday, Friday and Saturday nights are music nights, with a DJ in attendance, while Sundays appeal to families for lunch.
There’s a VIP lounge on site, set up separately from the indoor dining room, the main bar and the veranda area, and I understand that this is popular with many drinkers.
I have never been there with a cricket match on at Harare Sports Club, but I can imagine the buzz factor reaches its high point when that happens.
We thoroughly enjoyed everything about our experience: excellent food, superb service and a friendly ambience. The Centurion pub and grill, Harare Sports Club (entrance on Fifth Street). | 0.085568 |
\begin{document}
\title{First cohomology and local rigidity\\ of group actions}
\acknowledgements{Author partially supported by NSF grants
DMS-0226121 and DMS-0541917.}
\author{David Fisher}
\institution{Department of Mathematics, Indiana University, Bloomington, IN 47405\\
\email{fisherdm@indiana.edu}}
\begin{abstract}
Given a topological group $G$, and a finitely generated group
$\G$, a homomorphism $\pi:\G{\rightarrow}G$ is {\em locally rigid}
if any nearby homomorphism $\pi'$ is conjugate to $\pi$ by a
small element of $G$. In 1964, Weil gave a criterion for local
rigidity of a homomorphism from a finitely generated group $\G$ to
a finite dimensional Lie group $G$ in terms of cohomology of $\G$
with coefficients in the Lie algebra of $G$. Here we generalize
Weil's result to a class of homomorphisms into certain infinite
dimensional Lie groups, namely groups of diffeomorphisms of compact
manifolds. This gives a criterion for local rigidity of group
actions which implies local rigidity of: $(1)$ all isometric
actions of groups with property $(T)$, $(2)$ all isometric actions
of irreducible lattices in products of simple Lie groups and
certain more general locally compact groups and $(3)$ a certain
class of isometric actions of a certain class of cocompact
lattices in $SU(1,n)$.
\end{abstract}
\section{A cohomological criterion for local rigidity and applications}
\label{section:results}
In 1964, Andr\'e Weil showed that a homomorphism $\pi$ from a
finitely generated group $\Gamma$ to a Lie group $G$ is {\em
locally rigid} whenever $H^1(\G,\mathfrak g)=0$. Here $\pi$ is
locally rigid if any nearby homomorphism is conjugate to $\pi$ by
a small element of $G$, $\mathfrak g$ is the Lie algebra of $G$,
and $\G$ acts on $\mathfrak g$ by the composition of $\pi$ and
the adjoint representation of $G$. Weil's proof
also applies to $G$ an algebraic group over a local field of
characteristic zero, but his use of the implicit function theorem
forced $G$ to be finite dimensional. Here we prove the following
generalization of Weil's theorem to some cases where $G$ is an
infinite dimensional Lie group.
\begin{theorem}
\label{theorem:implicit} Let $\Gamma$ be a finitely presented
group, $(M,g)$ a compact Riemannian manifold and
$\pi:\G{\rightarrow}\Isom(M,g){\subset}\Diff^{\infty}(M)$ a
homomorphism. If $H^1(\G,\Vect^{\infty}(M))=0$, the homomorphism
$\pi$ is locally rigid as a homomorphism into $\Diff^{\infty}(M)$.
\end{theorem}
\noindent{\bf Remarks:}\begin{enumerate} \item We also state a
variant of Theorem \ref{theorem:implicit} where $\pi(\G)$ is not
assumed to preserve a metric. This result, Theorem
\ref{theorem:affine}, gives a completely general condition for
local ridigity of $\pi:\G{\rightarrow}\Diff^{\infty}(M)$ in terms
of ``effective vanishing" of $H^1(\G,\Vect^{\infty}(M))$. As the
paper \cite{H1} led to a long series of local rigidity theorems
proved using techniques of hyperbolic dynamics, this paper, and
particularly Theorem \ref{theorem:affine}, open the way for a
wide-ranging geometric approach to questions of local rigidity.
See the end of subsection \ref{subsection:affine} as well as
\cite{F2} for discussion.
\item Variants of the theorem where $\Diff^{\infty}(M)$ is
replaced by a tame Frechet subgroup are also true and follow from
the same proof. See subsection \ref{subsection:subgroups} for
discussion.
\item The $\Gamma$ action on $\Vect^{\infty}(M)$ will be denoted
by $d\pi$ and
$(d\pi(\gamma)V)(x)=D\pi(\gamma)_{\pi(\gamma){\inv}x}V(\pi(\gamma){\inv}x)$
where $Df$ is the differential of $f$. That this yields a
$\Gamma$ action is simply the chain rule.
\end{enumerate}
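\noindent For the reader's convenience we recall, in standard
notation (not taken verbatim from the text above), what vanishing of
this cohomology group means concretely. A $1$-cocycle is a map
$z:\G{\rightarrow}\Vect^{\infty}(M)$ satisfying
\begin{displaymath}
z(\gamma_1\gamma_2)=z(\gamma_1)+d\pi(\gamma_1)z(\gamma_2)
\quad\text{for all } \gamma_1,\gamma_2\in\G,
\end{displaymath}
and $z$ is a coboundary if there is a vector field
$V\in\Vect^{\infty}(M)$ with
\begin{displaymath}
z(\gamma)=V-d\pi(\gamma)V \quad\text{for all } \gamma\in\G.
\end{displaymath}
The hypothesis $H^1(\G,\Vect^{\infty}(M))=0$ asserts that every
cocycle is a coboundary.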
\noindent One can rephrase Theorem \ref{theorem:implicit}
dynamically. The homomorphism $\pi$ defines an action of $\G$ on $M$
by isometries and we will abuse notation by using $\pi$ for both the
action and the homomorphism. Theorem \ref{theorem:implicit} says
that any $C^{\infty}$ action $\pi'$ of $\G$ which is $C^{\infty}$
close to $\pi$ (which is equivalent to a homomorphism
$\pi':\G{\rightarrow}\Diff^{\infty}(M)$ which is close to $\pi$) is
conjugate to $\pi$ by a small $C^{\infty}$ diffeomorphism. In e.g.
\cite{FM2,FM3}, this condition is called {\em $C^{\infty,\infty}$
local rigidity} of $\pi$. I discuss applications in this dynamical
language. The question of whether one could prove local rigidity of
a group action by proving vanishing of $H^1(\G,\Vect(M))$ was first
raised in \cite{Z6}, see also \cite{Z4}. This question has remained
open, but many authors studied conditions under which
$H^1(\G,\Vect(M))$ vanished, with a variety of assumptions on the
regularity of the vector fields, see for example
\cite{H,Ko,L,LZ,Q,Z4}. Vanishing of $H^1(\G,\Vect(M))$ was labelled
{\em infinitesimal rigidity} in the hope that vanishing of $H^1$
would imply local rigidity. The results in \cite[Section 4]{LZ}
apply to isometric actions and so that cohomology vanishing theorem
yields an application of Theorem \ref{theorem:implicit}. As remarked
above Theorem \ref{theorem:affine} below should have applications in
contexts similar to those of \cite{H,Ko,L,Q,Z4}, but these
applications require a stronger condition than vanishing of
$H^1(\G,\Vect^{\infty}(M))$.
To give applications of Theorem \ref{theorem:implicit}, in section
\ref{section:applications}, I establish Criterion
\ref{criterion:vanishing}, which strengthens \cite[Theorem
4.1]{LZ}. I will now describe three applications of Theorem
\ref{theorem:implicit}, all of which are proven using Criterion
\ref{criterion:vanishing} to show vanishing of
$H^1(\G,\Vect^{\infty}(M))$. Criterion \ref{criterion:vanishing} can
be verified easily for many actions using results on vanishing of
finite dimensional cohomology groups in combination with
elementary results on equidistribution. Using Criterion
\ref{criterion:vanishing} or \cite[Theorem 4.1]{LZ}, we obtain a
more geometric proof of the following:
\begin{theorem}\cite{FM2}
\label{theorem:isomrigid} Let $\Gamma$ be a discrete group with
property $(T)$. Then any $C^{\infty}$ Riemannian isometric action
of $\G$ on a compact manifold is $C^{\infty,\infty}$ locally
rigid.
\end{theorem}
\noindent Another, more novel, application is:
\begin{theorem}
\label{theorem:irredlattices} Let $\G$ be an irreducible lattice
in a semisimple Lie group $G$ with real rank at least two. Then
any $C^{\infty}$ Riemannian isometric action of $\G$ on a compact
manifold is $C^{\infty,\infty}$ locally rigid.
\end{theorem}
\noindent Theorem \ref{theorem:irredlattices} applies to irreducible
lattices in products of rank $1$ groups. Not all of these groups
have property $(T)$ and so this theorem has many applications not
covered by Theorem \ref{theorem:isomrigid}. The simplest new group
to whose actions this theorem applies is $SL(2,\Za[\sqrt{2}])$ and
one can construct many interesting examples of groups and actions
not covered by previous work. In fact, Theorem
\ref{theorem:irredlattices} applies much more generally than stated.
First, $\G$ can be an irreducible $S$-arithmetic lattice in a product
of simple algebraic groups over different fields. Also, using a
result from \cite{Md}, the result can be extended to apply to
irreducible lattices in fairly general products of locally compact
topological groups. By results in \cite{Md}, unless $\G$ is
actually $S$-arithmetic, all isometric $\G$ actions on compact
manifolds factor through a finite quotient of $\G$, but there is no
a priori reason that such isometric actions are not ``close to''
faithful, non-isometric actions. For precise statements of these
variants of Theorem \ref{theorem:irredlattices}, see subsection
\ref{subsection:irredlattices}.
For certain cocompact arithmetic lattices $\G$ in a simple group
$G$, the arithmetic structure of $\G$ comes from a realization of
$\G$ as the integer points in $G{\times}K$ where $K$ is a compact
Lie group. In this case it is always true that the projection to $G$ is
a lattice and the projection to $K$ is dense. We say a $\G$ action
is {\em arithmetic} if it is defined by projecting $\G$ to $K$,
letting $K$ act by $C^{\infty}$ diffeomorphisms on a compact
manifold $M$ and restricting the $K$ action to $\G$. Using deep
results of Clozel \cite[Theorem 3.2 and 3.5]{Cl1} concerning
automorphic forms, Theorem \ref{theorem:implicit}, and Criterion
\ref{criterion:vanishing}, one has:
\begin{theorem}
\label{theorem:clozel} For certain congruence lattices
$\G<SU(1,n)$, any arithmetic action of $\G$ is $C^{\infty,\infty}$
locally rigid.
\end{theorem}
\noindent Here by {\em congruence lattice}, we mean an arithmetic
lattice which contains a congruence subgroup as a subgroup of finite
index. The lattices to which Theorem \ref{theorem:clozel}
applies are called fundamental groups of Kottwitz varieties in
\cite{Cl1} and are described below in subsection
\ref{subsection:clozel}. Interestingly, some cocompact congruence
lattices in $SU(1,n)$ have homomorphisms $\rho$ to $\Za$
\cite{Ka,BW}, and so have arithmetic actions with deformations
provided the centralizer $Z$ of $K$ in $\Diff^{\infty}(M)$ is
non-trivial. Having a non-trivial centralizer allows one to deform
the action along the image of the homomorphism
$\sigma_t{\circ}\rho:\G{\rightarrow}Z$ where
$\sigma_t:\Za{\rightarrow}Z$ is any one parameter family of
homomorphisms. This construction can also be applied to actions of
lattices in $SO(1,n)$ where having a homomorphism to $\Za$ is much
more common, see e.g. \cite{Lu}. For $\G$ a lattice in $SU(1,n)$ I
know of no example of a faithful isometric $\G$ action with trivial
centralizer which is not locally rigid. In general the question of
when cohomology classes in $H^1(\G,\Vect(M))$ integrate to
deformations is quite difficult, see \cite{F4} for a discussion of
some examples. In particular, that paper describes isometric
actions of lattices in $SO(1,n)$ on $S^n$ which have no centralizer
and admit many deformations.
The proof of Theorem \ref{theorem:implicit}, while modelled on
Weil's proof of his results in \cite{We3}, is significantly more
difficult. The key step in Weil's proof is an application of the
implicit function theorem. Our work requires that we use the more
difficult and delicate implicit function theorem of Hamilton in
place of the standard implicit function theorem \cite{Ha,Ha2}. To
apply Hamilton's theorem, I work with a tame, locally surjective,
exponential map from $\Vect^{\infty}(M)$ to $\Diff^{\infty}(M)$.
This exponential map is defined by taking a vector field $V$ to the
map $\Exp(V)(x)=\exp_xV_x$ where $\exp_x:TM_x{\rightarrow}M$ is the
exponential map for some affine connection on $M$. It is important
not to use the map defined by flowing to time one along $V$, as this
has bad geometric properties, e.g. \cite[I.5.5.2]{Ha}. Another step
in following Weil's argument that is also substantially more
difficult in this setting is the computation relating the formal
output of the implicit function theorem, recorded here as
Proposition \ref{proposition:fromhamilton}, to cohomology of $\G$ in
some module. In Weil's setting, this follows almost trivially from
equivariance of the exponential map. In order to use Hamilton's
implicit function theorem, we work with an exponential map which is
not equivariant, and so the analogous computation is more
complicated, see subsection \ref{subsection:affine}. The exact
criterion for local rigidity in terms of cohomology of $\G$ with
coefficients in $\Vect^{\infty}(M)$ that results from this
computation and the implicit function theorem is Theorem
\ref{theorem:affine} which should have further applications, see
remarks at the end of subsection \ref{subsection:affine}. The main
technical difficulty in proving Theorem \ref{theorem:implicit} from
Theorem \ref{theorem:affine} is producing a tame splitting of a
sequence of tame linear maps, and this is the only point at which I
use the fact that $\pi(\G)$ preserves a metric. Tameness is an
assumption on Fr\'{e}chet spaces, manifolds and maps between them
that is necessary for Hamilton's Nash-Moser implicit function
theorem \cite{Ha,Ha2}, see subsection \ref{subsection:tameness} for
discussion.
{\em Acknowledgements:} This work has benefited from conversations
with many people concerning the $KAM$ method and the related
implicit function theorems. Of particular importance were
conversations with Jerome Benveniste, Dmitry Dolgopyat, Bassam
Fayad, Rafael Krikorian, Gregory Margulis, Peter Sarnak and
Laurent Stolovitch. Also, thanks to Wee Teck Gan, Hee Oh, and
Gregory Margulis for conversations concerning Clozel's work on
property $\tau$, to Emmanuel Breuillard and Dmitry Dolgopyat for
discussions concerning equidistribution and tame estimates, and to
Steve Hurder and Bob Zimmer for useful comments. Special thanks to
Laurent Stolovitch for many useful comments on earlier versions of
this paper and even more useful conversations.
An early draft of this paper contained a gap. That gap is fixed by
the use, in the proof of Corollary \ref{corollary:diophantine}, of
results concerning linear forms on logarithms. Thanks to Jordan
Ellenberg, Yuri Nesterenko, Rob Tijdeman and Michel Waldschmidt
for their help finding my way in the literature on this topic.
\section{Theorem \ref{theorem:implicit}: variants and proof}
\label{section:proofsketch}
\subsection{Background on tameness and Hamilton's implicit
function theorem}\label{subsection:tameness}
In this section we introduce the terminology of graded tame
Fr\'{e}chet spaces, tame maps and tame manifolds. The next
several definitions are excerpted from \cite{Ha}, where they are
discussed in great detail with many examples. The reader
interested in understanding the tame Fr\'{e}chet category cannot
do better than to study that paper.
\begin{defn}\cite[Definition II.1.1.1]{Ha}
\label{definition:graded} A Fr\'{e}chet space is graded if the
topology is defined by a sequence of semi-norms $\|{\cdot}\|_n$,
where $n{\in}{\Za}^+$, such that
$\|{\cdot}\|_m{\leq}\|{\cdot}\|_{m+k}$ for any positive integers
$m$ and $k$.
\end{defn}
\noindent The reader should have in mind the seminorms on smooth
functions or vector fields defined either by the $C^n$ norms or
the $W^{2,n}$ Sobolev norms. With either of these collections of
norms, vector fields or functions become a graded Fr\'{e}chet
space. See \cite[Section 4]{FM2} or \cite[Section 9]{Palais} for
explicit, intrinsic constructions of these norms on a Riemannian
manifold $(M,g)$. In particular, it follows from the discussion in
\cite[Section 4]{FM2} that isometries of the metric on $M$ induce
isometries of all the associated norms. In order to be able to
use Hamilton's work, we need to use a more refined structure on
these spaces.
\begin{defn}\cite[Definitions II.1.2.1 and II.1.3.1]{Ha}
\label{definition:tamelinear} \begin{enumerate} \item A map $L$
from a graded Fr\'{e}chet space $V$ to a graded Fr\'{e}chet space
$W$ is called {\em tame} with {\em tame estimate} of degree $r$
and base $b$ if
$$\|Lv\|_k{\leq}C_k\|v\|_{k+r}$$
for all $k{\geq}b$. Here $C_k$ is a constant depending only on
$k$ and $L$. \item We say a graded Fr\'{e}chet space $V$ is a
{\em tame direct summand} of a graded Fr\'{e}chet space $W$ if
there are tame linear maps $F:V{\rightarrow}W$ and
$G:W{\rightarrow}V$ such that $F{\circ}G$ is the identity on $V$.
\end{enumerate}
\end{defn}
\noindent As in the second half of Definition
\ref{definition:tamelinear}, we often omit the degree and base of
a tame linear map when they are not essential to arguments or
ideas.
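To fix ideas, we include a standard example of a tame linear map (this example is not from \cite{Ha}, but is well known):

```latex
\noindent {\bf Example:} Grade $C^{\infty}(S^1)$ by the $C^n$ norms.
The differentiation operator $L=d/dx$ satisfies
$$\|Lf\|_n=\|f'\|_{C^n}{\leq}\|f\|_{C^{n+1}}$$
for all $n{\geq}0$, so $L$ is tame with degree $r=1$, base $b=0$ and
constants $C_n=1$.
```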
We now define tame Fr\'{e}chet spaces. Given a Banach space $B$,
we let $\tilde \Sigma(B)$ be the space of sequences in $B$. We
define
$$\|\{f_k\}\|_n=\sum_{k=0}^{\infty}e^{nk}\|f_k\|_B$$
for non-negative integers $n$ and restrict our attention to the
subspace $\Sigma(B)\subset\tilde \Sigma(B)$ where $\|{\cdot}\|_n$ is
finite for all $n$. On $\Sigma(B)$, the $\|{\cdot}\|_n$ are an
increasing family of seminorms and so make ${\Sigma(B)}$ into a
graded Fr\'{e}chet space. Hamilton frequently refers to
$\Sigma(B)$ as the space of exponentially decreasing sequences in
$B$, though super-exponentially decreasing is more accurate.
\begin{defn}\cite[Definitions II.1.3.2]{Ha}
\label{defn:tamefrechetspace} We call a graded Fr\'{e}chet space
$V$ a {\em tame Fr\'{e}chet space} if it is a tame direct summand
of a space $\Sigma(B)$, where $B$ is a Banach space and
$\Sigma(B)$ is equipped with the family of seminorms defined
above.
\end{defn}
\noindent For any compact manifold $M$ and any vector bundle $E$
over $M$, the space of smooth sections of $E$ is a tame Fr\'{e}chet
space by \cite[Theorem II.1.3.6 and Corollary II.1.3.9]{Ha}.
Hamilton's proof of these results proceeds by embedding $M$ in some
Euclidean space and using the Fourier transform. We now sketch an
alternate, intrinsic proof of this fact. We let $H=L^2(M,\nu,E)$ be
the space of $L^2$ sections of $E$ with $\nu$ a Riemannian volume on
$M$ defined by a fixed choice of Riemannian metric $g$, and then
form the space $\Sigma(H)$. Let $\Delta$ be the Laplacian on $E$ and
let $E_i$ be the eigenspace corresponding to eigenvalue $\lambda_i$
and let
$$V_k=\sum_{\exp(k)<1+\lambda_i<\exp(k+1)}E_i.$$ If
$\sigma$ is a smooth section of $E$, then we can write
$\sigma=\{\sigma_i\}$ where each $\sigma_i{\in}E_i$ and
$$\sum_{i=0}^{\infty}(1+\lambda_i)^l\|\sigma_i\|_2<\infty$$
for all non-negative integers $l$. In fact, this sum, for a given
value of $l$, is exactly $\|\sigma\|_{2,l}$, the $l$th Sobolev
norm of $\sigma$. If we define $\sigma_k$ to be the component of
$\sigma$ in $V_k$, we can then define
$F:C^{\infty}(M,E){\rightarrow}\Sigma(H)$ by
$\sigma{\rightarrow}\{\sigma_k\}$ and
$G:\Sigma(H){\rightarrow}C^{\infty}(M,E)$ by taking
$\{\sigma_k\}{\rightarrow}\sum_{k=0}^{\infty}\sigma_k$. It is
clear that $F{\circ}G$ is the identity on $C^{\infty}(M,E)$.
Tameness of both maps is immediate if we consider the Sobolev
norms on $C^{\infty}(M,E)$ and follows from the Sobolev embedding
theorems for the uniform norms.
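To indicate the shape of the tame estimates in this sketch, using the eigenfunction description of $\|\sigma\|_{2,l}$ above, here is a back-of-the-envelope computation (a sketch only, with constants and the exact loss of derivatives not optimized):

```latex
\noindent Since $e^{nk}{\leq}(1+\lambda_i)^n$ whenever
$\exp(k)<1+\lambda_i$, and since
$\|\sigma_k\|_2{\leq}\sum_i\|\sigma_i\|_2$ with the sum taken over
eigenvalues in the $k$th block, we have
$$\|F\sigma\|_n=\sum_{k=0}^{\infty}e^{nk}\|\sigma_k\|_2
{\leq}\sum_{i=0}^{\infty}(1+\lambda_i)^n\|\sigma_i\|_2=\|\sigma\|_{2,n},$$
so $F$ satisfies a tame estimate of degree $0$ in the Sobolev
grading. In the other direction, the Weyl law bounds the number of
eigenvalues in the $k$th block by $Ce^{mk}$ for some $m$ depending
only on $\dim M$, and this gives a tame estimate for $G$ whose degree
is controlled by $m$.
```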
To move out of the linear category, we need a definition of
tameness for non-linear maps.
\begin{defn}{\cite[II.2.1.1]{Ha}}
\label{definition:tamenonlinear} Let $V$ and $W$ be graded
Fr\'{e}chet spaces, $U$ an open subset of $V$ and
$P:U{\rightarrow}W$ a non-linear map. Then we say $P$ satisfies a
{\em tame estimate} of degree $r$ and base $b$ if
$\|P(v)\|_n{\leq}C_n(1+\|v\|_{n+r})$ for all $n{\geq}b$. Here
$C_n$ is a constant depending only on $P$ and $n$. We say a map
$P$ is tame if it is continuous and satisfies some tame estimate
on a neighborhood of every point.
\end{defn}
The notion of a smooth tame manifold is now quite natural. It is
a manifold with charts in a tame Fr\'{e}chet space where the
transition functions are smooth tame maps. Similarly, a smooth
tame Lie group is a group which is a smooth tame manifold where
the multiplication and inverse maps are smooth and tame. For $M$ a
compact manifold, $\Diff^{\infty}(M)$ is proven to be a smooth
tame Lie group in \cite[Theorem II.2.3.5]{Ha}. We remark that it
is possible to see the tame structure on $\Diff^{\infty}(M)$ by
choosing an affine connection on $M$ and defining charts at the
identity as maps from $\Vect^{\infty}(M)$ defined by the
exponential map associated to the connection. This is equivalent
to what is done in \cite{Ha}.
We now recall the statement of Hamilton's implicit function
theorem for short exact sequences.
\begin{theorem}\cite[Theorem III.3.1.1]{Ha}
\label{theorem:hamilton} Let $A,B$ and $C$ be tame Fr\'{e}chet
spaces and $U,V$ and $W$ open sets in $A,B$ and $C$ respectively.
Let
$$\xymatrix{
U\ar[r]^{F}&{V}\ar[r]^{G}&{W}&\\
}$$ \noindent where $F$ and $G$ are smooth tame maps and
$G{\circ}F=0$. Suppose for each $f{\in}{U}$ the image of $DF_f$ is
the entire null space of $DG_{F(f)}$. Further suppose there are
smooth tame maps
$$VF:U{\times}B{\rightarrow}A \hskip .5in
VG:U{\times}C{\rightarrow}B$$ \noindent such that
$VF_f:B{\rightarrow}A$ and $VG_f:C{\rightarrow}B$ are linear for
every $f{\in}U$ and
$$DF_f{\circ}VF_f+VG_f{\circ}DG_{F(f)}=\Id_B$$
for all $f{\in}U$. Then for any $f_0{\in}U$ the image of a
neighborhood of $f_0$ fills out a neighborhood of $g_0=F(f_0)$
in the subset of $B$ where $G(g)=0$. Moreover, we can find a
smooth tame map $S:V'{\rightarrow}U'$ from a neighborhood $V'$ of
$g_0$ to a neighborhood $U'$ of $f_0$ such that $F{\circ}S(g)=g$
for any $g$ in $V'$ where $G(g)=0$.
\end{theorem}
\noindent The proof of this theorem occupies section $2$ of \cite{Ha2}.
There is a more intrinsic statement of this result as
\cite[Theorem III.3.1.2]{Ha} which does not depend on a choice of
basepoint $0$ in $C$ to write the equation $G{\circ}F=0$. In our
context the group structure on the spaces we consider defines
natural choices of basepoints and so we can work with the simpler
statement given here. In our applications, we will be able to use
a group action to reduce the problem of constructing $VF$ and $VG$
to constructing $VF_{f_0}$ and $VG_{f_0}$. More precisely, we use
the following lemma which is essentially \cite[Lemma 4.3]{Be}, see
also \cite{Ha3}.
\begin{lemma}
\label{lemma:equivariance} Let $A,B,C,U,V,W$ and $F,G$ be as in
Theorem \ref{theorem:hamilton}. Assume there is a smooth tame Lie
group $D$ and local smooth tame $D$ actions on $U,V$ and $W$ such
that the action on $U$ is simply transitive and $F$ and $G$ are
equivariant. Then to obtain the conclusions of Theorem
\ref{theorem:hamilton} it suffices to produce smooth tame linear
maps $VF_{f_0}:B{\rightarrow}A$ and $VG_{f_0}:C{\rightarrow}B$
such that
$$DF_{f_0}{\circ}VF_{f_0}+VG_{f_0}{\circ}DG_{F(f_0)}=\Id_B.$$
\end{lemma}
\noindent The point of the lemma is that one need only find the
tame splitting of the complex at a single point rather than in a
neighborhood. To prove the lemma, one simply takes the tame
splitting at $f_0$ and produces a tame splitting in the
neighborhood by translating the splitting by the $D$ action. We
refer the reader to \cite{Be} for a more detailed proof.
\subsection{A variant of Theorem
\ref{theorem:implicit} for arbitrary smooth actions}
\label{subsection:affine}
Let $M$ be a compact manifold and $\Vect^{\infty}(M)$ the graded Fr\'{e}chet
space of $C^{\infty}$ vector fields on $M$. Given an affine
connection $\nabla$ there is an $\Aff(M,\nabla)$ equivariant, tame
exponential map $\Exp$ from $\Vect^{\infty}(M)$ to $C^{\infty}(M,M)$
defined by $\Exp(V)(x)=\exp_x(V_x)$ where $\exp_x$ is the
exponential map defined by $\nabla$. The map $\Exp$ is a
diffeomorphism from a neighborhood $W$ of the zero vector field to a
neighborhood $U$ of the identity in
$\Diff^{\infty}(M){\subset}C^{\infty,\infty}(M)$. Note that if
$\nabla$ is not complete, the map $\Exp$ may not be defined on all
of $\Vect^{\infty}(M)$, but this is not relevant to the arguments
given here. For our purposes it always suffices to work with the
Levi-Civita connection defined by some metric.
For the remainder of this section, we fix a finitely presented group
$\G$ and a presentation of $\G$. This is a finite collection $S$ of
generators $\g_1,{\ldots},\g_k$ and finite collection $R$ of
relators $w_1,{\ldots},w_r$ where each $w_i$ is a finite word in the
$\g_j$ and their inverses. More formally, we can view each $w_i$ as
a word in an alphabet on $k$ letters and their formal inverses. Let
$\pi:\G{\rightarrow}\Diff^{\infty}(M)$ be a homomorphism, which we
can identify with a point in $\Diff^{\infty}(M)^k$ by taking the
images of the generators. We have a complex:
\begin{equation}
\label{equation:initialsequence} \Seqinitial
\end{equation}
\noindent where $P$ is defined by taking
$\psi$ to $(\psi\pi(\g_1)\psi{\inv}, {\ldots},
\psi\pi(\g_k)\psi{\inv})$ and $Q$ is defined by viewing each $w_i$
as a word in $k$ letters and taking $(\psi_1,{\ldots},\psi_k)$ to
$(w_1(\psi_1,{\ldots},\psi_k),{\ldots},w_r(\psi_1,{\ldots},\psi_k))$.
Letting $\Id$ be the identity map on $M$, it follows that
$P(\Id)=\pi$ and $Q(\pi)=(\Id,{\ldots},\Id)$. Note that
$Q{\inv}(\Id,{\ldots},\Id)$ is exactly the space of $\G$ actions.
While this is a closed subset of $\Diff^{\infty}(M)^k$, it is
unclear that it is a manifold or even a union of manifolds in a
neighborhood of any point. In the finite dimensional setting, this
set is an algebraic variety, and therefore a manifold at ``most''
points.
The tangent space of $\Diff^{\infty}(M)$ at any point is
identified with $\Vect^{\infty}(M)$. To avoid notational
confusion, we let $A=\Diff^{\infty}(M), B=\Diff^{\infty}(M)^k$ and
$C=\Diff^{\infty}(M)^r$. Note that $A,B$ and $C$ are tame
manifolds. Then the complex in $(\ref{equation:initialsequence})$
becomes:
\begin{equation}
\label{equation:letters}
\xymatrix{A\ar[r]^{P}&{B}\ar[r]^{Q}&{C}&\\}
\end{equation}
\noindent and we can also consider the derivative complex of the
complex in $(\ref{equation:letters})$:
\begin{equation}
\label{equation:derivativecomplex}
\xymatrix{TA\ar[r]^{DP}&{TB}\ar[r]^{DQ}&{TC}&\\}
\end{equation}
By Theorem \ref{theorem:hamilton}, local rigidity follows if there
exist smooth tame maps $VP$ and $VQ$ that split the sequence:
\begin{equation}
\label{equation:splittingsequence}
{\xymatrix{
{U{\times}\Vect^{\infty}(M)}\ar[r]^{DP}&{U{\times}\Vect^{\infty}(M)^k }\ar[r]^{DQ}&{U{\times}\Vect^{\infty}(M)^r}&\\
}}
\end{equation}
\noindent i.e. tame smooth maps
$VP:U{\times}\Vect^{\infty}(M)^k{\rightarrow}U{\times}\Vect^{\infty}(M)$
and
$VQ:U{\times}\Vect^{\infty}(M)^r{\rightarrow}U{\times}\Vect^{\infty}(M)^k$
such that $DP{\circ}VP+VQ{\circ}DQ$ is the identity on
$\Vect^{\infty}(M)^k$ for any $x{\in}U$. Note that the maps $P$
and $Q$ are $\Diff^{\infty}(M)$ equivariant where
$\Diff^{\infty}(M)$ acts on $A$ by translation and on $B$ and $C$
by conjugation. Since the $\Diff^{\infty}(M)$ action on $A$ is
simply transitive, we can use this to reduce the problem of
finding a splitting in a neighborhood $U$ to the problem of
finding a splitting at $\Id$ by Lemma \ref{lemma:equivariance}
above. I.e. we need only find a splitting of the sequence:
\begin{equation}
\label{equation:splittingsequencepoint} \Seqpoint
\end{equation}
\noindent Note that to this point in the discussion we are not
assuming that $\pi(\G)$ preserves any geometric structure of any
kind. The discussion so far yields the following technical
result.
\begin{proposition}
\label{proposition:fromhamilton} Let $\G$ be a finitely presented
group, $M$ be a compact manifold,
$\pi:\G{\rightarrow}\Diff^{\infty}(M)$ be a homomorphism, and let
$P$ and $Q$ be the maps defined at the beginning of this section.
Then $\pi$ is locally rigid provided the tame linear complex in
$(\ref{equation:splittingsequencepoint})$ is tamely split. In
other words, the action is locally rigid provided there exist maps
$VP:\Vect^{\infty}(M)^k{\rightarrow}\Vect^{\infty}(M)$ and
$VQ:\Vect^{\infty}(M)^r{\rightarrow}\Vect^{\infty}(M)^k$ such that
$$DP_{\Id}{\circ}VP+VQ{\circ}DQ_{\pi}=\Id_{\Vect^{\infty}(M)^k}.$$
\end{proposition}
\noindent {\bf Remark:} To apply Theorem \ref{theorem:hamilton},
we are identifying a neighborhood of the identity in
$\Diff^{\infty}(M)$ with a neighborhood of $0$ in
$\Vect^{\infty}(M)$ via the map $\Exp$. This identification
naturally identifies the point $(\Id,{\ldots},\Id)$ in $C$ with
the $0$ in $\Vect^{\infty}(M)^r$.
We now compute the derivatives $DP_{\Id}$ and $DQ_{\pi}$
explicitly. This relates the sequence in line
$(\ref{equation:splittingsequencepoint})$ to the sequence of
$\Vect^{\infty}(M)$ valued cochains on $\G$. We identify the
cohomology of $\G$ with the simplicial cohomology of a simplicial
$K(\G,1)$ space with one vertex, one edge for each generator of
$\G$ and one two cell for each relator of $\G$. We will write
$C^i(\G,\Vect^{\infty}(M))$ for $i$-cochains on $\G$ with values
in $\Vect^{\infty}(M)$. One can identify $0$-cochains
$C^0(\G,\Vect^{\infty}(M))$ with $\Vect^{\infty}(M)$,
$1$-cochains, $C^1(\G,\Vect^{\infty}(M))$, with maps from $S$ to
$\Vect^{\infty}(M)$ or equivalently $\Vect^{\infty}(M)^k$, and
$2$-cochains, $C^2(\G,\Vect^{\infty}(M))$, with maps from $R$ to
$\Vect^{\infty}(M)$ or equivalently $\Vect^{\infty}(M)^r$. In
these identifications, the (co)differentials $d_0$ and $d_1$ can
be written explicitly as
$$d_0(v)=\bigl(v-d\pi(\g_j)v\bigr)_{j=1}^{k}$$ and, writing each
relator as $w_i=\g_{i,1}{\cdots}\g_{i,l(w_i)}$ with each $\g_{i,j}$ a
generator or the inverse of a generator,
$$d_1(f)=\left(\sum_{j=1}^{l(w_i)}d\pi(\g_{i,1}{\cdots}\g_{i,j-1})f(\g_{i,j})\right)_{i=1}^{r}$$
where we interpret the empty product (when $j=1$) as $d\pi(e)$,
$e{\in}\G$ the identity. We now
want to show that the complex in
$(\ref{equation:splittingsequencepoint})$ is identical to the
complex:
$${\xymatrix{
{C^0(\G,\Vect^{\infty}(M))}\ar[r]^{d_0}&{C^1(\G,\Vect^{\infty}(M))}\ar[r]^{d_1}&{C^2(\G,\Vect^{\infty}(M))}&\\
}}$$ \noindent Since we already have identifications of spaces
$C^0(\G,\Vect^{\infty}(M))\simeq\Vect^{\infty}(M)$,
$C^1(\G,\Vect^{\infty}(M))\simeq\Vect^{\infty}(M)^k$ and
$C^2(\G,\Vect^{\infty}(M))\simeq\Vect^{\infty}(M)^r$, it suffices
to prove the following:
\begin{proposition}
\label{proposition:cohomology} With the identifications above, we
have
\begin{equation}
\label{equation:d1} DP_{\Id}(v)=d_0(v) \end{equation}
and
\begin{equation}
\label{equation:d2} DQ_{\pi}(v_1,\cdots,v_k)=d_1(f)
\end{equation}
\noindent where $v_i=f(\g_i)$ if
$f{\in}C^1(\G,\Vect^{\infty}(M))$.
\end{proposition}
\noindent To prove the proposition, we need some elementary facts
concerning the exponential map. We fix an affine connection
$\nabla$ on $M$ and choose a cover $\mathcal U$ of $M$ by normal,
convex neighborhoods, i.e. neighborhoods such that any two points
in the neighborhood are joined by a geodesic lying entirely in the
neighborhood. There exists an $\varepsilon>0$ such that we can
choose a further cover $\mathcal V$ such that for any
$V{\in}\mathcal V$ there is a neighborhood $U{\in}\mathcal U$
such that for any $X\in\Vect^{\infty}(M)$ with
$\|X\|_{C^1}<\varepsilon$ the maps $x{\rightarrow}x+X_x$ and
$x{\rightarrow}\exp_xX_x$ define maps from $V$ to $U$. In what
follows, we use this fact to write the difference of two nearby
diffeomorphisms as a vector field.
\begin{lemma}
\label{lemma:quadraticerror} Let $X,Y{\in}\Vect^{\infty}(M)$ and
$V{\subset}U$ as described above. Then for $t$ small enough,
$\Exp(tX)\circ\Exp(tY)=\Exp(t(X+Y))+t^2Z$ on $V$ where $Z$ is some
vector field on $U$ with $\|Z\|_n{\leq}C\max(\|X\|_{n+1},\|Y\|_n)$.
\end{lemma}
\noindent {\bf Proof.} This is a consequence of the defining
equation of geodesics and some elementary calculus. Let
$(x_1,\ldots,x_n)$ be coordinates on a neighborhood $U$ in $M$ and
$(x_1,\ldots,x_n,p_1,{\ldots},p_n)$ coordinates on $TU$. Recall
that the geodesic equation can be written as the pair of coupled
equations:
$$\frac{dx_k}{dt}=p_k$$
\noindent and
$$\frac{dp_k}{dt}=-\sum_{i,j}\Gamma^k_{ij}p_ip_j$$
\noindent where $\Gamma^k_{ij}$ are the Christoffel symbols for the
connection in the local coordinate system. From these formulas, it
is clear that, up to an error that is quadratic in $t$, geodesics are
straight lines in these coordinates. The change in $X$ from $m$ to
$\Exp(tY)(m)$ is of order $t$, which yields another quadratic error
and also accounts for the need to control $\|Z\|_n$ by
$\|X\|_{n+1}$. \qed
\noindent{\bf Remark:} Lemma \ref{lemma:quadraticerror} implies
that $\Exp(tX)\Exp(-tX)$ is quadratic in $t$ and so
$\Exp(tX){\inv}$ is equal to $\Exp(-tX)$ to first order in $t$.
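The quadratic behavior above can be made explicit by a second-order Taylor expansion along a geodesic (a standard computation, recorded here for convenience):

```latex
$$x_k(t)=x_k(0)+tp_k(0)-\frac{t^2}{2}\sum_{i,j}\Gamma^k_{ij}(x(0))p_i(0)p_j(0)+O(t^3),$$
\noindent so in local coordinates $\Exp(tX)(x)=x+tX_x+O(t^2)$, with
the quadratic term controlled by the Christoffel symbols on the chart
and the $C^0$ norm of $X$.
```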
\begin{lemma}
\label{lemma:differencequotient} For any
$X{\in}\Vect^{\infty}(M)$,
$$\lim_{t{\rightarrow}0}\frac{\Exp(tX)-\Id}{t}=X,$$
where the difference is interpreted as a vector field as above.
\end{lemma}
\noindent {\bf Proof.} Again this is a straightforward computation
from the geodesic equation which implies that the difference
between $\Exp(tX)x$ and $x+(tX)_x$ is quadratic in $t$. \qed
\begin{lemma}
\label{lemma:notaffine} For any $X{\in}\Vect^{\infty}(M)$ and any
$\phi{\in}\Diff^{\infty}(M)$, we have that
$\Exp(td\phi(X))=\phi{\circ}\Exp(tX)\circ{\phi}{\inv}+t^2Z$ where
$Z$ is some vector field with $\|Z\|_n{\leq}C\|X\|_n$ where $C$
depends on $M$ and $\phi$.
\end{lemma}
{\noindent}{\bf Remarks:}\begin{enumerate} \item If $\phi$
preserves the connection defining $\Exp$, then we can in fact
choose $Z=0$, i.e. $\Exp$ is equivariant. \item The lemma is not
stated entirely precisely: one should make choices of $U$ and $V$
as in Lemma \ref{lemma:quadraticerror}, and these choices will
also depend on $\phi$.
\end{enumerate}
\noindent {\bf Proof.} This fact can be deduced from standard
computations of the derivative of left and right multiplication in
$\Diff^{\infty}(M)$, see e.g. \cite[Example I.4.4.5]{Ha}. The
point is that the right hand side is $\Exp'(tX)$ where $\Exp'$ is
the exponential map defined by pushing forward the connection via
$\phi$. It is easy to see from the geodesic equations, as in the
proof of Lemma \ref{lemma:quadraticerror}, that the difference between
$\Exp'(tX)$ and $\Exp(tX)$ is quadratic in $t$. \qed
\noindent {\bf Proof of Proposition \ref{proposition:cohomology}.}
By definition
$$DP_{\Id}(X)=\lim_{t{\rightarrow}0}\frac{P(\Exp(tX))-P(\Id)}{t}.$$
\noindent Note that
$P:\Diff^{\infty}(M){\rightarrow}\Diff^{\infty}(M)^k$ is defined
component by component, i.e. $P=(P_1,{\ldots},P_k)$ where
$P_i:\Diff^{\infty}(M){\rightarrow}\Diff^{\infty}(M)$ is given by
$P_i(\phi)=\phi\pi(\gamma_i)\phi{\inv}$. It therefore suffices to
compute each $DP_i$. We have
$$DP_i({\Id})(X)=\lim_{t{\rightarrow}0}\frac{P_i(\Exp(tX))-P_i(\Id)}{t}$$
$$=\lim_{t{\rightarrow}0}\frac{\Exp(tX)\pi(\gamma_i)\Exp(tX){\inv}-\pi(\gamma_i)}{t}.$$
Since we identify $T(\Diff^{\infty}(M))_{\pi(\gamma)}$ with
$\Vect^{\infty}(M)$ by multiplying on the right by
$\pi(\gamma){\inv}$, this is equivalent to
$$DP_i({\Id})(X)=\lim_{t{\rightarrow}0}\frac{\Exp(tX)\pi(\gamma_i)\Exp(tX){\inv}\pi(\gamma_i){\inv}-\Id}{t}$$
$$=\lim_{t{\rightarrow}0}\frac{\Exp(tX)\Exp(td\pi(\gamma_i)X){\inv}-\Id}{t}$$
$$=\lim_{t{\rightarrow}0}\frac{\Exp(tX-td\pi(\gamma_i)X)-\Id}{t},$$
where the second equality follows from Lemma
\ref{lemma:notaffine} and the third from Lemma
\ref{lemma:quadraticerror}. The limit is equal to
$X-d\pi(\gamma_i)X$ by Lemma \ref{lemma:differencequotient}. This
proves equation $(\ref{equation:d1})$.
The manipulation to prove equation $(\ref{equation:d2})$ is
similar so we only sketch it. As before, we can write
$Q=(Q_1,{\ldots},Q_r)$ where
$Q_i(\phi_1,\ldots,\phi_k)=w_i(\phi_1,\ldots,\phi_k)$. This
yields that
$$DQ_i({\pi})(X_1,\ldots,X_k)=$$
$$\lim_{t{\rightarrow}0}\frac{Q_i(\Exp(tX_1)\pi(\gamma_1),\ldots
,\Exp(tX_k)\pi(\gamma_k))-Q_i(\pi(\gamma_1),\ldots,\pi(\gamma_k))}{t}$$
$$=\lim_{t{\rightarrow}0}\frac{w_i(\Exp(tX_1)\pi(\gamma_1),\ldots
,\Exp(tX_k)\pi(\gamma_k))-w_i(\pi(\gamma_1),\ldots,\pi(\gamma_k))}{t}$$
$$=\lim_{t{\rightarrow}0}\frac{w_i\big(\Exp(tX_1)\pi(\gamma_1),\ldots
,\Exp(tX_k)\pi(\gamma_k)\big)w_i\big(\pi(\gamma_1),\ldots
,\pi(\gamma_k)\big){\inv}-\Id}{t}.$$ The remaining steps of the
proof are manipulations similar to those above, using
Lemmas \ref{lemma:quadraticerror}, \ref{lemma:differencequotient}
and \ref{lemma:notaffine}. The proof can also be given in the
formalism of the so-called Fox calculus. \qed
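As a consistency check on the identifications above (not needed for the proof, but perhaps clarifying), the relation $d_1{\circ}d_0=0$ follows from a telescoping sum:

```latex
\noindent For $v{\in}C^0(\G,\Vect^{\infty}(M))$ and a relator
$w=\g_1{\cdots}\g_l$, writing $f=d_0(v)$ so that
$f(\g)=v-d\pi(\g)v$, we get
$$d_1(f)_w=\sum_{j=1}^{l}d\pi(\g_1{\cdots}\g_{j-1})\bigl(v-d\pi(\g_j)v\bigr)
=v-d\pi(\g_1{\cdots}\g_l)v=v-d\pi(w)v=0,$$
since $\pi(w)=\Id$ for every relator $w$. This mirrors the relation
$Q{\circ}P=(\Id,{\ldots},\Id)$ on the nonlinear level.
```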
\noindent Combining Proposition \ref{proposition:cohomology} and
Proposition \ref{proposition:fromhamilton}, we have the following:
\begin{theorem}
\label{theorem:affine} Let $\Gamma$ be a finitely presented group,
$M$ a compact manifold, and $\pi:\G{\rightarrow}\Diff^{\infty}(M)$
a homomorphism. If $H^1(\G,\Vect^{\infty}(M))=0$ and the sequence
$${\xymatrix{
{C^0(\G,\Vect^{\infty}(M))}\ar[r]^{d_0}&{C^1(\G,\Vect^{\infty}(M))}\ar[r]^{d_1}&{C^2(\G,\Vect^{\infty}(M))}&\\
}}$$ \noindent admits a tame splitting, then the homomorphism
$\pi$ is locally rigid.
\end{theorem}
\smallskip \noindent {\bf Remarks on further applications:}
In the context of Theorem \ref{theorem:affine}, the general
question of producing splittings seems difficult. However, if $\G$
is a cocompact lattice in a semisimple Lie group with no compact
factors and $\G$ has property $(T)$ of Kazhdan, the action of $\G$
on $M$ is affine and $M$ is parallelizable, then it seems likely
that one will be able to produce a tame splitting of the sequence
in Theorem \ref{theorem:affine} using estimates on Laplacians
produced by variants and generalizations of the Bochner method as
in \cite{FH}. This yields results similar to those in \cite{FM3}
for actions of cocompact lattices in $Sp(1,n)$ and $F_4^{-20}$ as
well as the (cocompact) higher rank lattices considered in
\cite{FM3}. This is work in progress, joint with T.~Hitchman. To
prove theorems as in \cite{FM3} for non-cocompact lattices by
these methods seems quite difficult, except perhaps when the
lattices have $\Qa$ rank $1$.
\subsection{Reductions for the proof of Theorem \ref{theorem:implicit}}
\label{subsection:theorem1.1}
In this section, we reduce the proof of Theorem
\ref{theorem:implicit} from Theorem \ref{theorem:affine} to
certain estimates that we prove in section
\ref{section:estimates}. For this purpose, we now assume $g$ is a
Riemannian metric on $M$ and that
$\pi:\G{\rightarrow}\Isom(M,g){\subset}\Diff^{\infty}(M)$.
Since we are assuming that $H^1(\G,\Vect^{\infty}(M))=0$, the map
$d_0:{C^0(\G,\Vect^{\infty}(M))}{\rightarrow}{\Ker(d_1)}$ is
surjective. To define a splitting we write $\Vect^{\infty}(M)$ as a
Hilbertian direct sum of $\pi(\G)$ invariant finite dimensional
subspaces $V_j$. To do this, we apply the Peter-Weyl theorem to the
closure $K$ of $\pi(\G)$ in $\Isom(M,g)$ which is compact. The
representation of $K$ on $\Vect_{2}(M)$ or $\Vect_{2,k} (M)$ is
therefore a Hilbertian sum of irreducible $K$ (and therefore $\G$)
modules. Since the action of $\Isom(M,g)$ commutes with the
Riemannian Laplacian $\Delta$ on $M$, each $V_j$ is contained in an
eigenspace for the Laplacian, with eigenvalue $\lambda_j$. By
standard elliptic theory, this implies that each $V_j$ consists of
smooth vector fields and that the splitting into $V_j$'s does not depend
on $k$. We split the complex in Theorem \ref{theorem:affine} by
splitting each complex in the sequence of complexes:
\begin{equation} \label{equation:splittingfdsequence}
{\xymatrix{
{V_j}\ar[r]^{d_0}&{V_j^k}\ar[r]^{d_1}&{V_j^r}&\\
}}
\end{equation}
\noindent For each $V_j$ we let $d_i^j$ be the restriction of $d_i$
to $V_j$. We define $(D_1^j)$ to be orthogonal projection onto
$\im(d_0^j)=\ker(d^j_1)$ followed by $(d_0^j){\inv}$, the inverse of
$d_0^j$ viewed as an isomorphism from $\ker(d_0^j)^{\perp}$ to
$\im(d_0^j)=\ker(d^j_1)$. Similarly, we define $(D_2^j)$ to be
orthogonal projection onto $\im(d_1^j)$ followed by $(d_1^j){\inv}$,
the inverse of $d_1^j$ viewed as an isomorphism from
$\ker(d_1^j)^{\perp}$ to $\im(d_1^j)$. This gives a splitting of
equation (\ref{equation:splittingfdsequence}). To split the sequence
in Theorem \ref{theorem:affine}, we let $D_i(\sum_j v_j)= \sum_j
(D_i^j)(v_j)$. At the moment this map is only a formal splitting,
since it is not clear that if $\sum_j v_j$ is smooth then
$D_i(\sum_j v_j)$ is smooth or even in $L^2$. To verify that the
maps $D_i$ map smooth cochains to smooth cochains and that the
splitting is tame, we require upper bounds on $(D_i^j)$, or
equivalently, lower bounds on $d_i^j$. The maps $D_i$ we have
defined are well-defined without estimates on elements that project
non-trivially into only finitely many eigenspaces.
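\noindent As a sketch of why these formal maps split the sequence, the
identity $d_0{\circ}D_1+D_2{\circ}d_1=\Id$ can be checked on each
$V_j^k$: writing $v=v_0+v_1$ with $v_0{\in}\im(d_0^j)=\ker(d_1^j)$ and
$v_1{\in}\ker(d_1^j)^{\perp}$, we have
$$d_0^j D_1^j v = d_0^j (d_0^j){\inv} v_0 = v_0, \qquad
D_2^j d_1^j v = (d_1^j){\inv} d_1^j v_1 = v_1,$$
\noindent so $(d_0^j D_1^j + D_2^j d_1^j)v=v_0+v_1=v$ on each finite
dimensional piece.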
We will work with the Sobolev norms $\|{\cdot}\|_{2,k}$ on
$\Vect^{\infty}(M)$. Since the $C^k$ norms are tamely equivalent to
the Sobolev norms this is equivalent to considering the $C^k$ norms.
A key point is that each $V_j$ is contained in an eigenspace for
some eigenvalue $\lambda_j$ of the Laplacian on vector fields.
Letting $I$ be the identity on $\Vect^{\infty}(M)$, we have:
\begin{equation}
\label{equation:normsoneigenspaces}
\|v\|_{2,k}=\|(I+\Delta)^{k/2}v\|_{2,0}=(1+\lambda_j)^{k/2}\|v\|_{2,0}
\end{equation}
\noindent for every $v{\in}V_j$. The first equality is an
equivalent definition for the Sobolev norm, and the second equality
is the fact that $v{\in}V_j$.
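\noindent Spelled out: since $\Delta v=\lambda_j v$ for $v{\in}V_j$, we have
$$(I+\Delta)^{k/2}v=(1+\lambda_j)^{k/2}v,$$
\noindent and taking $\|{\cdot}\|_{2,0}$ of both sides gives the
second equality.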
We now give a sufficient criterion for tame splitting.
\begin{proposition}
\label{proposition:lowerbounds} For $D_1$ and $D_2$ to define a
tame splitting of the sequence in Theorem \ref{theorem:affine}, it
suffices to find $\epsilon>0$ and $\alpha$ a positive integer such
that
$$\|d_i^j(v)\|\geq \frac{\epsilon}{\lambda_j^{\alpha}}\|v\|$$
\noindent on $\ker(d_i^j)^{\perp}$ for $i=1,2$.
\end{proposition}
\noindent{\bf Proof} The fact that $d_0 \circ D_1 + D_2 {\circ} d_1
=\Id$ formally is immediate from the definitions, so all that
remains is to check that $D_1$ and $D_2$ map smooth chains to smooth
chains and are tame. If $v$ is a smooth $i$-chain and $v=\sum v_j$
with $v_j{\in}V_j$ then, up to absolute constants, the Sobolev norm
$\|v\|_{2,n}$ is $\sum \lambda_j^n\|v_j\|_2$ for every $n$. We can
view $D_i$ as a composition of a projection $\pi_i$ and the map
$d_{i-1}{\inv}=\sum (d_{i-1}^j){\inv}$ which is defined on the image
of the projection. The projections are tame by definition, so we
restrict to chains in the image of the projections. For such a
chain $v=\sum_j v_j$ we have $d_i^j((d_i^j){\inv}(v_j))=v_j$ and
substituting $w_j=(d_i^j){\inv}(v_j)$ into the hypotheses on
$d_i^j$, we see our hypothesis is equivalent to:
$$\|(d_i^j){\inv}(v)\|_2{\leq}\frac{\lambda_j^{\alpha}}{\epsilon}\|v\|_2.$$
\noindent Combined with our earlier observation on Sobolev norms,
this implies that
$$\|d_i{\inv}(v)\|_{2,n} \leq C\sum_j \lambda_j^n \|(d_i^j){\inv}(v_j)\|_2
\leq \frac{C'}{\epsilon}\sum_j \lambda_j^{n+\alpha} \|v_j\|_2 \leq C''
\|v\|_{2, n + \alpha}$$
\noindent where $C''$ depends only on $\epsilon$. \qed
\smallskip \noindent {\bf Remarks on finite regularity and relation to KAM method:} One can write down a sequence like that in equation
$(\ref{equation:initialsequence})$ with $\Diff^l(M)$ in place of
$\Diff^{\infty}(M)$, but in that context the maps $P$ and $Q$ will
not be smooth, and so no implicit function theorem will apply.
Lack of smoothness will derive from the fact that for any smooth
structure on $\Diff^l(M)$, only one of right and left
multiplication is smooth, see \cite[Examples I.4.4.5 and
I.4.4.6]{Ha}. This makes this approach seem unlikely to yield
results in finite regularity, in particular the finite regularity
version of Theorem \ref{theorem:isomrigid} from \cite{FM2} seems
hard to prove by this method.
It should be possible to prove a finite regularity version of
Theorem \ref{theorem:implicit} by replacing the use of the
implicit function theorem with an explicit $KAM$-type iteration.
It is well known that the KAM iteration method and the implicit
function theorem are intimately connected. Even if successful,
this method seems unlikely to yield as sharp a finite regularity
version of Theorem \ref{theorem:isomrigid} as obtained in
\cite{FM2}. For related applications of the $KAM$ method, see
\cite{DK,DoKr}.
It is also possible to work strictly in finite regularity and
prove cohomological criteria for local rigidity, see \cite{AN,Fl}.
The difficulty with these results is that the resulting
cohomological conditions are more difficult to verify and in
particular do not yield a finite regularity analogue of Theorem
\ref{theorem:implicit}.
\section{Tame estimates for the proof of Theorem
\ref{theorem:implicit}} \label{section:estimates}
In this section we complete the proof of Theorem
\ref{theorem:implicit} by verifying the hypotheses of Proposition
\ref{proposition:lowerbounds}. In the first subsection, we see that
$H^1(\Gamma,\Vect^{\infty}(M))=0$ has strong implications for the
group $\pi(\Gamma)$. These constraints are used in the second and
third subsections to prove the desired bounds on $d_0$ and $d_1$
respectively.
\subsection{Structure of actions with $H^1(\Gamma,
\Vect^{\infty}(M))=0$}
Given $\pi:\Gamma{\rightarrow}\Isom(M)$, we let $K =
\overline{\pi(\Gamma)}$. The group $K$ is a compact group with
connected component $K^0$, so $K/K^0$ is finite. There is a largest
finite index normal subgroup $\Gamma^0<\Gamma$ with
$\pi(\Gamma^0)<K^0$. We denote by $\mathfrak k$ the Lie algebra of
$K^0$ and note that $\mathfrak k \subset \Vect^{\infty}(M)$, simply by
differentiating the $K^0$ action. It is also clear that $\mathfrak
k$ is a $\pi(\Gamma)$ invariant subspace of $\Vect^{\infty}(M)$. The
$\pi(\Gamma)$ action on $\mathfrak k$ is the restriction of the
adjoint action. The following proposition follows from the fact
that $H^1(\Gamma, \mathfrak k)=0$ and known results concerning group
homomorphisms and compact groups. We use the convention that a
connected compact group is called {\em semisimple} if it has no
circle factors.
\begin{proposition}
\label{proposition:structure} Under the hypotheses of Theorem
\ref{theorem:implicit}, $K_0$ is semisimple. Furthermore, there is a
$\Qa$ structure on $K_0$ such that:
\begin{enumerate}
\item all representations of $K_0$ are defined over $\Qa$ and,
\item possibly after conjugating the action by an element of
$K_0$, there is a finite algebraic extension $k$ of $\Qa$ such
that all elements of $\pi(\Gamma){\cap}K_0$ have entries in $k$.
\end{enumerate}
\end{proposition}
\noindent{\bf Proof:} We first show semisimplicity of $K^0$
assuming $K=K^0$. We assume $K^0$ is not semisimple and derive a
contradiction. If $K^0$ is not semisimple, then $K^0=K^0_sT$ where
$K^0_s$ is semisimple, $T$ is a torus, $T{\cap}K^0_s=D$ a finite
abelian group, and the product is an almost direct product. The
$K^0_s$ invariant vector fields on $M$ clearly include a copy of
the Lie algebra $\mathfrak t$, on which the $\Gamma$ action is
trivial. The homomorphism $\pi:\Gamma{\rightarrow}K^0$ clearly
yields a homomorphism $\Gamma{\rightarrow}T/D$ with dense image. This
homomorphism clearly admits deformations which implies that
$H^1(\Gamma,\mathfrak t)\neq0$, a contradiction.
Let $F=K/K^0$. There is a natural $F$ action on the set of $K^0$
orbits in $M$; denote the kernel of this action by $F'$. Elements of
$K$ that project to $F'$ map each $K^0$ orbit in $M$ to itself.
Assuming $F=F'$, we can repeat almost the same argument as above.
Here $F'$ acts on the torus $T$ by automorphisms and hence by
permutations of the coordinates. Taking an appropriate diagonal
subgroup $\Delta<T$, we obtain a trivial $F'$ action. We can now
project $\pi(\Gamma)$ to $\Delta/{\Delta{\cap}D}$ and argue as
above.
In the general case $F' \neq F$, we reduce back to the previous
setting. Let $K'<K$ be the preimage of $F'$ and let $\Gamma'$ be
the inverse image of $K'$ in $\Gamma$. We recall that there is a
set $U$ whose complement consists of closed submanifolds of
codimension at least $1$ and which is diffeomorphic to
$K^0/C{\times}S$, on which the $K^0$ action is by left
translations. In particular, we can choose an open, $K'$ invariant
subset $U'$ containing at least half the measure of $M$. Arguing as in
the last paragraph, on the subset of $V=\Vect_2(M)$ supported on
$U'$, we can find a non-zero class represented by smooth vector
fields in $H^1(\Gamma',V)$. The $\Gamma$ (resp. $K$)
representation on $\Vect(M)$ contains the representation $\pi_I$
induced from the $\Gamma'$ (resp. $K'$) representation on $V$. Since
we are dealing with compact groups, we can write the representation
on $\Vect_2(M)$ as direct sum of $\pi_I$ and a complementary
representation. This implies that the cocycle defined above gives
rise to a non-vanishing cocycle in $H^1(\Gamma,\Vect_2(M))$. It is
straightforward from the definition of induction that the cocycle is
represented by smooth vector fields.
That there is a $\Qa$ form of $K_0$ for which all representations
are defined over $\Qa$ is in \cite[Section 3]{Rg2} and a more
general result can be found in \cite[Theorem 1.2]{M1}. It follows
easily that we can represent $K$ as a $\Qa$ group.
By hypothesis $H^1(\Gamma,\mathfrak k)=0$ and so $\Gamma$ is
locally rigid in $K$ by the main result of \cite{We3}. The fact
that after conjugating by an element of $K^0$, the elements of
$\pi(\Gamma)$ have entries in a number field $k$ then follows from
\cite[Proposition 6.6]{Rg}. We remark that to see that the entries
of $\pi(\Gamma^0)$ are defined over a number field $k$, one can
avoid checking $K$ is defined over $\Qa$ by noting that the proof
of \cite[Proposition 6.6]{Rg} only uses the algebraic structure on
the connected component of $K$. This suffices for our
purposes.\qed
\begin{corollary}
\label{corollary:diophantine} Under the hypotheses of Theorem
\ref{theorem:implicit} for any $r>0$ there exists $\alpha \geq 0$
and $\epsilon
>0$ such that for any $\gamma{\in}B_{\Gamma}(e, r)$ we have
$$\|v-d\pi(\gamma)v\|\geq
\frac{\epsilon}{\lambda^{\alpha}}\|v\|$$
\noindent for any $v{\in}V_{\lambda}$ orthogonal to the subspace of
vectors $w$ with $w=d\pi(\gamma)w$.
\end{corollary}
\noindent{\bf Proof:} We begin by assuming that $\pi(\gamma)$ is in
the connected component of $\overline{\pi(\Gamma)}$ and end by reducing
to this case.
Since $\gamma{\in}K_0$, a connected semisimple compact group,
$\gamma$ is contained in some maximal torus $T<K_0$. We recall
that Proposition \ref{proposition:structure} implies that all
$K_0$ representations are rational and that $\gamma$ has entries
in a number field $k$. The action of $\gamma$ on $\Vect_2(M)$ is
the restriction of the $K_0$ action and/or the $T$ action. We
briefly recall a few facts from the representation theory of
compact groups. First, since $T$ is abelian, the irreducible
representations of $T$ are the usual complex dimension one
representations which can be identified with $\Za^d$. The subset of
these representations that occur in a $K_0$ representation are
exactly those that are invariant under the induced action of the
Weyl group of $K_0$ on $\Za^d$, see e.g.\ \cite[Proposition VI.2.1]{BD}.
This subset is a sublattice $\Za^k$ in $\Za^d$. We can decompose
the action of $T$ in any representation of $K_0$ into $2$
dimensional subspaces where ${\bf t}{\cdot}v_{\bf l}=\exp(\iota
2\pi\langle{\bf t},{\bf l}\rangle)v_{\bf l}$ where ${\bf l}{\in}\Za^k$ and the
inner product is defined by taking any element of $\Ra^d$ covering
$t$ in $\Ta^d$ under the usual covering map. In particular, the same
formula holds for ${\bf t}={\bf \pi(\gamma)}$.
To determine the behavior of $\pi(\gamma)$ we choose a basis ${\bf
e_1, \cdots, e_k}$ for $\Za^k$. Note that the basis vectors occur
in some finite collection of representations of $K_0$ all of which
are rational by Proposition \ref{proposition:structure}. Since
$\pi(\gamma)$ can be chosen to have entries in some number field
$k$, the dimension of the $K_0$ representations containing
$v_{\bf e_i}$ is bounded, and $\exp(\iota 2\pi\langle{\bf t},{\bf
e_i}\rangle)$ is an eigenvalue in such a representation, so it
follows that each $\alpha_i=\exp(\iota 2\pi\langle{\bf
\pi(\gamma)},{\bf e_i}\rangle)$ lies in a number field $L$. Writing
$\theta_i=(\langle {\bf \pi(\gamma)},{\bf e_i}\rangle)$ we see
that $\theta_i= \frac{\log(\alpha_i)}{\log(-1)}$ where $\alpha_i$
and $-1$ are both algebraic. Results on linear forms in
logarithms, see e.g. \cite[Theorem 3.1]{Bak} and discussion
following, imply that each $\theta_i$ is diophantine. We prove the
desired estimate using a variant of the same theorem. We have that
$$\|d\pi(\gamma)v_{\bf l}-v_{\bf l}\|=|exp(\iota
2\pi\langle{\bf \pi(\gamma)},{\bf l}\rangle)-1|\|v_{\bf l}\|$$
\noindent so it suffices to bound $|\exp(\iota 2\pi\langle{\bf
\pi(\gamma)},{\bf l}\rangle)-1|$ below by $\frac{C}{\|{\bf l}\|^d}$.
This is equivalent to bounding $\langle{\bf
\pi(\gamma)},{\bf l}\rangle$ away from integers by
$\frac{C'}{\|{\bf l}\|^d}$. This follows as
$$|p-\sum_i 2 l_i
\frac{\log(\alpha_i)}{\log(-1)}|>C|p \log(-1) - \sum_i 2 l_i
\log(\alpha_i)|>\frac{C'}{\|{\bf l}\|^d}$$
\noindent unless the left hand side is zero, in which case $v_{\bf l}$
is $\pi(\gamma)$ invariant by definition. Here we are using
\cite[Theorem 3.1]{Bak} and the discussion on the following page
to bound the linear form in logarithms below by
$\exp(-\log(h){\cdot}C)$ where $h$ is a bound on the heights of
the set $p,l_1, \ldots, l_k$. The height of an integer is just
its absolute value, and for the quantity on the left hand side to
be small, the absolute value of $p$ can be no more than a constant
times the maximal height of the $l_i$'s.
It remains to show a relationship between ${\bf l}$ and $\lambda$.
Let $w{\in}{\mathfrak k_0}$ be the vector field on $K_0$ such that
$\exp(w)=\pi(\gamma)$. Using the $K_0$ action on $M$ and viewing
$v_{\bf l}$ as a vector field on $M$, we can differentiate $v_{\bf
l}$ by $w$ and note that $\|w{\cdot}v_{\bf l}\|= \prod_i |l_i|
\|v_{\bf l}\|$ point-wise. This then implies that the $C^1$ norm
of $v_{\bf l}$ is bounded below by $\|{\bf l}\|$ times the $C^0$
norm of $v_{\bf l}$. This and the definition of Sobolev norm imply
that:
$$\|{\bf l}\|\|v_{\bf l}\|_2\leq\|{\bf l}\|\|v_{\bf l}\|_{C^0}{\leq}
\|v_{\bf l}\|_{C^1}\leq C(1+\lambda)^\frac{dim(M)}{2}\|v_{\bf
l}\|_2.$$
\noindent Simplifying, we see that $\|{\bf l}\|\leq C \lambda ^{d'}$
where $d'=\frac{\dim(M)}{2}$ and $C$ is a constant depending only
on $M$. Therefore we have
\begin{equation}
\label{equation:diophantine} \|d\pi(\gamma)v_{\bf l}-v_{\bf
l}\|{\geq}\frac{\epsilon}{\lambda^\alpha}\|v_{\bf l}\|
\end{equation}
\noindent for some choice of $\epsilon$ and $\alpha$ which depend
on $L,M$ and $\gamma$.
The conclusion of the corollary follows for $K=K_0$, since we
consider only finitely many elements of $\Gamma$. To prove the
general case, we note that there is a number $n$ such that for any
$\gamma{\in}\Gamma$ we have $\pi(\gamma^n){\in}K_0$. This implies
equation $(\ref{equation:diophantine})$ holds up to replacing
$\epsilon$ by $\frac{\epsilon}{n}$ on any subspace where $K_0$
acts non-trivially. Since $K/K^0$ is finite, a stronger bound
holds trivially on $K_0$ invariant vectors, with $\alpha=0$ and
$\epsilon$ depending only on $S$ and $F$. \qed
\subsection{Lower bounds on $d_0$}
\label{subsection:dp}
We remark that the required lower bounds on $d_0$ are an immediate
consequence of Corollary \ref{corollary:diophantine}. In this
subsection, we explain how to adapt the ideas in \cite{Do} to prove
lower bounds for $d_0$ without using the results on linear forms in
logarithms referred to in the proof of Corollary
\ref{corollary:diophantine}. The bound obtained in this way is both
simpler to prove and involves a much smaller loss of regularity.
Let $K$ be a compact group whose connected component is semisimple
and $\G$ a
finitely generated dense subgroup of $K$. Assume that $K$ acts on a
compact manifold $M$. The action preserves some metric $g$ and
therefore commutes with the Laplacian $\Delta$ defined by $g$. As
above, we can write the space of $L^2$ vector fields on $M$ as a
Hilbertian direct sum of subspaces $V_j$ where each $V_j$ is an
irreducible $K$-module and is contained in an eigenspace for
$\Delta$ with eigenvalue $\lambda_j$. We then have the following,
which is a generalization and sharpening of results in
\cite[Appendix A]{Do}.
\begin{theorem}
\label{theorem:dolgopyatplus} There exists $\varepsilon_0>0$ such
that for each $V_j$ which is a non-trivial $\G$-module and every
$v_j{\in}V_j$ there is $\g{\in}S$ such that
$$\|v_j-d\pi(\g)v_j\|_2{\geq}\varepsilon_0(\log(1+\lambda_j))^{-4}\|v_j\|_2.$$
\end{theorem}
\noindent{\bf Remarks:} \begin{enumerate} \item This implies
that $d_0{\inv}$ is tame whenever the closure of $\G$ in
$\Isom(M,g)$ has semisimple connected component. In fact, it
implies the estimate on $d_0$ for Proposition
\ref{proposition:lowerbounds} with $\alpha=1$.
\item In \cite{Do}, this result is proven for the action on
$L^2(M)$ when $M=K/C$ with $\lambda^{-\alpha}$ in place of
$(\log(1+\lambda_j))^{-4}$. The improvement in estimate here comes
primarily from using the results from \cite[Appendix 3]{CN} in
place of those in \cite{Do}. It is actually possible to replace
the $-4$ in the exponent above with something closer to $-2$. This
is unimportant to our applications. See below for further
discussion.
\end{enumerate}
\noindent Before proving the theorem, we recall a fact from
\cite[Appendix 3]{CN}. Since we have fixed a finite generating
set $S$ for $\G$, we can look at sets $B(\G,n)$ which consist of
elements of $\G$ that can be written as words of length less than
$n$ in $S$.
\begin{proposition}
\label{proposition:fromdima} There exists a constant $C$ such
that for any $\varepsilon>0$ and any $k{\in}K$ the set of points
$B(\G,n)k$ forms an $\varepsilon$-net in $K$ where
$n=C(\log(\frac{1}{\varepsilon}))^4$.
\end{proposition}
\noindent{\bf Remarks:}\begin{enumerate} \item In \cite{CN}, this
result is only stated for the case $K=SU(2)$, though it is later
remarked that it holds for $SU(n)$ as well. The fact that it
follows from the same proof for all $K$ we are considering here was
first observed by Michael Larsen (personal communication). \item A
version of this result with $n=\frac{C}{\varepsilon^{\beta}}$ for
constants $C$ and $\beta$ is proven in \cite{Do} and used there to
prove a version of Theorem \ref{theorem:dolgopyatplus} for the $\G$
action on $L^2(M)$.
\end{enumerate}
\noindent {\bf Proof of Theorem \ref{theorem:dolgopyatplus}.} For
now, assume the closure $K$ of $\G$ in $\Isom(M,g)$ is connected.
The fact that $K$ is compact implies that for any unitary
representation of $K$ on a Hilbert space $\fh$ which does not
contain the trivial representation and any unit norm vector
$v{\in}\fh$, there is $k{\in}K$ such that
$\|v-d\pi(k)v\|>\frac{1}{2}$. Let $V_j$ be an irreducible,
non-trivial $K$ submodule of $\Vect_2(M)$. Then for any unit vector
$v{\in}V_j$ there is $k{\in}K$ such that $\|v-
d\pi(k)v\|_2>\frac{1}{2}$. We note that the Sobolev embedding
theorems imply that $v-d\tilde \pi(k)v$ is a smooth section of
$K{\times}TM{\rightarrow}K{\times}M$ with all first derivatives
controlled by a constant $D$ times $(1+\lambda_j)^{\alpha}$ where
$\alpha=\frac{\dim(M)+4}{4}$. Letting
$\varepsilon=\frac{1}{4D(1+\lambda_j)^{\alpha}}$ and
choosing $n$ in Proposition \ref{proposition:fromdima} such that
$n=C(\log(\frac{1}{\varepsilon}))^4$, we can find
$\gamma{\in}B(\G,n)$ such that $\|v-d\pi(\g)v\|_2>\frac{1}{4}$.
Writing $\g=\g_1{\ldots}\g_{l}$ where each $\g_i{\in}S$ and
$l{\leq}n$, this implies that
$$\sum_{i=1}^l\|d\pi(\g_{1}{\ldots}\g_{i-1})v-d\pi(\g_1\ldots\g_i)v\|_2>\frac{1}{4}.$$
Since $d\pi$ is unitary, there is an $i$ such that
$\|v-d\pi(\g_i){\inv}v\|_2\geq\frac{1}{4n}$. By our choice of $n$,
it follows that $$\|v-d\pi(\g_i)v
\|_2\geq\varepsilon_0(\log(1+\lambda_j))^{-4}.$$
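\noindent As a restatement of the pigeonhole step above, we use the
telescoping identity
$$v-d\pi(\g)v=\sum_{i=1}^{l}\left(d\pi(\g_1{\cdots}\g_{i-1})v-d\pi(\g_1{\cdots}\g_{i})v\right)$$
\noindent together with unitarity of $d\pi$, which gives
$\|d\pi(\g_1{\cdots}\g_{i-1})v-d\pi(\g_1{\cdots}\g_{i})v\|_2=\|v-d\pi(\g_i)v\|_2$;
since the $l{\leq}n$ terms sum to more than $\frac{1}{4}$, some term
is at least $\frac{1}{4n}$.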
To complete the proof if $K$ is not connected, we let $\G'<\G$ be
$K^0{\cap}\G$, which is clearly a normal subgroup of finite index.
We then choose generators $S$ of $\G$ which contain a set $S'$ of
generators of $\G'$. The argument above yields the conclusion of
the theorem except on those $V_j$ which are trivial $K_0$ modules
but not trivial $K$ modules. For these $V_j$, the $\G$ action is
actually an action of the finite group $\G/\G'$ and so the existence
of a gap, independent of $\lambda$, is trivial. \qed
\subsection{Lower bounds on $d_1$}
\label{subsection:dq}
In this subsection we prove:
\begin{theorem}
\label{theorem:lowerdq} There exist $\varepsilon_0>0$ and a
non-negative integer $\alpha$ such that for each $V_j$ which is a
non-trivial $\G$-module and every $v{\in}V^s_j$ such that $v \perp
\ker(d_1)$ we have
$$\|d_1(v)\|_2{\geq}\frac{\varepsilon_0}{\lambda_j^{\alpha}}\|v\|_2.$$
\end{theorem}
This is considerably more involved than Theorem
\ref{theorem:dolgopyatplus}. We begin by noting that
$d_1:V^s{\rightarrow}V^r$ and write $d_1^i:V^s{\rightarrow}V$ for
the components. We actually prove the stronger statement that
$$\|d_1^i(v)\|_2{\geq}\frac{\varepsilon_0}{\lambda_j^{\alpha}}\|v\|_2$$
\noindent for any $v \perp \ker(d_1^i)$. We write $d_1^i(v_1,\ldots,
v_s)=\sum_{k=1}^s\sum_j d\pi(\gamma_{j,k}^i)v_k$ where
$\gamma_{j,k}^i$ are sub-words of the $i$-th relator of $\Gamma$. To
simplify notation, we suppress the $1$ for the remainder of this
subsection and write $d^i=d_1^i$. We further rewrite each $d^i$ as
the composition of two operators $\hat d^i:V^s{\rightarrow}V^s$
defined by $\hat d^i=(d^i_k)$ where $d^i_k(v)=\sum_j
d\pi(\gamma_{j,k}^i)v$ and $d^i=S{\circ}\hat d^i$ where
$S:V^s{\rightarrow}V$ is just the sum of the coordinates.
Note that $S$ is uniformly bounded below on $\ker(S)^{\perp}$ and
that $\ker(d^i)= \ker(\hat d^i) \oplus (\hat d^i){\inv}(\ker(S))$.
Letting $W=(\hat d^i){\inv}(\ker(S)^{\perp})$, it suffices to bound
$\hat d^i$ below on $W$. This can be done by bounding below each
$d^i_j$ on $W$. Summarizing, to prove Theorem
\ref{theorem:lowerdq} it suffices to prove the following:
\begin{proposition}
\label{proposition:reducedbound} There exist $\epsilon>0$ and
$\alpha>0$ such that:
\begin{equation}
\label{equation:sufficientdq}
\|d^i_j(v)\|\geq\frac{\epsilon}{\lambda_j^{\alpha}}\|v\|
\end{equation}
for any $v{\in}V_j$ orthogonal to $\ker(d^i_j)$.
\end{proposition}
\noindent Since $\Gamma$ is finitely presented, there are only
finitely many $d^i_j$, and one need not worry whether $\epsilon$ and
$\alpha$ depend on $i$ and $j$ or not. The proof of equation
(\ref{equation:sufficientdq}) is a study of averaging operators of
the form $Av = \sum_j d\pi(\gamma_{j})v$. Clearly the $d^i_j$ have
this form. The principal difficulty here is that we are making no
assumption on the group $\Gamma_A$ generated by the $\gamma_j$
associated to a particular $A$. Since we cannot use any group
property, instead we control $A$ by controlling the individual
$d\pi(\gamma_j)$, or more precisely, elements of the form
$d\pi(\gamma_i{\inv}\gamma_{j})$. Given Corollary
\ref{corollary:diophantine}, to prove Proposition
\ref{proposition:reducedbound}, it suffices to prove:
\begin{proposition}
\label{proposition:averagingoperatorbounds} Let $V$ be a Hilbert
space, $\Gamma$ a group acting by unitary transformations on $V$
and $Av = \sum_{j=1}^n d\pi(\gamma_{j})v$. If we know that
$\|v-d\pi(\gamma_i{\inv}\gamma_j)v\|\geq\eta\|v\|$ whenever $v$ is
perpendicular to the subspace where
$w=d\pi(\gamma_i{\inv}\gamma_j)w$ then $\|Av\|\geq\eta\|v\|$ on
$\ker(A)^{\perp}$.
\end{proposition}
The proof depends on the following easy lemma. Given an operator
$A$ on a Hilbert space $V$, we let $V_1^A$ be the subspace of $V$ on
which $A$ is the identity.
\begin{lemma}
\label{lemma:inductionstep} Let $A,A'$ be operators on a Hilbert
space $V$ such that $Av=v-A'v$. Assume that on
$(\ker(A'){\oplus}V^{A'}_1)^{\perp}$, we have
$\|v-A'v\|\geq\eta\|v\|$ and $\|A'v\|\geq\eta\|v\|$. Then
$\|v-Av\|\geq\eta\|v\|$ and $\|Av\|\geq\eta\|v\|$ on
$(\ker(A){\oplus}V^{A}_1)^{\perp}$.
\end{lemma}
\noindent{\bf Proof:} By definition $\ker(A')=V^A_1$ and
$\ker(A)=V^{A'}_1$ so it suffices to work on the complement to the
sum of these two subspaces. The lemma then follows from the
equalities $Av=v-A'v$ and $v-Av=A'v$. \qed
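\noindent In symbols: since $Av=v-A'v$, we have $A'v=0$ if and only
if $Av=v$, and $Av=0$ if and only if $A'v=v$, giving
$$\ker(A')=V_1^{A}, \qquad \ker(A)=V_1^{A'},$$
\noindent while on the common complement $\|Av\|=\|v-A'v\|\geq\eta\|v\|$
and $\|v-Av\|=\|A'v\|\geq\eta\|v\|$.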
\noindent{\bf Proof of Proposition
\ref{proposition:averagingoperatorbounds}} We formally add the
operator $-1$ to the set of elements $d\pi(\gamma_j)$. Since $-1$
clearly satisfies the required bound, this does not affect our
assumptions.
The proof is an induction on $n$. We first note that the desired
bound for $A$ is equivalent to the same bound for
$d\pi(\gamma_1{\inv}){\circ}A$, and $(d\pi(\gamma_1{\inv}){\circ}A)v=v-A'v$
where $A'$ is an operator of the same form as $A$ but with one less
term in the sum. The
induction step is now Lemma \ref{lemma:inductionstep} and the base
case is exactly the hypothesis that
$\|v-d\pi(\gamma_i{\inv}\gamma_j)v\|\geq\eta\|v\|$. \qed
\section{Proving $H^1(\G,\Vect^{\infty}(M))=0$}
\label{section:applications}
To obtain applications of Theorem \ref{theorem:implicit}, we
introduce a criterion for the vanishing of
$H^1(\G,\Vect^{\infty}(M))$. This criterion may be viewed as a
sharpening of \cite[Theorem 4.1]{LZ} and the proof is similar.
We recall from Section \ref{subsection:theorem1.1}, that the space
of vector fields splits as a Hilbertian direct sum
$\oplus_{j=1}^{\infty}V_j$ where each $V_j$ is a finite dimensional,
irreducible $\G$-module, and is contained in an eigenspace for the
Laplacian. Let $\lambda_j$ be the eigenvalue for the eigenspace
containing $V_j$. This is a Hilbertian direct sum in either the
$L^2$ topology or the Sobolev $W^{2,k}$ topology for any value of
k$. Each $V_j$ is contained in an eigenspace for the
Laplacian on vector fields and consists of $C^{\infty}$ vector
fields. Fix a finite generating set $S$ for $\Gamma$ and let
$\|{\cdot}\|_2$ denote the $L^2$ norm on vector fields on $M$.
\begin{criterion}
\label{criterion:vanishing} Let $\G$ be a finitely generated
group, $(M,g)$ a compact Riemannian manifold and
$\pi:\G{\rightarrow}\Isom(M,g)$ a homomorphism. Then the
following are sufficient conditions for vanishing of
$H^1(\G,\Vect^{\infty}(M))$
\begin{enumerate}
\item $H^1(\G,V_j)$ vanishes for every $j$ and \item there exist
$\varepsilon_0>0$ and a non-negative integer $\alpha$ such that for
each $V_j$ which is a non-trivial $\G$-module and every
$v_j{\in}V_j$ there is $\g{\in}S$ such that
$$\|v_j-d\pi(\g)v_j\|_2{\geq}\varepsilon_0\lambda_j^{-\alpha}\|v_j\|_2.$$
\end{enumerate}
\end{criterion}
\noindent{\bf Remarks:}\begin{enumerate} \item Part $(1)$ of the
criterion provides the existence of a formal solution for any
cohomological equation coming from $H^1(\G,\Vect^{\infty}(M))$.
Part $(2)$ guarantees a smooth solution of a smooth cohomological
equation and even provides tame estimates on the size of the
solution in terms of the size of the equation. \item It is
possible to replace the constant
$\varepsilon_0\lambda_j^{-\alpha}$ in $(2)$ with any sequence
$c_j{\inv}$ such that if $\sum_{j=1}^{\infty}a_j \lambda_j^n$
converges for every integer $n$ then $\sum_{j=1}^{\infty}a_j c_j
\lambda_j^n$ converges for every integer $n$. This is weaker than
what is stated above, but is not relevant to our applications. The
estimate above is analogous to the classical condition of ``small
divisors."
\end{enumerate}
Recall that if we view a vector field on $M$ as a sequence
$\{v_j\}$ where $v_j{\in}V_j$, then smoothness is equivalent to
having the $L^2$ norms $\|v_j\|_2$ decay faster than any
polynomial as a function of $\lambda_j$. So smoothness is
equivalent to $\sum_j{\lambda_j^n}{\|v_j\|_2}<\infty$ for all
positive integers $n$.
\noindent {\bf Proof of Criterion \ref{criterion:vanishing}.}
Given a cocycle $c:\G{\rightarrow}\Vect^{\infty}(M)$, we can write
$c=\{c_j\}$ where $c_j:\G{\rightarrow}V_j$. Since $H^1(\G,V_j)=0$
for every $j$, it follows that there is a vector $v_j$ such that
$c_j(\g)=v_j-\pi_j(\g)v_j$ for each $j$ and every $\g{\in}\G$. We
now need only see that $\{v_j\}$ represents an element of
$\Vect^{\infty}(M)$. First note that if $V_j$ is a trivial $\G$
module, then $c_j=0$ and so we can assume $v_j=0$. If $V_j$ is a
non-trivial $\G$ module then there is $\g{\in}S$ with
$\|c_j(\g)\|_2=\|v_j-\pi_j(\g)v_j\|_2{\geq}\varepsilon_0\lambda_j^{-\alpha}\|v_j\|_2$.
Since $\sum\lambda_j^n\|c_j(\g)\|_2$ is finite for all $n$, it
follows that so is $\sum\lambda_j^n\lambda_j^{-\alpha}\|v_j\|_2$
for every $n$, which suffices to show that $v=\{v_j\}$ is smooth
and even to prove a tame estimate
$\|v_j\|_{2,k}{\leq}C_k\|c_j\|_{2,k+\alpha}$. \qed
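\noindent To make the final estimate explicit: for each non-trivial
$V_j$, part $(2)$ gives some $\g{\in}S$ with
$\|v_j\|_2\leq\varepsilon_0^{-1}\lambda_j^{\alpha}\|c_j(\g)\|_2$, so
$$\sum_j \lambda_j^{n}\|v_j\|_2 \leq
\varepsilon_0^{-1}\sum_{\g{\in}S}\sum_j
\lambda_j^{n+\alpha}\|c_j(\g)\|_2 < \infty$$
\noindent for every $n$, since $c$ takes values in
$\Vect^{\infty}(M)$ and $S$ is finite.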
For all of our applications, Criterion
\ref{criterion:vanishing}$(2)$ can be verified by three methods. The
first is to deduce it directly from Corollary
\ref{corollary:diophantine}. To see that this works, one needs to
note that Proposition \ref{proposition:structure} does not actually
depend on vanishing of $H^1(\Gamma, \Vect^{\infty}(M))$ but only on
vanishing of $H^1(\Gamma,\mathfrak k)$ which is immediate from
$(1)$. The second is to use Theorem \ref{theorem:dolgopyatplus}
above which yields the desired estimate. The third is to use instead
a deep arithmetic result of Clozel, which implies condition $(2)$
with $\alpha=0$, \cite{Cl1}. As noted above, $(2)$ is trivial when
$\G$ is finite and has only finitely many equivalence classes of
irreducible representations. We briefly explain how to prove a
version of Criterion \ref{criterion:vanishing}$(2)$ with $\alpha=0$
from work of Clozel in \cite{Cl1}. Clozel's work uses a great deal
of information from the theory of automorphic forms and
representation theory, so though the resulting estimate is stronger,
this approach seems less satisfactory. The theorem \cite[Theorem
3.1]{Cl1} implies that all non-trivial $V_j$ that can arise in this
context are outside some neighborhood of the identity in the Fell
topology on the unitary dual of $\G$. This uses the fact that all
representations of $\G$ which occur, when induced to the group $G$
in which $\G$ is a lattice, occur in the so-called automorphic
spectrum of $G$. The exact statement of \cite[Theorem
3.1]{Cl1} is that the automorphic spectrum of $G$ is outside some
neighborhood of the trivial representation of $G$ in the unitary
dual. This implies that the $\G$ representations we consider are
outside a neighborhood of the identity by elementary properties of
induction of representations.
We conclude this section by verifying that Criterion
\ref{criterion:vanishing} is satisfied in the context of Theorems
\ref{theorem:isomrigid}, \ref{theorem:irredlattices} and
\ref{theorem:clozel}, completing the proofs of those theorems.
\noindent {\bf Proof of Theorem \ref{theorem:isomrigid}.} As
remarked above, vanishing of $H^1(\G,\Vect^{\infty}(M))$ in the
context of Theorem \ref{theorem:isomrigid} was already observed in
\cite{LZ}. In fact Criterion \ref{criterion:vanishing}$(2)$ with
$\alpha=0$ for all unitary representations of $\G$ is Kazhdan's
original definition of property $(T)$. Criterion
\ref{criterion:vanishing}$(1)$ follows from this by a result of
Guichardet \cite{Gu}. \qed
\noindent {\bf Proof of Theorem \ref{theorem:irredlattices}.}
These lattices satisfy the hypotheses of \cite[Introduction,
Theorem (3)]{Ma}. This implies that the connected component of
$\pi(\G)$ is semisimple and that $H^1(\G,V)=0$ for all finite
dimensional representations of $\G$. Theorem
\ref{theorem:irredlattices} then follows from Theorem
\ref{theorem:implicit}, Criterion \ref{criterion:vanishing} and
Theorem \ref{theorem:dolgopyatplus}. \qed
\noindent {\bf Proof of Theorem \ref{theorem:clozel}.} In this
case we are assuming that the closure of $\pi(K)$ is a semisimple
compact group. It remains to verify condition $(1)$ of Criterion
\ref{criterion:vanishing}. This is a special case of \cite[Theorem
3.2 and 3.5]{Cl1}. \qed
\section{Further results}
\label{section:further}
The purpose of this section is to discuss some further
consequences of the methods presented in this paper. The first
subsection discusses local rigidity results for isometric actions of
more general irreducible lattices in products of locally compact
groups. The second subsection describes variants on Theorems
\ref{theorem:implicit} and \ref{theorem:affine} for maps to tame
Lie groups other than $\Diff^{\infty}(M)$. In the final
subsection, we recall from \cite{Cl1}, the definition of the class
of lattices to which Theorem \ref{theorem:clozel} applies.
\subsection{More general results on irreducible lattices}
\label{subsection:irredlattices}
In this subsection, we describe the generalizations of Theorem
\ref{theorem:irredlattices} that were alluded to in the
introduction. The first concerns so-called $S$-arithmetic
lattices. We let $A$ be a finite index set and for each
$\alpha{\in}A$, let $k_{\alpha}$ be a local field and
$\Ga_{\alpha}$ an algebraic group over $k_{\alpha}$. We then let
$G=\prod_A\Ga_{\alpha}(k_{\alpha})$. We call a lattice $\G<G$
irreducible if the projection to each $\Ga_{\alpha}(k_{\alpha})$
is dense. If $|A|>1$ and $\G$ is irreducible, then $\G$ is {\em
arithmetic} in the sense of \cite{Ma}, see \cite[Chapter VIII]{Ma}
for a proof, discussion and examples. In this context, we have
\begin{theorem}
\label{theorem:sarithmetic} Let
$G=\prod_A{\Ga_{\alpha}}(k_{\alpha})$ be as in the last paragraph,
$|A|>1$ and $\G<G$ an irreducible lattice. Then any isometric
action of $\G$ on a compact manifold is $C^{\infty,\infty}$
locally rigid.
\end{theorem}
\noindent The proof of Theorem \ref{theorem:irredlattices} given
above carries through verbatim in this case. The simplest example
of a new group to whose actions this applies is
$SL(2,\Za[\frac{1}{p}])$ and one can construct many more examples.
We remark here that this result, and Theorem
\ref{theorem:irredlattices}, are most interesting when the action
of $\G$ is defined by a homomorphism of $\Gamma$ to a compact
group with non-trivial connected component. There are many
examples of this type in the context of both theorems.
We now introduce some definitions to be able to state a more
general theorem for irreducible lattices in groups that are not a
priori algebraic. Following \cite{Md}, we say a locally compact
group $G$ has {\em few factors} if
\begin{enumerate}
\item Every non-trivial normal subgroup of $G$ is cocompact. \item
There are no non-trivial continuous homomorphisms
$G{\rightarrow}\Ra$. \item Every closed, normal, cocompact
subgroup of $G$ satisfies $(1)$ and $(2)$.
\end{enumerate}
\noindent Examples of groups with {\em few factors} include
topologically simple groups as well as groups
$\Ga_{\alpha}(k_{\alpha})$ above.
Given a product $G=G_1{\times}{\ldots}{\times}G_k$ of locally compact
groups, we call a lattice $\G$ {\em totally irreducible} if its
projection to any $G_i$ is dense and its projection to any proper
sub-product is not discrete. If $G$ is locally compact and
compactly generated and $\G<G$ is cocompact, then $\G$ is finitely
generated. We require additional constraints on $\G$. Let $X$ be a
(right) Borel fundamental domain for $\G$ in $G$ and define
$\chi:G{\rightarrow}\G$ by the equation $g{\in}\chi(g)X$ for all
$g{\in}G$. For an element $\g{\in}\G$, we denote by $l(\g)$ the
word length of $\G$ with respect to some chosen generating set.
Then $\G$ in $G$ is said to be {\em square integrable} provided:
\begin{enumerate}
\item $\G$ is finitely presented and \item
$\int_Xl(\chi(g{\inv}h))dh<\infty$ for all $g$ in $G$. \item the
trivial representation of $G$ is isolated in $L^2(G/\Gamma)$ (i.e.
$\Gamma$ is weakly cocompact).
\end{enumerate}
\noindent This is slightly stronger than the definition in
\cite{Md} where $\G$ is only assumed finitely generated. We can
now state our most general result for irreducible lattices in
products.
\begin{theorem}
\label{theorem:generalproducts} Let $I$ be a finite index set and
let $G_i$ be a locally compact, compactly generated group with few
factors. Let $\G<G=\prod_IG_i$ be a totally irreducible, square
integrable, lattice. Then any isometric $\G$ action on a compact
manifold is $C^{\infty,\infty}$ locally rigid.
\end{theorem}
\noindent The proof requires only a slight modification from the
proof of Theorem \ref{theorem:irredlattices}. If
$H^1(\G,V_j)\neq0$, then there is an infinite image linear
representation of $\G$. By \cite[Theorem 2.4]{Md} this implies
that $\G$ is $S$-arithmetic, which contradicts non-vanishing of
cohomology by \cite[Introduction, Theorem $(3)$]{Ma}. In fact, if
$\pi(\G)$ is infinite, then \cite[Theorem 2.4]{Md} implies that
Theorem \ref{theorem:generalproducts} reduces to Theorem
\ref{theorem:sarithmetic}. If $\pi(\G)$ is finite, then all that
remains is to verify Criterion \ref{criterion:vanishing}$(2)$. As remarked
earlier, this is trivial.
\subsection{Rigidity in tame subgroups of $\Diff^{\infty}(M)$.}
\label{subsection:subgroups}
In this subsection we discuss other versions of Theorem
\ref{theorem:implicit} and \ref{theorem:affine} for various other
tame Lie groups. It is possible to present an axiomatic version of
Theorem \ref{theorem:affine} stated entirely in terms of
homomorphisms $\pi$ from a finitely generated group $\G$ into a tame
Frechet Lie group $D$, given some conditions on the exponential map
of $D$ guaranteeing versions of Lemmas \ref{lemma:quadraticerror},
\ref{lemma:differencequotient} and \ref{lemma:notaffine}. It seems
quite difficult to axiomatize what is needed to prove Theorem
\ref{theorem:implicit} from Theorem \ref{theorem:affine}. In any
case, this type of generality seems to have no real utility as we
know of no natural tame Frechet Lie groups which are not subgroups
of $\Diff^{\infty}(M)$ for some manifold $M$. The most interesting
variant of Theorem \ref{theorem:implicit} that one can prove
concerns volume preserving diffeomorphisms. In what follows, we let
$\nu$ be the Riemannian volume form on $M$ and write
$\Vect_{\nu}^{\infty}(M)$ for divergence free vector fields and
$\Diff_{\nu}^{\infty}(M)$ for diffeomorphisms preserving volume.
\begin{theorem}
\label{theorem:implicitvolume} Let $\Gamma$ be a finitely
presented group, $(M,g)$ a compact Riemannian manifold and
$\pi:\G{\rightarrow}\Isom(M,g){\subset}\Diff_{\nu}^{\infty}(M)$ a
homomorphism. If $H^1(\G,\Vect_{\nu}^{\infty}(M))=0$, the
homomorphism $\pi$ is locally rigid as a homomorphism into
$\Diff_{\nu}^{\infty}(M)$.
\end{theorem}
\noindent{\bf Remarks:}\begin{enumerate} \item The proof of this
theorem is exactly the proof of Theorem \ref{theorem:implicit}
using the fact that $\Diff_{\nu}^{\infty}(M)$ is a closed tame
subgroup of $\Diff^{\infty}(M)$ with tangent space at the identity
$\Vect_{\nu}^{\infty}(M)$, see \cite[Theorem III.2.5.3]{Ha} and
the preceding text for discussion. \item This result yields
variants of Theorems \ref{theorem:isomrigid},
\ref{theorem:irredlattices}, \ref{theorem:clozel},
\ref{theorem:sarithmetic} and \ref{theorem:generalproducts} with
slightly stronger assumptions and conclusions. Namely if one
assumes that the perturbation of the action preserves the
Riemannian volume, then the conjugacy will also be a volume
preserving diffeomorphism. Even in the setting of Theorem
\ref{theorem:isomrigid}, it does not seem straightforward to prove
this from the techniques of \cite{FM2}. \item There is also an
analogous generalization of Theorem \ref{theorem:affine} to the
setting of volume preserving diffeomorphisms, whose statement we
leave to the interested reader. \item Whether there are other
versions of this result for diffeomorphisms preserving say a
contact or symplectic structure depends on whether the relevant
subgroups of the diffeomorphism groups are tame Lie groups. This
question appears to be open, see \cite[Problems III.2.5.5 and
III.2.5.6]{Ha}.
\end{enumerate}
One can also pursue other variants concerning tame Frechet groups
that arise as either sections of principal bundles or subgroups of
diffeomorphism groups preserving foliations. Variants of Theorems
\ref{theorem:affine} are straightforward, but the resulting
cohomological problems seem quite difficult.
\subsection{Fundamental groups of Kottwitz varieties}
\label{subsection:clozel}
In this subsection, we briefly describe the class of lattices to
which Theorem \ref{theorem:clozel} applies. See \cite[Sections
1.2 and 3.1]{Cl1} for more detailed discussion. Another useful
reference for details of some constructions is \cite[Chapter
10]{M}.
We first recall the basic objects from which we will construct
arithmetic lattices in $SU(p,q)$. Let $F$ be a totally real number
field, and $L$ a totally imaginary quadratic extension of $F$.
We choose a central division algebra $D$ over $L$ of degree $n^2$
for $n{\geq}3$ with an antiinvolution $\tau$ whose restriction to
$L$ is the Galois automorphism of $L$ over $F$. We can then
define the group $U(F)=\{d{\in}D|d\tau(d)=1_D\}$. To construct a
lattice in $U(F)$, one chooses a subring ${\mathcal O}$ of $D$
that is a vector space lattice in $D$. These can be shown to
exist easily, see \cite[Lemma 10.40]{M}. The lattice is just
$\G_{\mathcal O}=U(F){\cap}{\mathcal O}$. The key fact
distinguishing the lattices we consider is that we consider only
those arising in $U(F)<D$ and not more general unitary groups $U$
contained in $M_{l{\times}l}(D)$.
To obtain lattices satisfying Theorem \ref{theorem:clozel}, we
need to impose some further restrictions on $D$. Our main
non-trivial assumption on $D$ is that at any place $\nu$ of $L$
the algebra $D_{\nu}=D{\otimes}_{L}L_{\nu}$ is either isomorphic
to $M_n(L_{\nu})$ or a division algebra. As remarked in
\cite{Cl1}, where this condition is called condition $(R)$, this
condition is trivial at infinite places and holds automatically in
at least one finite place. Also as remarked in \cite{Cl1}, this
condition may turn out to be unnecessary. Lastly, we need to
assume that $U(F)$ is isomorphic to $U(p,q){\times}K$ where
$p+q=n$ and $K$ is a product of copies of $U(n)$. This is
equivalent to condition $(K)$ in \cite{Cl1}.
Any finite index subgroup of $\Gamma_{\mathcal O}$ is a lattice in
$U(p,q)$. To apply the results of Clozel, we need to restrict our
attention to lattices that contain congruence subgroups, i.e.
subgroups obtained by reducing modulo a prime of $F$. The fact
that many groups satisfying these conditions exist, and even many
with $p=1$, is discussed in \cite[Section 3.1]{Cl1}.
Children of the revolution
Many in the channel have been rubbing their hands with anticipation at news of what the Rudd Government has dubbed a Digital Education Revolution. Neat slogans aside, new funding of $1.2 billion over five years is being earmarked for technology in education to target a growing need and desire for IT coming from the institutions themselves as well as from the business and wider worlds.
Last modified: 2006-07-15 by rick wyatt
Keywords: brockton | massachusetts |
A white flag, with a town seal in the center and the town name above and state name below. The seal bears an illustration of a native meeting a European and is surmounted by a yellow ribbon bearing the motto EDUCATION INDUSTRY
PROGRESS. The town name and dates are written around the edge. Brockton is in Plymouth County.
Source: Town home page
Dov Gutterman, 15 December 2002
The web page at shows this flag with a narrow gold
border.
Valentin Poposki, 28 September 2005
TITLE: Why is $m_{\ell}$ called the magnetic quantum number? What is its association with magnets?
QUESTION [2 upvotes]: I am going over my quantum lecture notes and I can't seem to link the quantum number $m_{\ell}$ with any magnetic property. It just seems to specify the shape of an orbital with a particular principal quantum number. Is there any reason for it being labelled as magnetic?
REPLY [0 votes]: The wikipedia page linked in my2cts's answer says:
The revolution of an electron around an axis through another object, such as the nucleus, gives rise to the orbital magnetic dipole moment.
I would say that such a statement is not fully consistent with a truly quantum-mechanical description (did they write revolution?), or at the least it would require a few additional words for its justification. Here they are.
States with non-zero magnetic quantum number correspond to complex wavefunctions such that the quantum probability current density
$$
{\bf j}({\bf r})=\frac{1}{2m}\left( \Psi^* {\bf \hat p} \Psi - \Psi {\bf \hat p} \Psi^* \right),
$$
is different from zero as well as the resulting electric current density and magnetic moment
$$
{\bf m}=\frac{e}{2}\int {\bf r}\times {\bf j}({\bf r}) d^{3}{\bf r},
$$
where $e$ is the electron charge.
By performing the integral in the case of the hydrogen atom wavefunctions, one can check the proportionality between magnetic moment and angular momentum via the Bohr magneton ($\mu_B$).
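To make that last claim concrete, here is a numerical sketch of our own (not part of the original answer), in atomic units where $\hbar=m_e=e=1$ and $\mu_B=1/2$. For the hydrogen $2p$, $m_\ell=1$ state the current is purely azimuthal, $j_\phi = m_\ell\,|\Psi|^2/(r\sin\theta)$, and the moment integral evaluates to $m_\ell\,\mu_B$:

```python
import numpy as np
from scipy.integrate import dblquad

M_L = 1  # magnetic quantum number m_l of the chosen state

def psi_sq(r, th):
    # |Psi_{2,1,1}|^2 for hydrogen in atomic units:
    # R_21(r) = r*exp(-r/2)/sqrt(24), |Y_1^1|^2 = 3*sin(th)^2/(8*pi)
    return r**2 * np.exp(-r) * np.sin(th) ** 2 / (64 * np.pi)

def integrand(th, r):
    # z-component of r x j is (r*sin(th)) * j_phi, where
    # j_phi = M_L * |Psi|^2 / (r*sin(th)) for an exp(i*M_L*phi) factor;
    # multiply by the volume element r^2*sin(th) and by 2*pi for the
    # trivial phi integration.
    j_phi = M_L * psi_sq(r, th) / (r * np.sin(th))
    return (r * np.sin(th)) * j_phi * r**2 * np.sin(th) * 2 * np.pi

val, _ = dblquad(integrand, 0, 60, 0, np.pi)  # r in (0, 60], th in (0, pi)
m_z = 0.5 * val  # the e/2 prefactor, with e = 1 in atomic units
# m_z comes out as M_L * mu_B, i.e. 0.5 in atomic units for M_L = 1
```

The radial cutoff at $r=60$ is harmless since the integrand decays like $e^{-r}$.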
Good vision is a reflection of our general health and well-being; more than 80 percent of the sensory information we take in comes through our eyesight. With advancing age, clarity of vision tends to decline gradually, partly due to straining of the eyes and partly due to our diet. One of the major causes of eye problems is staring at a computer screen, which is common in today's style of learning and working. Heat Therapy. By applying heat using a heating pad or hot water bottle, you should see and feel quick results. However, you can take this a step further. Using plastic wrap, contain the heat from the heating pad by wrapping your belly with the plastic. This will speed up the reduction in swelling by centralizing the heat source right where you need it. Don't worry about sweating some, because the pain relief that you get will be well worth it.
This is a link to a New England Journal of Medicine article and chart showing how Australia compares to the U.S. and Sweden. It does not show the ranking, but you can find it by poking around World Health Organization information or with a Google search. I don't have time to find the ranking right now. How did technology that has increased efficiency in every other industry become such a drag on health care? For starters, the people who treat patients did not design or select these systems. They were foisted upon us.
Consider the biggest line items in the 2016 national health-care budget, according to Mr. Keehan and his colleagues: more than $1 trillion for hospital care, $670 billion for physician and clinician services, $360 billion for medication. And compare the often sorry outcomes: more than 1 in 4 patients harmed while in the hospital; more than 12 million serious diagnostic errors each year; a positive response rate of just 25% for patients on the top 10 prescription medications by sales.
To save lots of on prices the French government carried out a monotonous procedure, while Montefiore went state of the art. The sister died in 1993, and the brother, Paul died in 2011 of a coronary heart attack. He was my neighbor and a staunch defender of the free market. Step 5 of 6 – & iquest; You love black chocolate ? Excellent news, this ingredient helps relieve dry cough as a consequence of its content of theobromine. Simply eat 56 grams of black chocolate d RIVER to get to enhance your dry cough. This treatment may be combined with any other home alternate options if you wish to get a greater effect.
They’re sly and thieving; just imagine! But their petty thefts mark the beginning of a resistance which remains plenty.
BOSTON (AP) — The state’s short-lived attempt to tax computer and software services came to an end yesterday when Gov. Deval Patrick signed a bill repealing the unpopular tax after less than two months.
The governor’s office announced the signing in a one-sentence release with no comment from Patrick.
The House and Senate voted this week to approve the bill repealing the 6.25 percent tax, which was part of a transportation financing package approved by lawmakers over the summer. The tax went into effect July 31 and drew intense criticism from technology companies that warned it would stifle innovation and cost jobs.
Patrick, who held a private meeting with business leaders and top lawmakers this month to discuss the so-called tech tax, had previously said he favored repeal, but was worried how the state would make up the estimated $161 million the tax was expected to generate for the state in the current fiscal year.
The legislature did not offer any alternative revenue source in the repeal bill. Democratic leaders pointed out that overall state tax collections were running higher than anticipated and would likely offset the loss from the tax.
Only one legislator, state Rep. Angelo Scaccia, D-Boston, voted against repeal — giving Patrick little recourse even had he sought to veto or amend the bill.
Ann Dufresne, a spokeswoman for the state revenue department, said 192 taxpayers had paid a total of $717,000 in taxes on computer and software services through Sept. 23.
The repeal bill included language requiring that any taxes already paid be refunded, and the department planned to post information on its website Friday explaining how vendors could apply for an abatement and return the tax payments to customers, Dufresne said.
Business groups including the Massachusetts Taxpayers Foundation were gathering signatures to put a repeal question on next year’s state ballot if lawmakers did not act first.
TITLE: Dividing a disk into $n$ equal area parts with lines that all intersect at the same point which is on the circle
QUESTION [2 upvotes]: Let us take an arbitrary point $A_1$ on the circle. Now, the task is to choose $(n-1)$ points $A_2,...A_n$ on the circle which are chosen in such a way that lines $A_1A_2,...,A_1A_n$ divide the corresponding disk into $n$ parts of equal area.
How to prove that this is always possible?
This seems intuitively so obvious but is there some, preferably simple way, to prove it?
We need not to construct the points $A_2,...A_n$, just prove that they exist.
REPLY [0 votes]: Let us work WLOG on the unit circle $S_1$ with point $P(-1,0)$.
The polar equation of $S_1$ with respect to pole $P$ is
$$\tag{1}r=r(\theta)=2 \cos(\tfrac{\theta}{2}), \ \ 0 \leq \theta \leq \pi.$$
Remark: in this way, due to the fact that the angle at the centre is twice the angle at the circumference (see Euclid's proof in (http://aleph0.clarku.edu/~djoyce/java/elements/bookIII/propIII20.html)), $\theta$ can be interpreted as the reference angle with respect to the center $O$ of $S_1$.
The sector area between $\theta_1$ and $\theta_2$ is known to be
$$\tag{2}A_{\theta_1,\theta_2}=\tfrac12 \int_{\theta_1}^{\theta_2}r(\theta)^2\,d\theta=\int_{\theta_1}^{\theta_2}2\cos^2(\tfrac{\theta}{2})\,d\theta =\int_{\theta_1}^{\theta_2}(1+\cos(\theta))\,d\theta = [\theta+\sin(\theta)]_{\theta_1}^{\theta_2}$$
Let us define function $\varphi$ by
$$\tag{3}\varphi(\theta):=\theta+\sin(\theta).$$
Thus, (2) becomes:
$$\tag{4}A_{\theta_1,\theta_2}=\varphi(\theta_2)-\varphi(\theta_1).$$
$\varphi$ is increasing from $0$ to $2 \pi$ because it has a positive derivative.
It suffices now to subdivide the interval $[0,2\pi)$ with the values
$$\tag{5}\theta_k:=\varphi^{-1}(\tfrac{2\pi k}{n}).$$
Remark: $\varphi^{-1}$ has no explicit expression (Inverse of $f(x)=\sin(x)+x$), but is connected to a form of Kepler's equation (https://en.wikipedia.org/wiki/Kepler%27s_equation) and to cycloids.
There is thus no hope of exact values for the $\theta_k$; only a numerical approach is possible. Happily, one can solve the issue by the very simple fixed-point method. Indeed, (5) is equivalent to:
$$\tag{6}\theta+\sin(\theta)=\tfrac{2\pi k}{n}$$
itself equivalent to:
$$\tag{7}\theta=A-\sin(\theta), \ \ \ \text{with} \ \ \ A:=\tfrac{2\pi k}{n},$$
the solution $x$ of (7) will be obtained as the limit of the fixed-point iteration sequence defined thus:
$$\tag{8}\theta_{i+1}=A-\sin(\theta_i) \ \ \ \text{with} \ \ \ \theta_0=A.$$
(Convergence is guaranteed by the fact that the derivative of the function on the RHS of (8) has absolute value $<1$ on open intervals around the roots we are looking for.)
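The iteration (8) is easy to run in practice; here is a minimal sketch (the function and variable names are ours):

```python
import math

def theta_k(k, n, tol=1e-12, max_iter=100_000):
    """Solve theta + sin(theta) = 2*pi*k/n by the fixed-point
    iteration theta_{i+1} = A - sin(theta_i) with theta_0 = A."""
    A = 2 * math.pi * k / n
    theta = A
    for _ in range(max_iter):
        new = A - math.sin(theta)
        if abs(new - theta) < tol:
            return new
        theta = new
    raise RuntimeError("fixed-point iteration did not converge")

# subdivision angles for n = 4 equal-area parts
thetas = [theta_k(k, 4) for k in range(1, 4)]
# each theta satisfies phi(theta) = theta + sin(theta) = 2*pi*k/4
```

The contraction factor near a root $\theta^*$ is $|{\cos\theta^*}|<1$, so convergence is geometric.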
Netflix’s upcoming crime anthology series, Seven Seconds, stars Regina King and Russell Hornsby as parents of a black teenager who was killed by a white police officer in a hit-and-run, and his fellow corrupt officers’ attempt to cover up the killing. The series, which was executive-produced by The Killing’s Veena Sud, was inspired by the death of Freddie Gray and the Black Lives Matter movement.
Fifteen-year-old Brenton Butler’s killing in the series riles up the community as his parents Latrice Butler (King) and Isaiah Butler (Hornsby) have their faith in the system and the church tested. With the help of Assistant Prosecutor K.J. Harper (Clare-Hope Ashitey), they attempt to find justice but eventually realize that justice isn’t one-size-fits-all.
Take a look at the exclusive clip below, as Latrice tries to figure out why a police officer would leave one of her son’s origami figures in his hospital room.
Seven Seconds hits Netflix on Feb. 23. Follow #SevenSeconds on Twitter and like its Facebook page.
TITLE: Make $2$ cubes out of $1729$ unit cubes, expected number of times you have to paint
QUESTION [15 upvotes]: I'm trying to solve question 6 from the PuMac 2007 Combinatorics A competition:
Joe has $1729$ randomly oriented randomly arranged unit cubes, which are initially unpainted. He makes two cubes of sidelengths $9$ and $10$ or of sidelengths $1$ and $12$ (randomly chosen). These cubes are dipped into white paint. Then two more cubes of sidelengths $1$ and $12$ or $9$ and $10$ are formed from the same unit cubes, again randomly oriented and randomly arranged, and dipped into paint. Joe continues this process until every side of every unit cube is painted. After how many times of doing this is the expected number of painted faces closest to half of the total?
Here's what I got so far:
${1\over2}$ chance of this happening: If you make two cubes of side lengths $9$ and $10$, then $16$ cubes will have $3$ faces sharing a vertex painted, $12(8 + 7) = 180$ cubes will have $2$ faces sharing an edge painted, $6(8^2 + 7^2) = 678$ cubes will have $1$ face painted, and the remaining $7^3 + 8^3 = 855$ cubes will have no faces painted.
${1\over2}$ chance of this happening: If you make two cubes of side lengths $1$ and $12$, then $1$ cube will have all $6$ faces painted, $8$ cubes will have $3$ faces sharing a vertex painted, $12(10) = 120$ cubes will have $2$ faces sharing an edge painted, $6(10^2) = 600$ cubes will have $1$ face painted, and the remaining $10^3 = 1000$ cubes will have no faces painted.
But I'm stuck as this point, and don't know what to do next. Any help would be well-appreciated.
REPLY [10 votes]: Thankfully we are being asked about the expected value of this random variable. The random variable is a sum of simpler random variables, namely the indicators of whether each unit face has been painted or not, so we can exploit linearity of expectation.
If we are able to find a good expression for the expected value after $k$ steps, then we must simply find which $k$ yields the value closest to $6\times 1729/2$.
First note all $6\times 1729$ of the faces are identical. The expected number of unpainted faces after $k$ steps is equal to the probability $p_k$ that a face remains unpainted after $k$ steps multiplied by $6\times 1729$.
Hence we are simply asked to find the value of $k$ for which $p_k$ is closest to $\frac{1}{2}$.
Now, since every step is independent we begin by finding the probability $q$ that a face is painted in a single step. Assume we are in the $9,10$ case and notice that each possible location and orientation of the unit cube carrying our face is equally likely, so the probability that the face ends up on the boundary (and hence gets painted) is $\frac{6\times9^2 + 6\times 10^2}{6\times1729}$. For the other case we get $\frac{6\times 1^2 + 6\times 12^2}{6\times 1729}$, so $q = \frac{1}{2}\frac{6\times9^2 + 6\times 10^2}{6\times 1729} + \frac{1}{2}\frac{6\times 1^2 + 6\times 12^2}{6\times 1729} = \frac{9^2 + 10^2 + 1^2 +12^2}{2\times 1729} = \frac{163}{1729} \approx 0.094$
Since $p_k$ is the probability that a face is never colored we have $p_k = (1-q)^k$, so we must solve $\frac{1}{2} = (1-q)^k$, which yields $k \approx 7.0$. The integer value that gives the result closest to $\frac{1}{2}$ is $7$.
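As a sanity check on the arithmetic, a few lines suffice (a sketch; the names are ours):

```python
import math

faces = 6 * 1729                       # total unit faces, 6 per small cube
painted_9_10 = 6 * 9**2 + 6 * 10**2    # boundary faces of the 9- and 10-cubes
painted_1_12 = 6 * 1**2 + 6 * 12**2    # boundary faces of the 1- and 12-cubes
q = 0.5 * (painted_9_10 + painted_1_12) / faces  # per-step paint probability

k = math.log(0.5) / math.log(1 - q)    # solve (1 - q)^k = 1/2
```

This gives $q = 163/1729 \approx 0.0943$ and $k \approx 7.0$; indeed $(1-q)^7 \approx 0.5000$, remarkably close to one half.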
Actor who starred in the movie Ghosts of Das and has had other notable roles in La Hija del Mariachi and the telenovela Without Breasts There is No Paradise.
He made his debut in 1995 in the role of Vicente Marquez on Hard Times.
In 2008, he won Best Actor in a villain role at the India Catalina Awards for his work in La Hija del Mariachi.
He was in a relationship with Erika Rodriguez.
He is a Colombian-born actor, like actress Sofia Vergara of Modern Family fame.
Dozens of rockets as well as mortar rounds were fired from the Gaza Strip Wednesday at southern Israeli towns. Hours later the IDF fired tank shells at what it termed “terror targets” in the Strip.
No Israeli casualties were initially reported in the largest attack from the Strip since Operation Pillar of Defense in late 2012.
Most of the rockets fell in open areas, though one rocket landed in the center of Sderot. In all, damage was reported in two impact sites.
Four consecutive alarms were heard by residents of Sderot as well as the nearby Eshkol Regional Council. Residents throughout the entire region were instructed to remain in protected areas. Explosions were heard in Sderot, Netivot and surrounding areas.
Some of the rockets were reported to have been launched from the east Gaza City neighborhood of Shuja’iyya.
IAF jets were reportedly flying over the Strip, apparently in an effort to thwart further rocket launches.
The army said three of the rockets were intercepted by the Iron Dome defense system.
Officers in the southern command were set to convene Wednesday evening to discuss possible responses to the attack.
Meanwhile, security agencies in the Strip evacuated their headquarters for fear of Israeli reprisal, Sky News reported.
An alarm was also heard in Beersheba, although it was unclear whether a rocket was actually fired at the major city, which is further away from the Strip than the smaller towns of Sderot and Netivot.
A trail of smoke from rockets fired by Palestinians from Gaza toward Israel is seen above Gaza City on Wednesday, March 12, 2014 (photo credit: AP/Adel Hana)
“I heard some explosions,” Ya’akov Shoshani, a resident of Netivot, told Channel 2. “I exited the taxi I was in and clung to the wall for safety.”
Rafi, a resident of Sderot, told the TV channel that a rocket had nearly hit his home. “A Kassam [rocket] fell next to my house, right on the sidewalk. Luckily there were no casualties.”
The Islamic Jihad took responsibility for firing some of the rockets at Israel in a statement issued Wednesday, minutes after the rocket attack.
Following the attack, Prime Minister Benjamin.
Avi Issacharoff and AFP contributed to this report.
Azure Event Hubs metrics in Azure Monitor
Event Hubs metrics give you the state of Event Hubs resources in your Azure subscription. With a rich set of metrics data, you can assess the overall health of your event hubs not only at the namespace level, but also at the entity level. These statistics can be important as they help you to monitor the state of your event hubs.
Access metrics
Azure Monitor provides multiple ways to access metrics. You can either access metrics through the Azure portal, or use the Azure Monitor APIs (REST and .NET) and analysis solutions such as Log Analytics and Event Hubs. For more information, see Monitoring data collected by Azure Monitor.
Metrics are enabled by default, and you can access the most recent 30 days of data. If you need to keep data for a longer period of time, you can archive metrics data to an Azure Storage account; this setting can be configured in diagnostic settings in Azure Monitor. To display metrics filtered to the scope of the event hub, select the event hub and then select Metrics.
For metrics supporting dimensions, you must filter with the desired dimension value as shown in the following example:
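A hypothetical sketch of such a dimension filter using the `azure-monitor-query` Python SDK (the resource ID and the entity name `myhub` are placeholders; the SDK call is kept inside a function so the argument-building part runs without Azure):

```python
from datetime import timedelta

def entity_metrics_args(entity_name, metrics=("IncomingMessages", "OutgoingMessages")):
    """Query arguments for metrics scoped to one event hub (entity)."""
    return {
        "metric_names": list(metrics),
        "filter": f"EntityName eq '{entity_name}'",  # dimension filter
        "granularity": timedelta(minutes=1),         # Event Hubs' 1-minute granularity
    }

def query_entity_metrics(namespace_resource_id, entity_name):
    # Requires the azure-identity and azure-monitor-query packages plus
    # valid credentials; imported lazily so the sketch above stays
    # usable without them installed.
    from azure.identity import DefaultAzureCredential
    from azure.monitor.query import MetricsQueryClient
    client = MetricsQueryClient(DefaultAzureCredential())
    return client.query_resource(namespace_resource_id,
                                 **entity_metrics_args(entity_name))
```

The same `EntityName eq '...'` filter string works in the portal's metrics blade and in the REST API's `$filter` parameter.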
Billing
Using metrics in Azure Monitor is currently free. However, if you use other solutions that ingest metrics data, you may be billed by these solutions. For example, you are billed by Azure Storage if you archive metrics data to an Azure Storage account. You are also billed by Azure if you stream metrics data to Azure Monitor logs. The time granularity of Event Hubs metrics is 1 minute.
Azure Event Hubs metrics
For a list of metrics supported by the service, see Azure Event Hubs
Note
When a user error occurs, Azure Event Hubs updates the User Errors metric, but doesn't log any other diagnostic information. Therefore, you need to capture details on user errors in your applications. Or, you can also convert the telemetry generated when messages are sent or received into application insights. For an example, see Tracking with Application Insights.
Azure Monitor integration with SIEM tools
Routing your monitoring data (activity logs, diagnostic logs, and so on) to an event hub with Azure Monitor enables you to easily integrate with Security Information and Event Management (SIEM) tools. For more information, see the following articles/blog posts:
- Stream Azure monitoring data to an event hub for consumption by an external tool
- Introduction to Azure Log Integration
- Use Azure Monitor to integrate with SIEM tools
In the scenario where an SIEM tool consumes log data from an event hub, if you see no incoming messages or you see incoming messages but no outgoing messages in the metrics graph, follow these steps:
- If there are no incoming messages, it means that the Azure Monitor service isn't moving audit/diagnostics logs into the event hub. Open a support ticket with the Azure Monitor team in this scenario.
- If there are incoming messages, but no outgoing messages, it means that the SIEM application isn't reading the messages. Contact the SIEM provider to determine whether the configuration of the event hub for those applications is correct.
Next steps
- See the Azure Monitoring overview.
- Retrieve Azure Monitor metrics with .NET sample on GitHub.
For more information about Event Hubs, visit the following links:
TITLE: My fun conjecture about linear independence
QUESTION [6 upvotes]: In the $\mathbb{R}^n$ vector space, there are $m$ distinct vectors $v_i$ ($1\leq i\leq m$)
such that each component has value 0 or 1.
Let $A_i$ be the set of $j$'s where $j$-th component of $v_i$ is 1.
Also, for each $i \neq j$, $A_i$ and $A_j$ have exactly $k$ elements in common, where $k$ is a given integer, $1\leq k <n$.
For example, when $n=3, k=1$. $v_1=(1,1,0), v_2=(1,0,1), v_3=(0,1,1)$ satisfy those conditions since $A_1=\{1,2\},A_2=\{1,3\},A_3=\{2,3\}$.
My conjecture is : those $v_i$'s are linearly independent.
With some rough programming, this conjecture was true when $n \leq 10$.
I tried to prove this conjecture with induction on $k$, but I failed.
*Some people misunderstood the question.
The actual question is: for given $n,m,k$, is every family of vectors satisfying the above conditions linearly independent?
Can you prove or disprove this conjecture?
REPLY [5 votes]: Your condition says that $\langle v_i,v_j\rangle = k$ when $i\ne j$, and let's write $\langle v_i,v_i\rangle = n_i$. Let $A$ be the matrix having the $v_i$ as columns. Then
$$A^TA = \begin{pmatrix}n_1 & k & k & \cdots & k \\
k & n_2 & k & \cdots & k \\
& & \ddots & & \\
k & k & k & \cdots & n_m
\end{pmatrix}$$
Note that at most one of the $n_i$ can be equal to $k$, because any two of them being equal to $k$ would force the corresponding vectors to be equal. Also note that we have to explicitly exclude the case that $v_i = 0$ for some $i$. Otherwise we are done: $A^TA$ is clearly invertible, so $A$ cannot have a non-trivial kernel, i.e. no non-trivial linear combination of its columns, the $v_i$, can give $0$.
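The $n=3$, $k=1$ example from the question can be checked numerically along the lines of this answer. This is just a sanity check we add for illustration, not part of the proof; the function name is ours.

```python
def gram_det3(vectors):
    """Determinant of the 3x3 Gram matrix A^T A for three 0/1 vectors."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    g = [[dot(u, v) for v in vectors] for u in vectors]
    # Cofactor expansion along the first row of the 3x3 matrix.
    return (g[0][0] * (g[1][1] * g[2][2] - g[1][2] * g[2][1])
            - g[0][1] * (g[1][0] * g[2][2] - g[1][2] * g[2][0])
            + g[0][2] * (g[1][0] * g[2][1] - g[1][1] * g[2][0]))

# v1=(1,1,0), v2=(1,0,1), v3=(0,1,1): Gram matrix [[2,1,1],[1,2,1],[1,1,2]],
# determinant 4 != 0, so the three vectors are linearly independent.
```

Here $n_1=n_2=n_3=2$ and $k=1$, matching the matrix $A^TA$ displayed above.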
Invicta Men's Pro Diver Watch
- Catalog #: 1681856722
- Automatic Movement
- Case diameter: 46mm
- Sapphire Crystal
- Gold case with Gold Tone band
- Water-resistant to 200 Meters / 656 Feet / 20 ATM
With a bold and masculine design, this Invicta diving watch handles the toughest of climates with versatility and ease.
- Display
- Analog Display
- Blue Dial
- Movement
- Automatic
- 200 Meters / 656 Feet / 20 ATM Water-resistant
- Case
- Round Shaped Gold
- Men's Size
- 46mm Wide (without crown)
- 14mm Thick
- Band
- Deployment Buckle
- Mens Standard Length
- Features
- Round Shaped
- Luminous
- Water-resistant
- Analog Display
Guaranteed Authentic: Invicta Men's Pro Diver Watch #4619
- GTIN 13 (UPC):
- GTIN 14 (UPC):
- Product Identifier
- Unique Product Number: 1681856722
- Invicta Men's Pro Diver 4619 Blue Gold Tone Automatic Watch
Can. during his tour in Toronto.
The source claims that: “Justin and my friend Michelle kissed behind the Four Seasons Hotel downtown a few months ago while he was in town for a concert. I didn’t feel right about releasing it before, but now its fine cuz its been a while.”
It seems that the Justin Bieber tours can offer fans more than a professional and show-stopping performance. Who could have thought that the almost inaccessible young star has a similar close relationship with his fans. Indeed, fans will definitely have a reaction to this.
Tabloids will spread the news and photo around the world, offering us the chance to complete the profile of this young Casanova, who arouses controversial feelings both in the media and among the millions of fans. Now, it's certain: girls have to deal not only with his affection for Selena Gomez but also with a few other ardent fans who might get a bit closer to their idol.
Image courtesy to Zacktaylor.ca
TITLE: Gauge transformations with varying phase give us conservation of the charge density. Hence charged particles cannot move?
QUESTION [3 upvotes]: I stumbled upon the following paragraph in Quark confinement and Topology of gauge theories by Polyakov
"Gauge invariance with constant phase $\Psi \to e^{i \alpha}$ leads to conservation of the
total charge. Gauge transformations with a varying phase $$\Psi \to e^{i \alpha(x)}$$ will give
us conservation of the charge density. But this in turn means that the
charged particle cannot move. The only thing which saves the electron
from this fatal immobility is the degeneracy of the vacuum in QED,
that is, its non-invariance under gauge transformations."
Are these statements correct? For example, I've never heard before that charge density is conserved due to local gauge invariance. Or that the QED vacuum isn't invariant under gauge transformations.
(The paper has almost 1500 citations, so I suppose his statements are correct. But I have never seen them anywhere else or any concrete calculations which backs them up.)
REPLY [1 votes]: As written, the claim is wrong.
Noether's theorem applied to gauge symmetries is more properly Noether's second theorem, and results in off-shell identities as opposed to the on-shell conservation laws of Noether's first theorem for global symmetries. That these identities are off-shell is another manifestation of gauge symmetries being a symptom of redundancy in our description of the physical system - off-shell identities are nothing else than dependencies between our chosen variables that have nothing to do with the dynamics of the system, and in principle one could use these identities to reduce the total number of variables, i.e. eliminate the redundancy.
As Qmechanic elucidates in detail in this excellent answer, the "second Noether current" vanishes off-shell and its charge is identically zero under reasonable assumptions, and for electrodynamics it is the trivial statement that $\partial_\mu \partial_\nu F^{\mu\nu} = 0$.
As for the claim that the QED vacuum is non-invariant under gauge transformations, that is of course also wrong. All physical states are invariant under gauge transformations by definition of a physical state, and vacuum should probably be a physical state. Even when the "gauge symmetry is spontaneously broken" (which is a phrase you definitely still hear even today), what is really being broken is its global part (its local part cannot be broken, this is Elitzur's theorem). See also this excellent answer by Dominic Else.
The Daily Telegraph magazine - 14 May 1971
£32.99
Good condition for age - yellowing to page edges
- POTTING CLAY PIGEONS - Today sees the start of the British Open clay pigeon shooting championships. A Magazine team, led by Anthony Haden-Guest, looked at why so many more people today want to take pot shots at little clay saucers (2½ pages)
- BLACK EXILES IN THE VELD - The armed police in the photograph are evicting black Africans from their homes in Harrismith, Orange Free State. Lorries carrying the Africans thus dispossessed were driven off to a site chosen by the Government. The woman and child pictured sit among their possessions after being removed from their house in an African township in Witwatersrand. Incidents like these are the result of the South African Government's policy of moving the black Africans from their homes and "consolidating" them in "Bantustans" and "homelands" like Limehill in Natal. The ultimate aim is complete elimination of a "chessboard" distribution of black and white. In the next ten years, over half a million Africans in the Eastern Cape alone will have been resettled. Cosmas Desmond, a Franciscan priest, has become a crusader for the right of the Africans to live on their own land. This is his account of the South African solution to the Black Spots (5¾ pages)
- STANDING THE WORLD ON ITS HEAD - Enthusiasts are rediscovering the work of a Dutch-American cartoonist who, 68 years ago, developed a newspaper strip whose complexities would tax a computer. Ian Ball joins the search for Gustave Verbeek and the odd method by which he contrived a strip which could be read from top and from bottom. Some modern cartoonists also give us their opinions (3¼ pages)
- RE-BIRTH OF ANCIENT MONUMENT - Les Halles, the maze of narrow streets that has been Paris's market-place for centuries, is dead. Yet it has never been more full of life. Antique fairs, sex shops, boutiques and even a skating rink have taken the place of fruit and vegetable stalls. And Sean Hignett has found that conservationists seem to be winning their battle against a city authority intent on re-development (5¼ pages)
- THE SAD DEMISE OF CRENSHAW, APPLEGATE AND PENFOLD - A story by Calvin Trillin (2 pages)
- ...AND MAN REMADE THE FIRMAMENT - Adrian Berry concludes his science series with a report on the most fantastic plan yet - to redesign the Solar System in order to accommodate an expanding civilisation. The scheme involves commandeering thousands of tiny planets or "asteroids" and dismantling the giant planet Jupiter. The inspiration behind it comes from a distinguished Anglo-American scientist, Professor Freeman J. Dyson (3¾ pages)
- LONG-TERM ROAD TEST No.5 - Volkswagen 1600 TL fastback (1 page)
TITLE: Domain of solution of differential equation $y’=\frac{1}{(y+7)(t-3)}$
QUESTION [5 upvotes]: If we are given the initial-value problem $$\frac{dy}{dt}=\frac{1}{(y+7)(t-3)}, y(0)=0$$
I want to solve said initial value problem & state the domain of the solution. I also want to observe what happens when $t$ approaches the limits of the solution’s domain.
Via separation of variables, one obtains: $$\implies(y+7) \space dy = \frac{dt}{t-3}\implies\frac{1}{2}y^2+7y=\ln|t-3|+c_1$$ $$\iff y^2+14y+49=2\ln|t-3|+(2c_1+49)$$
Now call $C=2c_1+49$, then $$(y+7)^2=2\ln|t-3|+C\implies y(t) = \pm\sqrt{2\ln|t-3|+C}-7$$
Substituting the initial condition:
$$0=\pm\sqrt{2\ln|0-3|+C}-7\iff 7 = \pm\sqrt{2\ln(3)+C}$$
This shows we must choose the positive square root in order for our solution $y(t)$ to pass through the initial condition & solve the IVP. Then $$C=49-2\ln(3)$$ so that
$$y(t)=\sqrt{2\ln\left|\frac{t-3}{3}\right|+49}-7$$
We know from the original differential equation that $y(t)\neq-7$ & $t\ne3$. Thus: $$-7\neq \sqrt{2\ln\left|\frac{t-3}{3}\right|+49}-7 \iff\ln\left|\frac{t-3}{3}\right|\neq-\frac{49}{2}\iff t\neq\pm 3\exp\left(-\frac{49}{2}\right)+3$$
Does this tell us that the domain of the solution has to be $(-\infty,-3\exp\left(-\frac{49}{2}\right)+3)$ in order for $t=0$ to be on the domain of this solution (so the solution passes through the initial condition)?
Would we just say $y(t)\to-7$ as $t\to-3\exp\left(-\frac{49}{2}\right)+3$, since the square root tends to $0$ there?
Then finally $y(t)=\sqrt{2\ln\left(1-\frac{t}{3}\right)+49}-7$, for $t\in(-\infty, -3\exp\left(-\frac{49}{2}\right)+3)$.
REPLY [0 votes]: The equation in question is $$y'(t)=\frac1{[y(t)+7](t-3)}.$$ Thus, there is a singularity at $t=3,$ which suggests that the solutions will have a domain that is a subset of $(-\infty,3)\cup(3,\infty).$
To solve the equation, we consider $$[y(t)+7]y'(t)=\frac1{t-3}.$$ Notice that $$\left(\frac{y^2}2+7y\right)'(t)=[y(t)+7]y'(t).$$ Therefore, we have that $$\frac{y(t)^2}2+7y(t)=\begin{cases}\ln(3-t)+A&t\lt3\\\ln(t-3)+B&t\gt3\end{cases}.$$
Given the initial condition $y(0)=0,$ one has that $$\frac{y(0)^2}2+7y(0)=0=\ln(3)+A,$$ implying $A=-\ln(3).$ Therefore, $$\frac{y(t)^2}2+7y(t)=\begin{cases}\ln(3-t)-\ln(3)&t\lt3\\\ln(t-3)+B&t\gt3\end{cases},$$ which is equivalent to $$y(t)^2+14y(t)=\begin{cases}2\ln(3-t)-2\ln(3)&t\lt3\\2\ln(t-3)+2B&t\gt3\end{cases},$$ which is equivalent to $$[y(t)+7]^2=\begin{cases}2\ln(3-t)-2\ln(3)+49&t\lt3\\2\ln(t-3)+2B+49&t\gt3\end{cases}.$$
Here, it gets complicated. It is required that $$2\ln(3-t)-2\ln(3)+49\geq0$$ and $$2\ln(t-3)+2B+49\geq0.$$ This implies, respectively, that $$\ln(3-t)\geq\ln(3)-\frac{49}2$$ and $$\ln(t-3)\geq{B}-\frac{49}2.$$ Therefore, $$3-t\geq3\exp\left(-\frac{49}2\right)$$ and $$t-3\geq\exp\left(B-\frac{49}2\right)=C\exp\left(-\frac{49}2\right)$$ with $C\gt0.$ Therefore, $$t\leq3-3\exp\left(-\frac{49}2\right)$$ and $$t\geq3+C\exp\left(-\frac{49}2\right).$$ However, at the endpoints, the function is not differentiable, so the domain of the function is $\left(-\infty,3-3\exp\left(-\frac{49}2\right)\right)\cup\left(3+C\exp\left(-\frac{49}2\right),\infty\right).$
With this in mind, the solutions to the equation are given by the two families $$y(t)=-7-\sqrt{f(t)}$$ and $$y(t)=-7+\sqrt{f(t)},$$ where $f$ is the family of functions $$f(t)=\begin{cases}2\ln(3-t)-2\ln(3)+49&t\lt3\\2\ln(t-3)+2B+49&t\gt3\end{cases}.$$
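The closed-form solution on the branch $t<3$ can be sanity-checked numerically. This is an illustrative check we add, not part of the original thread; it verifies the initial condition and compares a finite-difference derivative against the right-hand side of the ODE.

```python
import math

def y(t):
    """Solution y(t) = sqrt(2*ln((3-t)/3) + 49) - 7 on the branch t < 3."""
    return math.sqrt(2 * math.log((3 - t) / 3) + 49) - 7

def rhs(t):
    """Right-hand side 1 / ((y+7)(t-3)) of the ODE, evaluated along y."""
    return 1 / ((y(t) + 7) * (t - 3))

# Initial condition: y(0) = sqrt(49) - 7 = 0.
assert abs(y(0)) < 1e-12

# Central finite difference approximates y'(0); it should match the ODE,
# whose exact value at t=0 is 1/(7 * (-3)) = -1/21.
h = 1e-6
deriv = (y(h) - y(-h)) / (2 * h)
assert abs(deriv - rhs(0)) < 1e-6
```

Both assertions pass, consistent with the separation-of-variables computation above.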
PlayStation Classic
Price: S$139.00
Manufacturer: PlayStation
Estimated Release Date: 3rd December 2018
Item Description
The design of the PlayStation®Classic resembles PlayStation®, including the button layout, as well as controllers and outer package*2, but in a miniature size. It is 45% smaller on the sides and 80% smaller in volume than the original console. Pre-loaded*3 with 20 PlayStation® games*4 such as Final Fantasy® VII (SQUARE ENIX Co., LTD.), Jumping Flash! (SIE), R4 RIDGE RACER TYPE 4, Tekken 3 (BANDAI NAMCO Entertainment Inc.), and Wild Arms (SIE). PlayStation®Classic is perfect for anyone including current PlayStation® fans as well as nostalgic fans that enjoyed playing the original PlayStation® and gamers new to PlayStation® who want to experience classic PlayStation® games from the 1990s.
Contents:
- PlayStation®Classic x 1
- Controller x 2
- HDMI™ Cable x 1
- USB Cable*4 x 1
- Printed Materials
Additional info:
*2 Design of outer box differs depending on which country / region the product is sold in.
*3 The title line-up other than the listed five titles may differ depending on which country / region the product is sold in.
*4 Software titles cannot be added on via download or any other way. Peripherals for PlayStation®, such as Memory Cards, cannot be used with PlayStation®Classic.
TITLE: Reference for "topos obtained by adjoining an indeterminate set' theorem
QUESTION [9 upvotes]: From Lawvere's Continuously variable sets; algebraic geometry = geometric logic:
The following illuminating fact about topoi (long known for the case
$\mathsf S$=constant sets) was (conjectured by me and) proved by Gavin
Wraith for any base topos having a natural-numbers object.
Theorem 6. Suppose $\mathsf S$ is a topos having a natural-numbers object. Then there is a topos $\mathsf S[T]$ over $\mathsf S$
'obtained by adjoining an indeterminate set $T$' such that for any
topos $\mathsf X$ over $\mathsf S$ there is an equivalence
$$\mathsf{Topos}_{/\mathsf S}(\mathsf X,\mathsf
S[T])\overset{\simeq}{\longrightarrow}\mathsf X$$of categories
(defined by $f\leadsto f^\ast T$). Specifically, $\mathsf S[T]$ is the
(internal) functor category $\mathsf S^{\mathbb S_0}$, where $\mathbb
S_0$ is a category object in $\mathsf S$ which may be interpreted as
the category of finite sets with $\mathbb
S_0\overset{T}{\longrightarrow} \mathsf S$ interpreted as the full
inclusion.
Where can I find a reference for this theorem and its proof?
Suppose $\mathsf S=\mathsf{Set}$. What is $T$? Could it be "nothing"? That is, could the equivalence be true without writing $T$ at all? What's the intuition?
What are some interesting consequences of this theorem?
REPLY [7 votes]: Here is some intuition. You can think of the opposite of the 2-category of Grothendieck topoi (that is, a morphism $f : X \to Y$ between topoi is an exact left adjoint) as a categorification of the category of commutative rings, where
Colimits categorify addition,
Finite limits categorify multiplication, and
Sheaves of sets on spaces categorify functions.
(A much more precise statement is that topoi with these morphisms categorify frames, but commutative rings are more familiar in a useful way.) Note, for example, that because topoi are cartesian closed, finite products distribute over colimits.
In this 2-category $\text{Set}$ is the initial object, so it categorifies the commutative ring $\mathbb{Z}$; the whole theory is "$\text{Set}$-linear." This may be clearer if you think of $\text{Set}$ as the topos of sheaves on a point.
$\text{Set}[T]$ then categorifies the polynomial ring $\mathbb{Z}[T]$ - it's the free topos on an object - and what the theorem says is that $\text{Set}[T]$ exists and can be explicitly realized as the functor category $[\text{FinSet}, \text{Set}]$. Loosely speaking, if $F$ is such a functor, the values $F(n)$ it takes on sets of size $n$ (which I am writing just "$n$" by abuse of notation) are the "coefficients" of the corresponding "polynomial." This can be made precise by writing every such functor $F$ as a weighted colimit of representable presheaves on $\text{FinSet}^{op}$ in the usual way, which here looks like (after messing with some $^{op}$s)
$$F(X) \cong \int^{n \in \text{FinSet}} F(n) \times X^n$$
where $X \in \text{FinSet}$. This coend also describes more generally how to compute the image of $F$ under the exact left adjoint $f : \text{Set}[T] \to C$ where $C$ is a topos and $f$ classifies an object $X \in C$; here $F(n) \times X^n$ should be understood as the tensoring, so it refers to $\coprod_{F(n)} X^n$.
(What this shows is that $S[T]$ is somewhat misleading notation for this topos, if the notion of morphism between topoi you're working with is geometric morphisms; it conflates the algebraic (topoi as "commutative rings") and geometric (topoi as "affine schemes") points of view. It would be nice to have two different words for topoi considered in these two senses, analogous to the distinction between affine schemes and commutative rings, and the distinction between locales and frames.)
To start understanding this result, the first observation is that $\text{FinSet}^{op}$ itself has an interesting universal property: it's the free category with finite limits on an object. This is a categorification of the free commutative monoid on a point, namely $\mathbb{N}$, which we then take the free abelian group / monoid ring on to get the free commutative ring on a point; this gets categorified to taking presheaves.
Once you believe this universal property then as mentioned in the comments the desired result follows from Diaconescu's theorem, which you can think of as a categorification of the universal property of the monoid ring.
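As a small illustration (our addition, not part of the original answer): the representable presheaf $y(\mathbf 2) : S \mapsto \mathrm{Hom}_{\text{FinSet}}(\mathbf 2, S) \cong S \times S$ plays the role of the monomial $T^2$. By the co-Yoneda lemma the coend collapses,
$$\int^{n \in \text{FinSet}} \mathrm{Hom}(\mathbf 2, n) \times X^n \cong X^2,$$
so the exact left adjoint classifying an object $X$ sends this $F$ to $X \times X$ — exactly evaluation of the polynomial $T^2$ at $X$, in line with the "coefficients" reading above.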
A generalization of this perspective, where we replace cartesian monoidal categories with symmetric monoidal categories, is sometimes called "2-affine algebraic geometry," and is also a generalization of Tannaka duality; see for example Chirvasitu and Johnson-Freyd's The fundamental pro-groupoid of an affine 2-scheme or Brandenburg's Tensor categorical foundations of algebraic geometry.
Electric Fuel Pump Fuel Injection OEM 280Z 280ZX 75-83
Part # : 200-696
Datsun 280Z electric fuel pump.
$265.00 (except turbo).
\begin{document}
\begin{abstract}
We introduce analogues of the Hopf algebra of Free quasi-symmetric functions
with bases labelled by colored permutations.
As applications, we recover in a simple way the descent algebras associated
with wreath products $\Gamma\wr\SG_n$ and the corresponding generalizations of
quasi-symmetric functions. Finally, we obtain Hopf algebras of colored parking
functions, colored non-crossing partitions and parking functions of type $B$.
\end{abstract}
\maketitle
\section{Introduction}
The Hopf algebra of Free quasi-symmetric functions $\FQSym$ \cite{NCSF6}
is a certain algebra of noncommutative polynomials associated with the
sequence $(\SG_n)_{n\geq0}$ of all symmetric groups.
It is connected by Hopf homomorphisms to several other important algebras
associated with the same sequence of groups : Free symmetric functions (or
coplactic algebra) $\FSym$ \cite{PR,NCSF6}, Non-commutative symmetric
functions (or descent algebras) $\NCSF$ \cite{NCSF1}, Quasi-Symmetric
functions $\QSym$ \cite{Ge84}, Symmetric functions $\sym$, and also, Planar
binary trees $\PBT$ \cite{LR1,HNT}, Matrix quasi-symmetric functions $\MQSym$
\cite{NCSF6,Hiv}, Parking functions $\PQSym$ \cite{KW,NT}, and so on.
Among the many possible interpretations of $\sym$, we can mention the
identification as the representation ring of the tower of algebras
\begin{equation}
\C \to \C\SG_1 \to \C\SG_2 \to \cdots \to \C\SG_n \to \cdots,
\end{equation}
that is
\begin{equation}
\sym \simeq \oplus_{n\geq0} R(\C\SG_n),
\end{equation}
where $R(\C\SG_n)$ is the vector space spanned by isomorphism classes of
irreducible representations of $\SG_n$, the ring operations being induced by
direct sum and outer tensor product of representations \cite{Mcd}.
Another important interpretation of $\sym$ is as the support
of Fock space representations of
various infinite dimensional Lie algebras, in particular as the level $1$
irreducible highest weight representations of $\glchap_\infty$ (the infinite
rank Kac-Moody algebra of type $A_\infty$, with Dynkin diagram $\Z$,
see~\cite{Kac}).
The analogous level $l$ representations of this algebra can also be naturally
realized with products of $l$ copies of $\sym$, or as symmetric functions in
$l$ independent sets of variables
\begin{equation}
(\sym)^{\otimes l} \simeq \sym(X_0 ; \ldots ; X_{l-1}) =: \sym^{(l)},
\end{equation}
and these algebras are themselves the representation rings of wreath product
towers $(\Gamma\wr\SG_n)_{n\geq0}$, $\Gamma$ being a group with $l$ conjugacy
classes \cite{Mcd} (see also \cite{Zel,Wang}).
We shall therefore call for short $\sym(X_0 ; \ldots ; X_{l-1})$ the algebra
of symmetric functions of level $l$.
Our aim is to associate with $\sym^{(l)}$ analogues of the various Hopf
algebras mentionned at the beginning of this introduction.
We shall start with a level $l$ analogue of $\FQSym$, whose bases are labelled
by $l$-colored permutations. Imitating the embedding of $\NCSF$ in $\FQSym$,
we obtain a Hopf subalgebra of level $l$ called $\NCSF^{(l)}$, which turns out
to be dual to Poirier's quasi-symmetric functions, and whose homogenous
components can be endowed with an internal product, providing an analogue
of Solomon's descent algebras for wreath products.
The Mantaci-Reutenauer descent algebra arises as a natural Hopf subalgebra of
$\NCSF^{(l)}$ and its dual is computed in a straightforward way by means of an
appropriate Cauchy formula.
Finally, we introduce a Hopf algebra of colored parking functions,
and use it to define Hopf algebras structures on parking functions
and non-crossing partitions of type $B$.
{\it Acknowledgements} This research has been partially supported
by EC's IHRP Programme, grant HPRN-CT-2001-00272, ``Algebraic Combinatorics
in Europe".
\section{Free quasi-symmetric functions of level $l$}
\subsection{$l$-colored standardization}
We shall start with an $l$-colored alphabet
\begin{equation}
A = A^0 \sqcup A^1 \sqcup \cdots \sqcup A^{l-1},
\end{equation}
such that all $A^i$ are of the same cardinality $N$, which will be assumed to
be infinite in the sequel.
Let $C$ be the alphabet $\{c_0,\ldots,c_{l-1}\}$ and $B$ be the auxiliary
ordered alphabet $\{1,2,\ldots,N\}$ (the letter $C$ stands for \emph{colors}
and $B$ for \emph{basic}) so that $A$ can be identified to the cartesian
product $B\times C$:
\begin{equation}
A \simeq B \times C = \{ (b,c), b\in B,\ c\in C \}.
\end{equation}
Let $w$ be a word in $A$, represented as $(v,u)$
with $v\in B^*$ and $u\in C^*$. Then the \emph{colored standardized word}
$\cstd(w)$ of $w$ is
\begin{equation}
\cstd(w) := (\Std(v),u),
\end{equation}
where $\Std(v)$ is the usual standardization on words.
Recall that the standardization process sends a word $w$ of length $n$ to a
permutation $\Std(w)\in\SG_n$ called the \emph{standardized} of $w$ defined as
the permutation obtained by iteratively scanning $w$ from left to right, and
labelling $1,2,\ldots$ the occurrences of its smallest letter, then numbering
the occurrences of the next one, and so on. Alternatively, $\Std(w)$ is the
permutation having the same inversions as $w$.
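For example, with $a<b<c$, one has $\Std(baca)=3142$: the two occurrences of
$a$ receive the labels $1$ and $2$ from left to right, then $b$ receives $3$
and $c$ receives $4$. Hence $\cstd\big((baca,u)\big)=(3142,u)$ for any color
word $u\in C^4$.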
\subsection{$\FQSym^{(l)}$ and $\FQSym^{(\Gamma)}$}
A \emph{colored permutation} is a pair $(\sigma,u)$, with $\sigma\in\SG_n$ and
$u\in C^n$, the integer $n$ being the \emph{size} of this permutation.
\begin{definition}
The \emph{dual free $l$-quasi-ribbon} $\G_{\sigma,u}$ labelled by a
colored permutation $(\sigma,u)$ of size $n$ is the noncommutative
polynomial
\begin{equation}
\G_{\sigma,u} := \sum_{w\in A^n ; \cstd(w)=(\sigma,u)} w \quad\in\Z\free{A}.
\end{equation}
\end{definition}
Recall that the \emph{convolution} of two permutations $\sigma$ and $\mu$ is
the set $\sigma\convol\mu$ (identified with the formal sum of its elements)
of permutations $\tau$ such that the standardized word of the $|\sigma|$ first
letters of $\tau$ is $\sigma$ and the standardized word of the remaining
letters of $\tau$ is $\mu$ (see~\cite{Reu}).
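For example, $12\convol 1 = 123 + 132 + 231$: these are exactly the
permutations of $\SG_3$ whose first two letters standardize to $12$.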
\begin{theorem}
\label{prodG}
Let $(\sigma',u')$ and $(\sigma'',u'')$ be colored permutations.
Then
\begin{equation}
\G_{\sigma',u'}\,\,\G_{\sigma'',u''} = \sum_{\sigma\in \sigma'\convol\sigma''}
\G_{\sigma,u'\cdot u''},
\end{equation}
where $w_1\cdot w_2$ is the word obtained by concatenating $w_1$ and $w_2$.
Therefore, the dual free $l$-quasi-ribbons span a $\Z$-subalgebra of the free
associative algebra.
\end{theorem}
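For example,
\begin{equation}
\G_{1,2}\,\,\G_{12,01} = \G_{123,201} + \G_{213,201} + \G_{312,201},
\end{equation}
since the convolution $1\convol 12$ consists of the three permutations
$123$, $213$ and $312$.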
Moreover, one defines a coproduct on the $\G$ functions by
\begin{equation}
\label{deltaG}
\Delta \G_{\sigma,u} := \sum_{i=0}^n
\G_{(\sigma,u)_{[1,i]}} \otimes \G_{(\sigma,u)_{[i+1,n]}},
\end{equation}
where $n$ is the size of $\sigma$ and $(\sigma,u)_{[a,b]}$ is the standardized
colored permutation of the pair $(\sigma',u')$ where $\sigma'$ is the subword
of $\sigma$ containing the letters of the interval $[a,b]$, and $u'$ the
corresponding subword of $u$.
For example,
\begin{equation}
\begin{split}
\Delta \G_{3142,2412} =
&\ 1\otimes \G_{3142,2412} + \G_{1,4}\otimes \G_{231,212} +
\G_{12,42}\otimes \G_{12,21} \\
& + \G_{312,242}\otimes \G_{1,1} + \G_{3142,2412}\otimes 1.
\end{split}
\end{equation}
\begin{theorem}
The coproduct is an algebra homomorphism, so that $\FQSym^{(l)}$ is a
graded bialgebra. Moreover, it is a Hopf algebra.
\end{theorem}
\begin{definition}
The \emph{free $l$-quasi-ribbon} $\F_{\sigma,u}$ labelled by a colored
permutation $(\sigma,u)$ is the noncommutative polynomial
\begin{equation}
\F_{\sigma,u} := \G_{\sigma^{-1},u\cdot\sigma^{-1}},
\end{equation}
where the action of a permutation on the right of a word permutes the
positions of the letters of the word.
\end{definition}
For example,
\begin{equation}
\F_{3142,2142} = \G_{2413,1422}\,.
\end{equation}
The product and coproduct of the $\F_{\sigma,u}$ can be easily described in
terms of shifted shuffle and deconcatenation of colored permutations.
Let us define a scalar product on $\FQSym^{(l)}$ by
\begin{equation}
\langle \F_{\sigma,u} , \G_{\sigma',u'} \rangle :=
\delta_{ \sigma,\sigma'} \delta_{u,u'},
\end{equation}
where $\delta$ is the Kronecker symbol.
\begin{theorem}
For any $U,V,W\in\FQSym^{(l)}$,
\begin{equation}
\langle \Delta U, V\otimes W \rangle =
\langle U, V W \rangle,
\end{equation}
so that, $\FQSym^{(l)}$ is self-dual: the map
$\F_{\sigma,u} \mapsto {\G_{\sigma,u}}^*$ is an isomorphism from
$\FQSym^{(l)}$ to its graded dual.
\end{theorem}
\begin{note}{\rm
Let $\bij$ be any bijection from $C$ to $C$, extended to words by
concatenation. Then if one defines the free $l$-quasi-ribbon as
\begin{equation}
\F_{\sigma,u} := \G_{\sigma^{-1},\bij(u)\cdot\sigma^{-1}},
\end{equation}
the previous theorems remain valid since one only permutes the labels of
the basis $(\F_{\sigma,u})$.
Moreover, if $C$ has a group structure, the colored permutations
$(\sigma,u)\in\SG_n\times C^n$ can be interpreted as elements of the
semi-direct product $H_n := \SG_n\ltimes C^n$ with multiplication rule
\begin{equation}
(\sigma ; c_1,\ldots,c_n) \cdot (\tau ; d_1,\ldots,d_n) :=
(\sigma\tau ; c_{\tau(1)}d_1, \ldots, c_{\tau(n)}d_n).
\end{equation}
In this case, one can choose $\bij(\gamma):=\gamma^{-1}$ and define the scalar
product as before, so that the adjoint basis of the $(\G_{h})$ becomes
$\F_h := \G_{h^{-1}}$.
In the sequel, we will be mainly interested in the case $C:=\Z/l\Z$, and we
will indeed make that choice for $\bij$.
}
\end{note}
\subsection{Algebraic structure}
Recall that a permutation $\sigma$ of size $n$ is \emph{connected}
\cite{MR,NCSF6} if, for any $i<n$, the set $\{\sigma(1),\ldots,\sigma(i)\}$ is
different from $\{1,\ldots,i\}$.
We denote by $\conn$ the set of connected permutations, and by
$c_n:=|\conn_n|$ the number of such permutations in $\SG_n$. For later
reference, we recall that the generating series of $c_n$ is
\begin{equation*}
c(t) := \sum_{n\ge 1} c_n t^n
= 1 - \left(\sum_{n\ge 0} n! t^n\right)^{-1}\\
= t+{t}^{2}+3\,{t}^{3}+13\,{t}^{4}+71\,{t}^{5}+461\,{t}^{6} +O(t^7)\,.
\end{equation*}
Let the \emph{connected colored permutations} be the $(\sigma,u)$ with
$\sigma$ connected and $u$ arbitrary. Their generating series is given by
$c(lt)$.
It follows from \cite{NCSF6} that $\FQSym^{(l)}$ is free over the set
$\G_{\sigma,u}$ (or $\F_{\sigma,u}$), where $(\sigma,u)$ is connected.
Since $\FQSym^{(l)}$ is self-dual, it is also cofree.
\subsection{Primitive elements}
Let $\L^{(l)}$ be the primitive Lie algebra of $\FQSym^{(l)}$.
Since $\Delta$ is not cocommutative, $\FQSym^{(l)}$ cannot be the universal
enveloping algebra of $\L^{(l)}$.
But since it is cofree, it is, according to~\cite{LRdip}, the universal
enveloping dipterous algebra of its primitive part $\L^{(l)}$.
Let $d_n = \dim\, \L^{(l)}_n$.
Recall that the \emph{shifted concatenation} $w\bullet w'$ of two elements $w$
and $w'$ of $\N^*$, is the word obtained by concatenating to $w$ the word
obtained by shifting all letters of $w'$ by the length of $w$. We extend
it to colored permutations by simply concatenating the colors and
concatenating \emph{with shift} the permutations.
Let $\G^{\sigma,u}$ be the multiplicative basis defined by
$\G^{\sigma,u}=\G_{\sigma_1,u_1}\cdots\G_{\sigma_r,u_r}$ where
$(\sigma,u)=(\sigma_1,u_1)\bullet\cdots\bullet(\sigma_r,u_r)$ is the unique
maximal factorization of $(\sigma,u)\in\SG_n\times C^n$ into connected colored
permutations.
\begin{proposition}
Let $\V_{\sigma,u}$ be the adjoint basis of $\G^{\sigma,u}$.
Then, the family $(\V_{\alpha,u})_{\alpha\in\conn}$ is a basis of $\L^{(l)}$.
In particular, we have $d_n=l^n c_n$.
\end{proposition}
As in~\cite{NCSF6}, we conjecture that $\L^{(l)}$ is free.
\section{Non-commutative symmetric functions of level $l$}
Following MacMahon~\cite{McM}, we define an \emph{$l$-partite number} $\npn$
as a column vector in $\N^l$, and a \emph{vector composition of $\npn$} of
weight $|\npn|:=\sum_{1}^l{n_i}$ and length $m$ as a $l\times m$ matrix
$\bf I$ of nonnegative integers, with row sums vector $\npn$ and no zero
column.
For example,
\begin{equation}
\label{exM}
{\bf I} =
\begin{pmatrix}
1 & 0 & 2 & 1 \\
0 & 3 & 1 & 1 \\
4 & 2 & 1 & 3 \\
\end{pmatrix}
\end{equation}
is a vector composition (or a $3$-composition, for short)
of the $3$-partite number
\scriptsize$\begin{pmatrix} 4\\ 5\\ 10\end{pmatrix}$ \normalsize
of weight $19$ and length $4$.
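The defining conditions (nonnegative entries, prescribed row sums, no zero column) are easy to verify mechanically; a minimal sketch for the example above, with variable names ours:

```python
# Check that the matrix of Equation (exM) is a vector composition of
# the 3-partite number (4, 5, 10): row sums as prescribed, weight 19,
# length 4, and no zero column.
I = [[1, 0, 2, 1],
     [0, 3, 1, 1],
     [4, 2, 1, 3]]

row_sums = [sum(row) for row in I]           # the 3-partite number
weight   = sum(row_sums)
length   = len(I[0])
cols     = list(zip(*I))

assert row_sums == [4, 5, 10]
assert weight == 19 and length == 4
assert all(any(c) for c in cols)             # no zero column
```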
For each $\npn\in\N^l$ of weight $|\npn|=n$, we define a level $l$
\emph{complete homogeneous symmetric function} as
\begin{equation}
S_{\npn} := \sum_{u ; |u|_i=n_i} \G_{1\cdots n, u}.
\end{equation}
It is the sum of all possible colorings of the identity permutation with $n_i$
occurrences of color $i$ for each $i$.
Let $\NCSF^{(l)}$ be the subalgebra of $\FQSym^{(l)}$ generated by the
$S_{\npn}$ (with the convention $S_{\bf 0}=1$).
The Hilbert series of $\NCSF^{(l)}$ is easily found to be
\begin{equation}
S_l(t) := \sum_{n\geq0} \dim \NCSF_n^{(l)}\, t^n = \frac{(1-t)^l}{2(1-t)^l-1}.
\end{equation}
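The coefficients of this series can be cross-checked against a direct count of $l$-compositions (which, by the theorem below, label a basis of $\NCSF^{(l)}$). A sketch in Python, with all names ours:

```python
# Compare dim NCSF^{(l)}_n read off the Hilbert series
# (1-t)^l / (2(1-t)^l - 1) with a direct count of l-compositions of
# weight n (sequences of nonzero vectors in N^l of total weight n).
from math import comb

def dims_from_series(l, N):
    num = [comb(l, k) * (-1) ** k for k in range(N)]   # (1-t)^l
    den = [2 * c for c in num]
    den[0] -= 1                                        # 2(1-t)^l - 1
    out = [0] * N                                      # num/den, den[0] = 1
    for n in range(N):
        out[n] = num[n] - sum(den[k] * out[n - k] for k in range(1, n + 1))
    return out

def dims_by_counting(l, N):
    # number of nonzero vectors in N^l of weight k is C(k+l-1, l-1)
    v = lambda k: comb(k + l - 1, l - 1)
    a = [1] + [0] * (N - 1)
    for n in range(1, N):
        a[n] = sum(v(k) * a[n - k] for k in range(1, n + 1))
    return a

for l in (1, 2, 3):
    assert dims_from_series(l, 8) == dims_by_counting(l, 8)
```

For $l=1$ this recovers the dimensions $2^{n-1}$ of the classical $\NCSF$.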
\begin{theorem}
$\NCSF^{(l)}$ is free over the set $\{S_{\npn}, |\npn|>0 \}$.
Moreover, $\NCSF^{(l)}$ is a Hopf subalgebra of $\FQSym^{(l)}$.
The coproduct of the generators is given by
\begin{equation}
\Delta S_\npn = \sum_{{\bf i}+{\bf j}= {\bf n}} S_{\bf i}\otimes S_{\bf j},
\end{equation}
where the sum ${\bf i}+{\bf j}$ is taken in the space $\N^l$. In particular,
$\NCSF^{(l)}$ is cocommutative.
\end{theorem}
We can therefore introduce the basis of products of level $l$ complete
functions, labelled by $l$-compositions
\begin{equation}
S^{\bf I} = S_{{\bf i}_1} \cdots S_{{\bf i}_m},
\end{equation}
where ${\bf i}_1,\cdots,{\bf i}_m$ are the columns of $\bf I$.
\begin{theorem}
If $C$ has a group structure, $\NCSF_n^{(l)}$ becomes a subalgebra of
$\C[C\wr\SG_n]$ under the identification $\G_h \mapsto h$.
\end{theorem}
This provides an analogue of Solomon's descent algebra for the wreath
product $C\wr\SG_n$. The proof amounts to checking that the Patras descent
algebra of a graded bialgebra \cite{Patras} can be adapted to $\N^l$-graded
bialgebras.
As in the case $l=1$, we define the \emph{internal product} $*$ as being
opposite to the law induced by the group algebra. It can be computed by the
following splitting formula, which is a straightforward generalization of the
level 1 version.
\begin{proposition}
Let $\mu_r: (\NCSF^{(l)})^{\otimes r} \to \NCSF^{(l)}$ be the product map.
Let $\Delta^{(r)} : (\NCSF^{(l)}) \to (\NCSF^{(l)})^{\otimes r}$ be the
$r$-fold coproduct, and $*_r$ be the extension of the internal product to
$(\NCSF^{(l)})^{\otimes r}$.
Then, for $F_1,\ldots,F_r$, and $G\in\NCSF^{(l)}$,
\begin{equation}
(F_1\cdots F_r) * G = \mu_r [ (F_1\otimes\cdots\otimes F_r) *_r
\Delta^{(r)}G ].
\end{equation}
\end{proposition}
The group law of $C$ is needed only for the evaluation of the
product of one-part complete functions $S_{\bf m}*S_{\bf n}$.
\begin{example}
With $l=2$ and $C=\Z/2\Z$,
\scriptsize
\begin{equation*}
\begin{split}
S^{\hbox{\scriptsize$\begin{pmatrix}1&0\\1&1\end{pmatrix}$}} *
S^{\hbox{\scriptsize$\begin{pmatrix}0&2\\1&0\end{pmatrix}$}} &=
\mu_2 \left[ \left(
S^{\hbox{\scriptsize$\begin{pmatrix}1\\1\end{pmatrix}$}}\otimes
S^{\hbox{\scriptsize$\begin{pmatrix}0\\1\end{pmatrix}$}} \right)
*_2
\Delta S^{\hbox{\scriptsize$\begin{pmatrix}0&2\\1&0\end{pmatrix}$}}
\right]\\
& = \left(S^{\hbox{\scriptsize$\begin{pmatrix}1\\1\end{pmatrix}$}}*
S^{\hbox{\scriptsize$\begin{pmatrix}2\\0\end{pmatrix}$}}\right)
\left(S^{\hbox{\scriptsize$\begin{pmatrix}0\\1\end{pmatrix}$}}*
S^{\hbox{\scriptsize$\begin{pmatrix}1\\0\end{pmatrix}$}}\right)
+ \left(S^{\hbox{\scriptsize$\begin{pmatrix}1\\1\end{pmatrix}$}}*
S^{\hbox{\scriptsize$\begin{pmatrix}0&1\\1&0\end{pmatrix}$}}\right)
\left(S^{\hbox{\scriptsize$\begin{pmatrix}0\\1\end{pmatrix}$}}*
S^{\hbox{\scriptsize$\begin{pmatrix}1\\0\end{pmatrix}$}}\right) \\
& = S^{\hbox{\scriptsize$\begin{pmatrix}1&1\\1&0\end{pmatrix}$}}
+ S^{\hbox{\scriptsize$\begin{pmatrix}1&1&0\\0&0&1\end{pmatrix}$}}
+ S^{\hbox{\scriptsize$\begin{pmatrix}0&0&0\\1&1&1\end{pmatrix}$}}.
\end{split}
\end{equation*}
\end{example}
Recall that the underlying colored alphabet $A$ can be seen as
$A^0 \sqcup \cdots \sqcup A^{l-1}$, with $A^i = \{ a^{(i)}_j, j\geq1 \}$.
Let ${\bf x} = (x^{(0)}, \ldots, x^{(l-1)})$, where the $x^{(i)}$ are $l$
commuting variables.
In terms of $A$, the generating function of the complete functions can be
written as
\begin{equation}
\sigma_{\bf x}(A) = \prod_{i\geq0}^{\rightarrow}
\left(1-\sum_{0\leq j\leq l-1} x^{(j)} a_{i}^{(j)} \right)^{-1}
= \sum_\npn {S_{\bf n}(A) {\bf x}^{\bf n}},
\end{equation}
where ${\bf x}^{\bf n} = (x^{(0)})^{n_0} \cdots (x^{(l-1)})^{n_{l-1}}$.
This realization gives rise to a Cauchy formula (see~\cite{NCSF2} for the
$l=1$ case), which in turn allows one to identify the dual of $\NCSF^{(l)}$ with
an algebra introduced by S. Poirier in~\cite{Poi}.
\section{Quasi-symmetric functions of level $l$}
\subsection{Cauchy formula of level $l$}
Let $X = X^0 \sqcup \cdots \sqcup X^{l-1}$, where $X^i=\{ x_j^{(i)},j\geq1\}$
be an $l$-colored alphabet of commutative variables, also commuting with $A$.
Imitating the level $1$ case (see~\cite{NCSF6}), we define the Cauchy kernel
\begin{equation}
K(X,A) = \prod_{j\geq1}^{\rightarrow}
\sigma_{\left(x_j^{(0)}, \ldots, x_j^{(l-1)}\right)} (A).
\end{equation}
Expanding on the basis $S^{\bf I}$ of $\NCSF^{(l)}$, we get as coefficients
what can be called the \emph{level $l$ monomial quasi-symmetric functions}
$M_{\bf I}(X)$
\begin{equation}
K(X,A) = \sum_{\bf I} M_{\bf I}(X) S^{\bf I}(A),
\end{equation}
defined by
\begin{equation}
M_{\bf I}(X) = \sum_{j_1<\cdots<j_m}
{\bf x}^{{\bf i}_1}_{j_1} \cdots
{\bf x}^{{\bf i}_m}_{j_m},
\end{equation}
with ${\bf I}=({\bf i}_1,\ldots,{\bf i}_m)$.
These last functions form a basis of a subalgebra $\QSym^{(l)}$ of $\K[X]$,
which we shall call the \emph{algebra of quasi-symmetric functions of level
$l$}.
\subsection{Poirier's Quasi-symmetric functions}
The functions $M_{\bf I}(X)$ can be recognized as a basis of one of the
algebras introduced by Poirier: the $M_{\bf I}$ coincide with the $M_{(C,v)}$
defined in~\cite{Poi}, p.~324, formula (1), up to indexation.
Following Poirier, we introduce the level $l$ quasi-ribbon functions by
summing over an order on $l$-compositions:
an $l$-composition $C$ is finer than $C'$, and we write $C\finer C'$, if
$C'$ can be obtained by repeatedly summing up two consecutive columns of $C$
such that no non-zero element of the left one is strictly below a non-zero
element of the right one.
This order can be described in a much easier and more natural way if one
recodes an $l$-composition ${\bf I}$ as a pair of words: the first one,
$d({\bf I})$, is the set of sums of the entries of the first $k$ columns of
$\bf I$, for $k=1,\ldots,m$; the second one, $c({\bf I})$, is obtained by
concatenating the words $i^{{\bf I}_{i,j}}$ while reading $\bf I$ by columns,
from top to bottom and from left to right.
For example, the $3$-composition of Equation~(\ref{exM}) satisfies
\begin{equation}
d({\bf I}) = \{5, 10, 14, 19\} \text{\quad and\quad}
c({\bf I}) = 13333\, 22233\, 1123\, 12333\,.
\end{equation}
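The recoding is immediate to implement; the sketch below (names ours) reproduces the pair $(d({\bf I}), c({\bf I}))$ of the running example, writing the colors $1,\ldots,l$:

```python
# Recode an l-composition I as (d(I), c(I)):
#   d(I) = partial sums of the column weights,
#   c(I) = concatenation of the words i^{I_{i,j}}, reading I by
#          columns, top to bottom and left to right.
I = [[1, 0, 2, 1],
     [0, 3, 1, 1],
     [4, 2, 1, 3]]

cols = list(zip(*I))
d, partial = [], 0
for col in cols:
    partial += sum(col)
    d.append(partial)

c = "".join(str(i + 1) * m for col in cols for i, m in enumerate(col))

assert d == [5, 10, 14, 19]
assert c == "13333" "22233" "1123" "12333"
```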
Moreover, this recoding is a bijection if the two words $d({\bf I})$ and
$c({\bf I})$ are such that the descent set of $c({\bf I})$ is a subset of
$d({\bf I})$.
The order previously defined on $l$-compositions is in this context the
inclusion order on sets $d$: $(d',c)\finer (d,c)$ iff $d'\subseteq d$.
It allows us to define the \emph{level $l$ quasi-ribbon functions} $F_{\bf I}$
by
\begin{equation}
F_{\bf I} = \sum_{{\bf I'}\finer {\bf I}} M_{\bf I'}.
\end{equation}
Notice that this last description of the order $\finer$ is reminiscent of the
order $\finer'$ on descent sets used in the context of quasi-symmetric
functions and non-commutative symmetric functions: more precisely, since it
does not modify the word $c({\bf I})$, the order $\finer$ restricted to
$l$-compositions of weight $n$ amounts to $l^n$ copies of the order
$\finer'$.
The computation of its M\"obius function is therefore straightforward.
Moreover, one can directly obtain the $F_{\bf I}$ as the commutative image
of certain $\F_{\sigma,u}$: any pair $(\sigma,u)$ such that $\sigma$ has
descent set $d({\bf I})$ and $u=c({\bf I})$ will do.
\section{The Mantaci-Reutenauer algebra}
Let ${\bf e}_i$ be the canonical basis of $\N^l$. For $n\geq1$, let
\begin{equation}
S_n^{(i)} = S_{n\cdot {\bf e}_i} \in\NCSF^{(l)},
\end{equation}
be the \emph{monochromatic complete symmetric functions}.
\begin{proposition}
The $S_n^{(i)}$ generate a Hopf-subalgebra $\MR^{(l)}$ of $\NCSF^{(l)}$, which
is isomorphic to the Mantaci-Reutenauer descent algebra of the wreath products
$\SG_n^{(l)} = (\Z/l\Z) \wr\SG_n$.
\end{proposition}
It follows in particular that $\MR^{(l)}$ is stable under the composition
product induced by the group structure of $\SG_n^{(l)}$.
The bases of $\MR^{(l)}$ are labelled by colored compositions (see below).
The duality is easily worked out by means of the appropriate Cauchy kernel.
The generating function of the complete functions is
\begin{equation}
\sigma^{\MR}_{\bf x}(A) := 1 + \sum_{j=0}^{l-1} \sum_{n\geq1}
S_n^{(j)}\,(x^{(j)})^n,
\end{equation}
and the Cauchy kernel is as usual
\begin{equation}
K^{\MR}(X,A) = \prod_{i\geq1}^\rightarrow \sigma^{\MR}_{{\bf x}_i}(A)
= \sum_{(I,u)} M_{(I,u)}(X) S^{(I,u)}(A),
\end{equation}
where $(I,u)$ runs over colored compositions
$(I,u) = ((i_1,\ldots,i_m),(u_1,\ldots,u_m))$, that is, pairs formed by a
composition and a color vector of the same length. The $M_{(I,u)}$ are
called the \emph{monochromatic monomial quasi-symmetric functions} and satisfy
\begin{equation}
M_{(I,u)}(X) = \sum_{j_1<\cdots<j_m}
(x_{j_1}^{(u_1)})^{i_1} \cdots (x_{j_m}^{(u_m)})^{i_m}.
\end{equation}
\begin{proposition}
The $M_{(I,u)}$ span a subalgebra of $\C[X]$ which can be identified with the
graded dual of $\MR^{(l)}$ through the pairing
\begin{equation}
\langle M_{(I,u)}, S^{(J,v)} \rangle = \delta_{I,J} \delta_{u,v},
\end{equation}
where $\delta$ is the Kronecker symbol.
\end{proposition}
Note that this algebra can also be obtained by assuming the relations
\begin{equation}
x_i^{(p)} x_i^{(q)} = 0, \text{\ for $p\not=q$}
\end{equation}
on the variables of $\QSym^{(l)}$.
Baumann and Hohlweg have another construction of the dual of $\MR^{(l)}$
\cite{BH} (implicitly defined in~\cite{Poi}, Lemma~11).
\section{Level $l$ parking quasi-symmetric functions}
\subsection{Usual parking functions}
Recall that a \emph{parking function} on $[n]=\{1,2,\ldots,n\}$ is a word
$\park=a_1a_2\cdots a_n$ of length $n$ on $[n]$ whose nondecreasing
rearrangement $\park^\uparrow=a'_1a'_2\cdots a'_n$ satisfies $a'_i\le i$ for
all $i$.
Let $\PF_n$ be the set of such words.
It is well-known that $|\PF_n|=(n+1)^{n-1}$.
Gessel introduced in 1997 (see~\cite{Stan2}) the notion of \emph{prime parking
function}. One says that $\park$ has a \emph{breakpoint} at $b$ if
$|\{\park_i\le b\}|=b$. The set of all breakpoints of $\park$ is denoted by
$\bp(\park)$.
Then, $\park\in \PF_n$ is prime if $\bp(\park)=\{n\}$.
Let $\PPF_n\subset\PF_n$ be the set of prime parking functions on $[n]$.
It can easily be shown that $|\PPF_n|=(n-1)^{n-1}$ (see~\cite{Stan2}).
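Both counts are small enough to verify by brute force; the sketch below (ours) also illustrates the breakpoint set $\bp(\park)$:

```python
# Brute-force check of |PF_n| = (n+1)^(n-1) and |PPF_n| = (n-1)^(n-1)
# for small n, together with the breakpoint set bp(p).
from itertools import product

def is_parking(p):
    # nondecreasing rearrangement a'_1 <= 1, a'_2 <= 2, ...
    return all(a <= i + 1 for i, a in enumerate(sorted(p)))

def breakpoints(p):
    n = len(p)
    return {b for b in range(1, n + 1)
            if sum(1 for a in p if a <= b) == b}

for n in (2, 3, 4):
    pf = [p for p in product(range(1, n + 1), repeat=n) if is_parking(p)]
    assert len(pf) == (n + 1) ** (n - 1)
    prime = [p for p in pf if breakpoints(p) == {n}]
    assert len(prime) == (n - 1) ** (n - 1)
```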
We will finally need one last notion: $\park$ has a \emph{match} at $b$ if
$|\{\park_i< b\}|=b-1$ and $|\{\park_i\leq b\}|\geq b$. The set of all matches
of $\park$ is denoted by $\match(\park)$.
We will now define generalizations of the usual parking functions to any level
in such a way that they build up a Hopf algebra in the same way as
in~\cite{NT}.
\subsection{Colored parking functions}
Let $l$ be an integer, representing the number of allowed colors.
A \emph{colored parking function} of level $l$ and size $n$ is a pair
composed of a parking function of length $n$ and a word on $[l]$ of length
$n$.
Since there is no restriction on the coloring, it is obvious that there are
$l^n (n+1)^{n-1}$ colored parking functions of level $l$ and size $n$.
It is known that the convolution of two parking functions contains only
parking functions, so one easily builds as in~\cite{NT} an algebra
$\PQSym^{(l)}$ indexed by colored parking functions:
\begin{equation}
\G_{(\park',u')} \G_{(\park'',u'')} = \sum_{\park\in \park'\convol\park''}
\G_{(\park,u'\cdot u'')}.
\end{equation}
Moreover, one defines a coproduct on the $\G$ functions by
\begin{equation}
\Delta\G_{(\park,u)} = \sum_{i\in \bp(\park)}
\G_{(\park,u)_{[1,i]}} \otimes \G_{(\park,u)_{[i+1,n]}}
\end{equation}
where $n$ is the size of $\park$ and $(\park,u)_{[a,b]}$ is the parkized
colored parking function of the pair $(\park',u')$ where $\park'$ is the
subword of $\park$ containing the letters of the interval $[a,b]$, and $u'$
the corresponding subword of $u$.
\begin{theorem}
The coproduct is an algebra homomorphism, so that $\PQSym^{(l)}$ is a
graded bialgebra. Moreover, it is a Hopf algebra.
\end{theorem}
\subsection{Parking functions of type $B$}
In~\cite{Rei}, Reiner defined non-crossing partitions of type $B$ by
analogy to the classical case. In our context, he defined the level $2$ case.
It allowed him to derive, by analogy with a simple representation theoretical
result, a definition of parking functions of type $B$ as the words on $[n]$ of
length $n$.
We shall build another set of
words, also enumerated by $n^n$, which sheds light on the relation between
type-$A$ and type-$B$ parking functions and provides a natural Hopf algebra
structure on the latter.
First, fix two colors $0<1$. We say that a pair of words $(\park,u)$ composed
of a parking function and a binary colored word is a
\emph{level $2$ parking function} if
\begin{itemize}
\item the only elements of $\park$ that can have color $1$ are the matches of
$\park$;
\item for every element of $\park$ of color $1$, all letters equal to it and
to its left also have color $1$;
\item every letter of $\park$ has at least one occurrence with color $0$.
\end{itemize}
For example, there are $27$ level $2$ parking functions of size $3$: the $16$
usual ones, all with color word $000$, and the eleven new elements
\begin{equation}
\begin{split}
& (111,100), (111,110), (112,100), (121,100), (211,010),\\
& (113,100), (131,100), (311,010),
(122,010), (212,100), (221,100). \\
\end{split}
\end{equation}
The first time the first rule applies is with $n=4$, where one has to discard
the words $(1122,0010)$ and $(1122,1010)$ since $2$ is not a match of $1122$.
On the other hand, both words $(1133,0010)$ and $(1133,1010)$ are
$B_4$-parking functions since $1$ and $3$ are matches of $1133$.
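The three rules can be checked exhaustively for small sizes. The sketch below encodes our reading of the rules; it recovers the $27$ level $2$ parking functions of size $3$ and the two $n=4$ rejections above:

```python
# Enumerate level 2 parking functions (p, u): p a parking function,
# u a binary color word, subject to the three rules of the text.
from itertools import product

def is_parking(p):
    return all(a <= i + 1 for i, a in enumerate(sorted(p)))

def matches(p):
    n = len(p)
    return {b for b in range(1, n + 1)
            if sum(1 for a in p if a < b) == b - 1
            and sum(1 for a in p if a <= b) >= b}

def is_level2(p, u):
    n = len(p)
    rule_a = all(p[i] in matches(p) for i in range(n) if u[i] == 1)
    rule_b = all(u[j] == 1 for i in range(n) if u[i] == 1
                 for j in range(i) if p[j] == p[i])
    rule_c = all(any(u[i] == 0 for i in range(n) if p[i] == v)
                 for v in set(p))
    return rule_a and rule_b and rule_c

n = 3
count = sum(1 for p in product(range(1, n + 1), repeat=n) if is_parking(p)
            for u in product((0, 1), repeat=n) if is_level2(p, u))
assert count == 27
assert not is_level2((1, 1, 2, 2), (0, 0, 1, 0))   # 2 is not a match of 1122
assert is_level2((1, 1, 3, 3), (0, 0, 1, 0))       # 3 is a match of 1133
```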
\begin{theorem}
The restriction of $\PQSym^{(2)}$ to the $\G$ elements indexed by level $2$
parking functions is a Hopf subalgebra of $\PQSym^{(2)}$.
\end{theorem}
\subsection{Non-crossing partitions of type $B$}
Remark that in the level $1$ case, non-crossing partitions are in bijection
with non-decreasing parking functions.
To extend this correspondence to type $B$, let us start with a non-decreasing
parking function ${\bf b}$ (with no color). We factor it into the maximal
shifted concatenation of prime non-decreasing parking functions, and we choose
a color, here 0 or 1, for each factor. We obtain in this way $\binom{2n}{n}$
words $\pi$, which can be identified with
\emph{type $B$ non-crossing partitions}.
Let
\begin{equation}
{\bf P}^{\pi}=\sum_{{\bf a}\sim{\pi}}\F_{\bf a}\,
\end{equation}
where $\sim$ denotes equality up to rearrangement of the letters.
Then,
\begin{theorem}
The ${\bf P}^{\pi}$, where $\pi$ runs over the above set of non-decreasing
signed parking functions, form a basis of a cocommutative Hopf subalgebra
of $\PQSym^{(2)}$.
\end{theorem}
All this can be extended to higher levels in a straightforward way: allow each
prime non-decreasing parking function to choose any color among $l$ and use
the factorization as above. Since non-decreasing parking functions are in
bijection with Dyck words, the choice can be described as follows: each block
of a Dyck word with no return-to-zero chooses one color among $l$. In this
version, the generating series is obviously given by
\begin{equation}
\frac{1}{1- l\frac{1-\sqrt{1-4t}}{2}}.
\end{equation}
For $l=3$, we obtain the sequence A007854 of~\cite{Slo}.
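These series are easy to expand: the inner series $(1-\sqrt{1-4t})/2 = tC(t)$ has shifted Catalan coefficients, so for $l=1$ one recovers the Catalan numbers (non-decreasing parking functions) and for $l=2$ the central binomial coefficients $\binom{2n}{n}$ of the type $B$ case. A sketch (ours):

```python
# Expand g(t) = 1/(1 - l*f(t)) with f(t) = (1 - sqrt(1-4t))/2,
# the series of prime non-decreasing parking functions (Dyck paths
# with no return to zero); [t^n] f = Catalan(n-1).
from math import comb

N = 8
catalan = [comb(2 * n, n) // (n + 1) for n in range(N)]
f = [0] + catalan[:N - 1]

def expand(l):
    # g = 1 + l*f*g, solved term by term
    g = [1] + [0] * (N - 1)
    for n in range(1, N):
        g[n] = l * sum(f[k] * g[n - k] for k in range(1, n + 1))
    return g

assert expand(1) == catalan                             # level 1: Catalan
assert expand(2) == [comb(2 * n, n) for n in range(N)]  # level 2
print(expand(3)[:5])                                    # [1, 3, 12, 51, 222]
```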
\footnotesize
1880 – Request for the return of the 5% and 1000 Out of Work

1880 Index
01 – Jan 30th – Excellent Repast – Request for 5%
02 – Feb 6th – The Five Percent Question – The Sliding Scale
10 – Oct 15th & 22nd – Wages Reduction – Banksmen Strike
12 – Dec 10th – 800 men given notice
12 – Dec 17th – 1000 Hands out of Work
12 – Dec 24th – Indignation – What is the Cause?

Other articles you might be interested in:
Strikes and Disputes 1878 to 1881
Dispute – 1000 Hands out of Work
Haulage Hands Advised to Return to Work
Lightning Strike – ‘Off – Three to One for Return to Work.
After recompiling from the source rpm packages from the gstreamer site:

gstreamer-plugins-ugly-0.10.1-0.gst.1.4
gstreamer-ffmpeg-0.10.0-0.gst.2.4
gstreamer-plugins-ugly-devel-0.10.1-0.gst.1.4

and recompiling the livna packages:

a52dec-0.7.4-0.lvn.7.4.src.rpm
djbfft-0.76-0.lvn.4.4.src.rpm
lame-3.96.1-0.lvn.4.4.src.rpm
libdvdcss-1.2.9-0.lvn.1.4.src.rpm
libdvdnav-0.1.10-0.lvn.1.4.src.rpm
libdvdplay-1.0.1-0.lvn.3.4.src.rpm
libdvdread-0.9.4-0.lvn.1.4.src.rpm
libmad-0.15.1-0.lvn.1.b.4.src.rpm
mpeg2dec-0.4.0-0.lvn.4.b.4.src.rpm
amrnb-0.0.1-0.lvn.1.src.rpm

I'm able to play various video files, including divx, with the latest fc5 devel totem (totem-1.3.1-1). Trying to play dvds I get "totem was not able to play this disc. No reason". If I want to stay with the provided totem and be able to play dvds where it is legal, what other requirements should I add? Or do I have to install totem-xine?

Gianluca
Let me introduce you to my NARS Yachiyo Kabuki Brush, which I might start calling the perfect brush.
I had been seeing this brush in some blog posts and youtube videos, but I could never find it at the NARS stand at Sephora. “Cool, another thing they’re not going to sell here!”, I always thought while browsing the whole range for the millionth time. Until one day I saw it there, waiting for me. I didn’t even look at the price (I never buy too expensive brushes though - the reason why I don’t own a single MAC face brush) and put it in the basket.
Luckily, I knew it was love from the first try: it is simply perfect.
It’s perfect for blush, as it blends not so pigmented blush and really pigmented ones in the same flawless way. I like to tap the brush on the compact and then apply the blush on my cheeks in a swirling motion, and trust me, I’ve never found applying blush flawlessly so easy.
It’s perfect for bronzer or contour, as the shape of the brush blends product itself making it easy to do a subtle contour or a lovely bronzed look.
it’s perfect for highlighter, as the tip is perfect for a precise but well blended application.
I have only one fear: I keep using a spot brush cleanser every time I use it as I’m afraid to ruin the wisteria handle under running water. I must find the courage to wrap the handle and give this brush a good wash!
To sum it up, every time I’ve used another cheek brush I’ve missed this one: an expensive purchase, but so worth it!
Have you tried this amazing brush? What is your most treasured and/or expensive brush?
I've seen so many Youtubers and bloggers rave about this brush, I'm absolutely dying for it! Hopefully I'll be able to pick it up with Christmas money this year!
Kelsey | kelseybeauty.blogspot.com
Hey Gyudy, I'm glad to hear you like this so much! As you know, we don't get NARS here. I have, however, got to feel one when I was in Taipei last year. While I know many LOVE this, the handle bothers me a little too much to actually splurge on it. It's just a little too thin, and like you said, how should I wash it?? My most expensive brushes are my Tom Ford ones, although I still haven't decided if I really NEEDED them. They're good, but the price tag is crazy!
This brush reminds me a bit of the Suqqu brushes Lisa Eldridge uses for blush. It looks so soft and the shape seems perfect to apply blush. Should add this to my wishlist. :) xx
Renata | Speaking Beauty UK
I really had to get used to those handles that NARS uses, but you're right, it is a lovely brush!
This brush sounds amazing! But it's so expensive I don't think I will justify the price. But I guess once in awhile everyone needs to spoil themselves with something very luxurious :)
Mummy’s Beauty Corner
I haven't tried this one but I recently hauled the ITA (after much debate). I haven't tried it yet and, honestly, I'm afraid to lest I fall in love with this expensive brush range - and based on what you've said, it's very likely. :/ Update us if you wash it, I'd be afraid to as well. Eek!
Beauty Isles | Blogiversary Giveaway: Too Faced, LORAC and NYX
It looks gorgeous, but I think I'd be afraid of washing it, just like you! Nars are doing a flat contouring brush I've been eyeing up forever as well.... xx
Sarah | seriouslyshallow.co.uk
I would so love so,e of the NARS brushes but their prices make me want to cry haha! This looks like such a stunning brush though xx
gorge shades! ahhh the palette is so pretty! the brush is so fancy!
Mine disappointed me, to be honest. All people was saying that it's soft as a bunny and so on, and mine is even a bit scratchy... don't know if I got a bad one or what. :/
Well, I'm a sucker for pretty brushes! Currently in love with my Bobbi Brown Sheer Powder brush which is my favourite allrounder for anything blush, contour or highlight - related! Happy to hear you love the brush, you should be for that price ;) xx
Hello Gyudy,
I had this brush and mine was very scratchy (going back four years ago). I gave it away and in its place decided to invest in Chikuhodo and Tom Ford brushes. I'm so happy this brush is your favorite and your guy is soft, I happen to have loved the brush shape and the handle design.
-Maria
So very exciting!! We are in this exact same stage in life and completely understand the internal struggle. Best of luck on your new adventure! Can't wait to see the beautiful home you guys.pick out... Know you will style it just perfectly.crystal globe award trophy,love you post.
I've heard so many great things about this brush, but I am so cheap... It is way out of my price range! The brush I've been using for blush is too large, though, so I definitely need a new one... My current brush can make me look like a clown if I'm not careful.
Ivory Avenue
omigosh good point about washing it....i'd be really scared too actually. thanks for pointing that out.
Heat Press vinyl and custom heat transfers
Discussion in 'Materials' started by Chuck7772, May 16, 2019.
\begin{document}
\maketitle
\renewcommand*{\thefootnote}{\arabic{footnote}}
\begin{abstract}
We study the Small Ball Probabilities (SBPs) of Gaussian rough paths. While many works on rough paths study the Large Deviations Principles (LDPs) for stochastic processes driven by Gaussian rough paths, it is a noticeable gap in the literature that SBPs have not been extended to the rough path framework.
LDPs provide macroscopic information about a measure for establishing Integrability type properties. SBPs provide microscopic information and are used to establish a locally accurate approximation for a measure. Given the compactness of a Reproducing Kernel Hilbert space (RKHS) ball, its metric entropy provides invaluable information on how to approximate the law of a Gaussian rough path.
As an application, we are able to find upper and lower bounds for the rate of convergence of an empirical rough Gaussian measure to its true law in pathspace.
\end{abstract}
{\bf Keywords:} Small Ball Probabilities, Metric Entropy, Gaussian approximation, Rough paths
\vspace{0.3cm}
\noindent
\setcounter{tocdepth}{1}
\tableofcontents
\section{Introduction}
Small Ball Probabilities (SBPs), sometimes referred to as small deviation principles, study the asymptotic behaviour of the measure of a ball of radius $\varepsilon\to 0$. Given a measure $\cL$ on a metric space $(E, d)$ with Borel $\sigma$-algebra $\cB$, we refer to the SBP around a point $x_0$ as
$$
\log\bigg(\cL\Big[ \big\{ x\in E: d(x, x_0)<\varepsilon\big\}\Big]\bigg) \qquad \varepsilon \to 0.
$$
This is in contrast to a Large Deviations Principle (LDP) which considers the asymptotic behaviour for the quantity
$$
\log\bigg(\cL\Big[ \big\{ x\in E: d(x, x_0)>a \big\}\Big]\bigg) \qquad a\to \infty.
$$
LDPs have proved to be a powerful tool for quantifying the tails of Gaussian probability distributions and have been successfully explored and documented in recent years; see for example \cites{bogachev1998gaussian, ledoux2013probability} and references therein. Similar results have been extended to a wide class of probability distributions, see for example \cites{varadhan1984large, DemboZeitouni2010}. However, the complexity of SBPs has meant there has been a generally slower growth in the literature. This is not to detract from their usefulness: there are many insightful and practical applications of SBPs to known problems, in particular the study of compact operators, computation of Hausdorff dimension and the rate of convergence of empirical and quantized distributions.
As a motivational example, let $\cL$ be a Gaussian measure on $\bR^d$ with mean $0$ and identity covariance matrix. Then
$$
\cL\Big[ \big\{ x\in \bR^d: |x|_2<\varepsilon\big\} \Big] = \frac{\Gamma(d/2) - \Gamma(d/2, \tfrac{\varepsilon^2}{2})}{\Gamma(d/2)} \sim \frac{\varepsilon^d}{2^{d/2}\, \Gamma(\tfrac{d}{2}+1) } \qquad \varepsilon \to 0.
$$
Therefore an application of l'H\^opital's rule yields
$$
\fB_{0, 2}(\varepsilon)=-\log\bigg( \cL\Big[ \big\{ x\in \bR^d: |x|_2<\varepsilon\big\} \Big]\bigg) \sim d \cdot \log(\varepsilon^{-1}) \qquad \varepsilon \to 0.
$$
Alternatively, using a different norm we have
$$
\cL\Big[ \big\{ x\in \bR^d: |x|_\infty<\varepsilon\big\} \Big] = \mbox{erf}\Big( \tfrac{\varepsilon}{\sqrt{2}}\Big)^d \sim \varepsilon^d \Big( \tfrac{2}{\pi}\Big)^{d/2} \qquad \varepsilon \to 0
$$
and we get
$$
\fB_{0, \infty}(\varepsilon)=-\log\bigg( \cL\Big[ \big\{ x\in \bR^d: |x|_\infty<\varepsilon\big\} \Big]\bigg) \sim d \cdot \log(\varepsilon^{-1}) \qquad \varepsilon \to 0.
$$
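The sup-norm computation above can be checked numerically with the error function alone; the sketch below (the choice $d=5$ is ours) confirms that the ratio $\fB_{0,\infty}(\varepsilon)/(d\log(\varepsilon^{-1}))$ approaches $1$:

```python
# For a standard Gaussian on R^d, P[|x|_inf < eps] = erf(eps/sqrt(2))^d.
# Check that -log P / (d * log(1/eps)) tends to 1 as eps -> 0.
from math import erf, log, sqrt

d = 5
ratios = []
for eps in (1e-2, 1e-4, 1e-6):
    p = erf(eps / sqrt(2)) ** d
    ratios.append(-log(p) / (d * log(1 / eps)))
print(ratios)   # decreasing towards 1
assert abs(ratios[-1] - 1) < 0.05
```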
We can think of the SBPs as capturing the Lebesgue measure of a compact set (in this case a unit ball with different norms) in the support of the measure. The question then arises, what happens as the dimensions of the domain of the Gaussian measure are taken to infinity (so that there is no Lebesgue measure to compare with) and we study Gaussian measures on Banach spaces? Similarly, how does enhancing these paths to rough paths affect their properties?
\subsubsection*{Small Ball Probabilities}
Small ball probabilities encode the shape of the cumulative distribution function for a norm around 0. For a self-contained introduction to the theory of SBPs and Gaussian inequalities, see \cite{li2001gaussian}.
SBPs for a Brownian motion with respect to the H\"older norm were first studied in \cite{baldi1992some}. Using the Ciesielski representation of Brownian motion, the authors are able to exploit the orthogonality of the Schauder wavelets in the Reproducing Kernel Hilbert Space (RKHS) to represent the probability as a product of probabilities of one-dimensional normal random variables. Standard analytic estimates of the Gauss error function provide an upper and lower bound for the probability and an expression for its limit as $\varepsilon \to 0$.
Later, the same results were extended to a large class of Gaussian processes under different assumptions for the covariance and different choices of Banach space norms, see for example \cites{kuelbs1993small,kuelbs1995small, stolz1996some} and others.
In \cite{dobbs2014small}, the author studies some SBPs for the L\'evy area of Brownian motion by treating it as a time-changed Brownian motion. However, there are no works studying SBPs for rough paths.
The metric entropy of a set is a way of measuring the ``Compactness'' of a compact set. For a neat introduction to the study of entropy and some of its applications, see \cite{carl1990entropy} and \cite{edmunds2008function}. The link between SBPs for Gaussian measures on Banach spaces and metric entropy is explored in \cite{Kuelbs1993} and later extended in \cite{li1999approximation} to encompass the truncation of Gaussian measures. SBP results for integrated Brownian motion, see \cite{gao2003integrated}, were used to compute the metric entropy of $k$-monotone functions in \cite{gao2008entropy}. The link between the entropy of the convex hull of a set and the associated Gaussian measure is explored in \cites{gao2004entropy, kley2013kuelbs}. For a recent survey on Gaussian measures and metric entropy, see \cite{kuhn2019gaussian}.
There is a natural link between the metric entropy of the unit ball of the RKHS of a Gaussian measure and the quantization problem. Using the LDPs of the Gaussian measure, one can easily find a ball (in the RKHS) with measure $1-\varepsilon$ where $0<\varepsilon \ll 1$. Given the $\varepsilon$ entropy of this set, the centres of the minimal cover represent a very reasonable ``guess'' for an optimal quantization since the Gaussian measure conditioned on the closure of this set is ``close'' to uniform. For more details, see \cites{graf2003functional, dereich2003link}. Sharp estimates for Kolmogorov numbers, an equivalent measure to metric entropy, are demonstrated in \cite{luschgy2004sharp}.
More recently, SBPs have been applied to Bayesian inference and machine learning, see for example \cites{van2007bayesian, vaart2011information, aurzada2009small2}.
\subsubsection*{Gaussian correlation inequalities}
A key step in the proof of many SBP results is the use of a correlation inequality to lower or upper bound a probability of the intersection of many sets by a product of the probabilities of each set. Thus a challenging probability computation can be simplified by optimising over the choice of correlation strategically.
The Gaussian correlation inequality states that for any two symmetric convex sets $A$ and $B$ in a separable Banach space $E$ and for any centred Gaussian measure $\cL$ on $E$,
$$
\cL[A\cap B] \geq \cL[A] \cL[B].
$$
A special case of this result was first conjectured in \cite{dunnett1955approximations}, while the first formal statement was made in \cite{gupta1972inequalities}.
While the inequality remained unproven until recently, prominent works proving special cases and weaker statements include \cites{khatri1967certain,sidak1968multivariate} (who independently proved the so-called \v{S}id\'ak's Lemma), \cite{pitt1977gaussian} and \cite{li1999gaussian}. The conjecture was proved in 2014 by Thomas Royen in a relatively uncirculated ArXiv submission \cite{royen2014simple} and did not come to wider scientific attention until \cite{latala2017royen} three years later.
Put simply, the idea is to minimise a probability for a collection of normally distributed random variables by varying the correlation. Applications of these inequalities are wide ranging and vital to the theory of Bayesian inference.
\subsubsection*{Rough paths and enhanced Gaussian measures}
Since their inception in \cite{lyons1998differential}, rough paths have proved a powerful tool in understanding stochastic processes. In a nutshell, the theory states that, given an irregular white noise propagating a differential equation, one is required to know both the path and the iterated integrals of the noise in order to obtain a rich class of solutions. This path takes values in the characters of a Hopf algebra and is referred to as the signature.
An important step in the development of the theory of rough paths was the work of \cite{ledoux2002large} which studies the LDPs of an enhanced Brownian motion, the so-called lift of the path of a Brownian motion to its signature. The authors prove a Large Deviations Principle and a support theorem for the law of the enhanced Brownian motion as a measure over the collection of rough paths with respect to the rough path metric. Then, by the continuity of the It\^o-Lyons map the LDP can be extended to the solution of any rough differential equation driven by the enhanced Brownian motion.
Originally, rough paths were used to give a pathwise meaning to the solutions of stochastic differential equations where previously only a probabilistic meaning was known. However, there are an increasing number of works that study measures on the collection of rough paths motivated by the study of systems of interacting particles.
In general, the study of measures over rough paths has focused on macroscopic properties. This was natural given that the signature contains more information than the path on its own and it is not immediately clear that this extra information does not render the objects non-integrable. Questions of integrability of rough paths were addressed in \cites{friz2010generalized,cass2013integrability}. These were used to study rough differential equations that depend on their own distribution, the so-called McKean-Vlasov equations, in \cite{CassLyonsEvolving}. More recently, there has been a rapid expansion of this theory, see \cites{coghi2018pathwise, 2019arXiv180205882.2B, cass2019support}. Of particular interest to this work is \cite{deuschel2017enhanced}, which studies the convergence of the empirical measure obtained by sampling $n$ enhanced Brownian motions to the law of an enhanced Brownian motion.
The author was unable to find material in the literature pertaining to the microscopic properties of distributions over the collection of rough paths. This work came out of a need to better understand interacting particle systems driven by Gaussian noises, although we emphasise that no results in this paper need be restricted to that framework.
\subsubsection*{Our contributions}
The structure of this paper is as follows: Firstly, we introduce necessary material and notations in Section \ref{section:Prelim}. In order to extend the theory of Gaussian measures on Banach spaces to the framework of rough paths, we need to rephrase several well known Gaussian inequalities and prove new correlation inequalities. This is done in Section \ref{section:EnGaussInequal}. While technical, these results are stronger than we require and represent an extension of the theory of correlation inequalities to elements of the Wiener It\^o chaos expansion.
The main contribution of this work is the computation of SBPs for Gaussian rough paths with the rough path H\"older metric. These results are proved in Section \ref{section:SmallBallProbab}. We remark that the discretisation of the H\"older norm in Lemma \ref{lemma:DiscretisationNorm} was unknown to the author and may be of independent interest for future works on rough paths.
Finally, Sections \ref{section:MetricEntropy} and \ref{section:OptimalQuant} are applications of Theorem \ref{Thm:SmallBallProbab} following known methods that are adapted to the rough path setting. Of particular interest are Theorems \ref{thm:QuantizationRateCon} and \ref{thm:EmpiricalRateCon1}
which provide an upper and lower bound for the rate of convergence for the empirical rough Gaussian measure.
\section{Preliminaries}
\label{section:Prelim}
We denote by $\bN=\{1,2,\cdots\}$ the set of natural numbers and $\bN_0=\bN\cup \{0\}$, $\bZ$ and $\bR$ denote the set of integers and real numbers respectively. $\bR^+=[0,\infty)$. By $\lfloor x \rfloor$ we denote the largest integer less than or equal to $x\in \bR$. $\1_A$ denotes the usual indicator function over some set $A$. Let $e_j$ be the unit vector of $\bR^d$ in the $j^{th}$ component and $e_{i, j} = e_i \otimes e_j$ be the unit vector of $\bR^d \otimes \bR^d$.
For sequences $(f_n)_{n\in \bN}$ and $(g_n)_{n\in\bN}$, we denote
\begin{align*}
f_n \lesssim g_n \ \ \iff \ \ \limsup_{n\to \infty} \frac{f_n}{g_n}\leq C,
\qquad \textrm{and}\qquad
f_n \gtrsim g_n \ \ \iff \ \ \liminf_{n\to \infty} \frac{f_n}{g_n}\geq C.
\end{align*}
where $C$ is a positive constant independent of the limiting variable. When $f_n \lesssim g_n$ and $f_n \gtrsim g_n$, we say $f_n \approx g_n$. This is distinct from
\begin{align*}
f_n \sim g_n \ \ \iff \ \ \lim_{n\to \infty} \frac{f_n}{g_n} = 1.
\end{align*}
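To illustrate the distinction, the sequences $f_n = 2n$ and $g_n = n$ satisfy $f_n \approx g_n$ but not $f_n \sim g_n$, since
$$
\lim_{n\to \infty} \frac{f_n}{g_n} = 2 \neq 1.
$$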
We say that a function $L:(0,\infty) \to (0, \infty)$ is \emph{slowly varying at infinity} if $\forall s>0$
$$
\lim_{t\to \infty} \frac{L(st)}{L(t)} = 1.
$$
A function $x \mapsto \phi(1/x)$ is called \emph{regularly varying at infinity} with index $a>0$ if there exists a function $L$ which is slowly varying at infinity such that
$$
\phi(\varepsilon) = \varepsilon^{-a} L \big(\tfrac{1}{\varepsilon}\big).
$$
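For example, $\phi(\varepsilon) = \varepsilon^{-a}\log(1/\varepsilon)$ is regularly varying at infinity with index $a$: taking $L(t) = \log t$, for every $s>0$,
$$
\lim_{t\to\infty} \frac{L(st)}{L(t)} = \lim_{t\to\infty} \frac{\log s + \log t}{\log t} = 1,
$$
so that $L$ is slowly varying at infinity and $\phi(\varepsilon) = \varepsilon^{-a} L\big(\tfrac{1}{\varepsilon}\big)$.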
\subsection{Gaussian Theory}
\begin{definition}
Let $E$ be a separable Banach space equipped with its cylinder $\sigma$-algebra $\cB$. A Gaussian measure $\cL$ is a Borel probability measure on $(E, \cB)$ such that the pushforward measure of each element of the dual space $E^*$ is a Gaussian random variable. Thus the measure $\cL$ is uniquely determined in terms of the covariance bilinear form $\cR:E^* \times E^* \to \bR$ by
$$
\cR[f, g]:= \int_E f(x) \cdot g(x) d\cL(x).
$$
The covariance kernel, $\cS:E^* \to E$, is defined in terms of the Pettis integral
$$
\cS[f]:= \int_E x \cdot f(x) d\cL(x).
$$
Denote by $\cH$ the Hilbert space obtained by taking the closure of $E^*$ with respect to the inner product induced by the form $\cR$. The covariance kernel has spectral representation $\cS= i i^*$ where $i$ is the compact embedding of $\cH$ into $E$.
We refer to $\cH$ as the Reproducing Kernel Hilbert Space (RKHS) of Gaussian measure $\cL$.
When the Hilbert space $\cH$ is dense in the Banach space $E$, the triple $(E, \cH, i)$ is called an Abstract Wiener space.
\end{definition}
We denote the unit ball in the RKHS norm as $\cK$. It is well known that the set $\cK$ is compact in the Banach space topology.
\begin{proposition}[Borell's Inequality]
\label{pro:BorellInequal}
Let $\Phi(x):=\int_{-\infty}^x \frac{1}{\sqrt{2\pi}} \exp( -y^2/2) dy$. Let $\cL$ be a Gaussian measure on a separable Banach space $E$. Let $\cK:=\{h\in \cH: \|h\|_\cH\leq1\}$ and let $A$ be a Borel subset of $E$. Then
$$
\cL_*( A + t\cK ) \geq \Phi\Big( t + \Phi^{-1}( \cL(A))\Big)
$$
where $\cL_*$ is the inner measure of $\cL$ and is chosen to avoid measurability issues with the set $A + t\cK$.
\end{proposition}
The proof can be found in \cite{ledoux1996isoperimetry}.
\subsection{Rough paths}
\label{subsec:RoughPaths}
Throughout this paper, we will use the notation for increments of a path $X_{s, t} = X_t - X_s$ for $s\leq t$. Rough paths were first introduced in \cite{lyons1998differential}.
We will be most closely following \cite{frizhairer2014}.
\subsubsection{Algebraic material}
We denote $T^{(2)}(\bR^d)$ to be the tensor space $\bR \oplus \bR^d \oplus ( \bR^d \otimes \bR^d)$. This has a natural vector space structure along with a non-commutative product $\boxtimes$ with unit $(1, 0,0)$ defined by
$$
(a, b, c) \boxtimes (a', b', c') = (a\cdot a', a\cdot b' + a' \cdot b, a\cdot c' + a'\cdot c + b \otimes b').
$$
For $i=0,1,2$, we denote the canonical projection $\pi_i: T^{(2)}(\bR^d) \to (\bR^d)^{\otimes i}$.
The subset $G^{(2)}(\bR^d) = \{ (a, b, c) \in T^{(2)}(\bR^d): a=1\}$ forms a non-commutative Lie group with group operation $\boxtimes$ and inverse $(1, b, c)^{-1} = (1, -b, -c + b\otimes b)$. This turns out to be the step-2 nilpotent Lie group with $d$ generators.
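The formula for the inverse can be verified directly from the definition of $\boxtimes$:
$$
(1, b, c) \boxtimes (1, -b, -c + b\otimes b) = \big(1, \; b - b, \; -c + b\otimes b + c - b\otimes b\big) = (1, 0, 0).
$$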
The subset $\fL^{(2)} (\bR^d) = \{ (a, b, c) \in T^{(2)}(\bR^d): a=0\}$ forms a Lie algebra with the Lie brackets
$$
[\fl_1, \fl_2]_{\boxtimes} = \fl_1 \boxtimes \fl_2 - \fl_2 \boxtimes \fl_1.
$$
There exist mutually inverse diffeomorphisms between $\fL^{(2)}(\bR^d)$ and $G^{(2)}(\bR^d)$ called the exponential map $\exp_\boxtimes :\fL^{(2)}(\bR^d) \to G^{(2)}(\bR^d)$ and logarithm map $\log_{\boxtimes}: G^{(2)}(\bR^d) \to \fL^{(2)}(\bR^d)$ defined by
$$
\exp_\boxtimes\Big( (0,b,c) \Big) = (1, b, c +\tfrac{1}{2} b\otimes b),
\quad
\log_\boxtimes\Big( (1, b, c) \Big) = (0, b, c - \tfrac{1}{2} b\otimes b).
$$
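A direct computation confirms that these maps are mutually inverse; for example,
$$
\log_\boxtimes\Big( \exp_\boxtimes\big( (0, b, c)\big) \Big) = \log_\boxtimes\Big( (1, b, c + \tfrac{1}{2} b\otimes b) \Big) = \big(0, b, c + \tfrac{1}{2} b\otimes b - \tfrac{1}{2} b\otimes b\big) = (0, b, c).
$$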
We define the dilation $(\delta_t)_{t>0}$ on the Lie algebra $\fL^{(2)}(\bR^d)$ to be the collection of automorphisms of $\fL^{(2)}(\bR^d)$ such that $\delta_s \delta_t = \delta_{st}$. The dilation can also be extended to the Lie group by considering $\exp_\boxtimes \circ\, \delta_t \circ \log_\boxtimes$. A homogeneous group is any Lie group whose Lie algebra is endowed with a family of dilations.
A homogeneous norm on a homogeneous group $G$ is a continuous function $\|\cdot\|:G \to \bR^+$ such that $\| g\|=0 \iff g=\rId$ and $\| \delta_t[g]\| = |t|\cdot \| g\|$. A homogeneous norm is called subadditive if $\| g_1\boxtimes g_2\|\leq \| g_1\| + \|g_2\|$ and called symmetric if $\|g^{-1}\| = \|g\|$.
When a homogeneous norm is subadditive and symmetric, it induces a left invariant metric on the group called the Carnot-Caratheodory metric, which we denote $d_{cc}$. We will often write
$$
\| \rx \|_{cc} = d_{cc}( \rId, \rx).
$$
All homogeneous norms are equivalent, so we will often shift between homogeneous norms that are most suitable for a given situation.
Examples of a homogeneous norm include
\begin{equation}
\label{eq:HomoNorm}
\| g \|_{G^{(2)}} = \sum_{A \in \cA_2} \Big| \big\langle \log_\boxtimes (g), e_A \big\rangle \Big|^{1/|A|},
\qquad
\| g \|_{G^{(2)}} = \sup_{A \in \cA_2} \Big| \big\langle \log_\boxtimes (g), e_A \big\rangle \Big|^{1/|A|}.
\end{equation}
where $\{ e_A: A\in \cA_2\}$ is a basis of the vector space $T^{(2)}(\bR^d)$.
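With the standard dilation, which acts on the level-$|A|$ component of the Lie algebra by multiplication by $t^{|A|}$, the homogeneity of these expressions can be checked directly:
$$
\Big| \big\langle \log_\boxtimes (\delta_t[g]), e_A \big\rangle \Big|^{1/|A|} = \Big| t^{|A|} \big\langle \log_\boxtimes (g), e_A \big\rangle \Big|^{1/|A|} = |t| \cdot \Big| \big\langle \log_\boxtimes (g), e_A \big\rangle \Big|^{1/|A|}.
$$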
\subsubsection{Geometric and weak geometric rough paths}
\begin{definition}
Let $\alpha\in (\tfrac{1}{3}, \tfrac{1}{2}]$. A path $\rx:[0,T] \to G^{(2)}(\bR^d)$ is called an $\alpha$-rough path if for all $s\leq t\leq u$ in $[0,T]$,
\begin{align}
\label{eq:def:rough-paths}
\rx_{s, u} = \rx_{s, t} \boxtimes \rx_{t, u}
\quad \mbox{and} \quad
\sup_{A \in \cA_2} \sup_{s, t\in [0,T]}& \frac{\langle \rx_{s, t}, e_A \rangle }{|t-s|^{\alpha |A|}}< \infty.
\end{align}
If, in addition, we have that $\rx$ satisfies
\begin{equation}
\label{eq:def:rough-paths2}
\mbox{Sym}\Big( \pi_2 \big[ \rx_{s, t} \big] \Big) = \tfrac{1}{2} \pi_1\big[ \rx_{s,t}\big] \otimes \pi_1\big[ \rx_{s,t} \big],
\end{equation}
then we say $\rx$ is a weakly geometric rough path. The set of weakly geometric rough paths is denoted $WG\Omega_\alpha(\bR^d)$.
\end{definition}
The first relation of Equation \eqref{eq:def:rough-paths} is often called the algebraic \emph{Chen's relation} and the second is referred to as the analytic \emph{regularity condition}.
\begin{definition}
For $\alpha$-rough paths $\rx$ and $\ry$, we denote the $\alpha$-H\"older rough path metric to be
\begin{equation}
\label{eq:HolderDef}
d_\alpha( \rx, \ry) = \| \rx^{-1} \boxtimes \ry\|_\alpha = \sup_{s, t\in[0,T]} \frac{\Big\| \rx_{s,t}^{-1}\boxtimes \ry_{s,t} \Big\|_{cc} }{|t-s|^\alpha}.
\end{equation}
By quotienting with respect to $\rx_0$, one can make this a norm. We use the convention that $\| \rx \|_{\alpha} = d_\alpha(\rId, \rx )$.
\end{definition}
\begin{definition}
For a path $x\in C^{1-var}([0,T]; \bR^d)$, the iterated integrals of $x$ are canonically defined using Young integration. The collection of iterated integrals of the path $x$ is called the signature of $x$ and is defined as
$$
S(x)_{s, t}:= \rId + \sum_{n=1}^\infty \int_{s\leq u_1\leq ... \leq u_n\leq t} dx_{u_1} \otimes ... \otimes dx_{u_n} \in T(\bR^d).
$$
In the same way, the truncated signature is defined by its increments
$$
S_2(x)_{s, t}:= \rId + x_{s, t} + \int_{s\leq u_1\leq u_2\leq t} dx_{u_1} \otimes dx_{u_2} \in T^{(2)}(\bR^d).
$$
The closure of the set $\{ S_2(x): x \in C^{1-var}([0,T], \bR^d)\}$ with respect to the $\alpha$-H\"older rough path metric is the collection of \emph{geometric rough paths} which we denote by $G\Omega_\alpha(\bR^d)$.
It is well known that $G\Omega_\alpha(\bR^d) \subsetneq WG\Omega_\alpha(\bR^d)$.
\end{definition}
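For paths of bounded variation, Chen's relation for the truncated signature is a consequence of the additivity of the integral: for $s\leq t\leq u$, splitting the simplex $\{s\leq u_1\leq u_2 \leq u\}$ gives
$$
\int_{s\leq u_1\leq u_2\leq u} dx_{u_1} \otimes dx_{u_2} = \int_{s\leq u_1\leq u_2\leq t} dx_{u_1} \otimes dx_{u_2} + \int_{t\leq u_1\leq u_2\leq u} dx_{u_1} \otimes dx_{u_2} + x_{s, t} \otimes x_{t, u},
$$
which is exactly the second-level component of the identity $S_2(x)_{s, u} = S_2(x)_{s, t} \boxtimes S_2(x)_{t, u}$.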
\subsubsection{The translation of rough paths}
We define the map $\#: G^2(\bR^d \oplus \bR^d) \to G^2(\bR^d)$ to be the unique homomorphism such that for $v_1, v_2\in \bR^d$, $\#[ \exp_\boxtimes(v_1 \oplus v_2)] = \exp_\boxtimes(v_1+v_2)$.
\begin{definition}
\label{dfn:TranslatedRoughPath}
Let $\alpha, \beta>0$ such that $\alpha+\beta>1$ and $\alpha \in (\tfrac{1}{3}, \tfrac{1}{2}]$. Let $(\rx, h)\in C^{\alpha}([0,T]; G^{2}(\bR^d)) \times C^{\beta}([0,T]; \bR^d)$. We define the \emph{translation of the rough path} $\rx$ by the path $h$, denoted $T^h(\rx)\in C^{\alpha}([0,T]; G^{2}(\bR^d))$ to be
$$
T^h(\rx) = \#\Big[ S_{2}(\rx\oplus h)\Big].
$$
\end{definition}
\begin{lemma}
\label{lemma:RPTranslation1}
Let $\alpha\in ( \tfrac{1}{3}, \tfrac{1}{2}]$ and let $\rx\in WG\Omega_{\alpha}(\bR^d)$. Let $p=\tfrac{1}{\alpha}$ and let $q>1$ such that $1= \tfrac{1}{p} + \tfrac{1}{q}$. Let $h:[0,T] \to \bR^d$ satisfy that
$$
\| h\|_{q, \alpha, [0,T]}:= \sup_{t, s\in[0,T]} \frac{ \| h\|_{q-var, [s, t]}}{|t-s|^{\alpha}} <\infty.
$$
Then $T^h( \rx)\in WG\Omega_\alpha(\bR^d)$ and there exists $C=C(p, q)>0$ such that
\begin{equation}
\label{eq:RPTranslation1}
\| T^h( \rx)\|_\alpha \leq C \Big( \| \rx\|_\alpha + \| h\|_{q, \alpha, [0,T]} \Big)
\end{equation}
\end{lemma}
The proof of Lemma \ref{lemma:RPTranslation1} is a simple adaptation of the ideas found in the proof of \cite{cass2013integrability}*{Lemma 3.1} and was originally set as an exercise in \cite{frizhairer2014}.
\begin{proof}
By construction from Definition \ref{dfn:TranslatedRoughPath}, writing $X:=\pi_1[\rx]$, we have
\begin{align}
\nonumber
\Big\langle T^h(\rx)_{s, t}, e_{i, j} \Big\rangle =& \Big\langle \rx_{s, t}, e_{i, j} \Big\rangle
+ \int_s^t \big\langle X_{s, r}, e_i \big\rangle \cdot d\big\langle h_r, e_j\big\rangle
+ \int_s^t \big\langle h_{s, r}, e_i \big\rangle \cdot d\big\langle X_r, e_j\big\rangle
\\
\label{eq:lemma:RPTranslation1-pf1.1}
&+ \int_s^t \big\langle h_{s, r}, e_i \big\rangle \cdot d\big\langle h_r, e_j\big\rangle.
\end{align}
Each integral of Equation \eqref{eq:lemma:RPTranslation1-pf1.1} can be defined using Young integration, so that we have
\begin{align*}
\int_s^t \big\langle X_{s, r}, e_i \big\rangle \cdot d\big\langle h_r, e_j\big\rangle \leq& C \| X \|_{p-var, [s,t]} \cdot \| h \|_{q-var, [s, t]}
\\
\int_s^t \big\langle h_{s, r}, e_i \big\rangle \cdot d\big\langle X_r, e_j\big\rangle \leq& C \| X \|_{p-var, [s,t]} \cdot \| h \|_{q-var, [s, t]}
\\
\int_s^t \big\langle h_{s, r}, e_i \big\rangle \cdot d\big\langle h_r, e_j\big\rangle \leq& C \| h \|_{q-var, [s,t]} \cdot \| h \|_{q-var, [s, t]}
\end{align*}
for some uniform constant $C>0$ dependent on $p$ and $q$. Combining these with an equivalent homogeneous norm \eqref{eq:HomoNorm} implies \eqref{eq:RPTranslation1}.
Finally, to verify Equation \eqref{eq:def:rough-paths2}, we recall that the Young integrals satisfy an integration by parts formula so that
\begin{align*}
\mbox{Sym}\Big( \pi_2[T^h(\rx)_{s, t}] \Big) =& \mbox{Sym}\Big( \pi_2[\rx_{s, t}] \Big) + \mbox{Sym}\Big( \int_s^t X_{s, r} \otimes dh_r \Big) + \mbox{Sym}\Big( \int_s^t h_{s, r} \otimes dX_r \Big)
\\
&+ \mbox{Sym}\Big( \int_s^t h_{s, r} \otimes dh_r \Big)
\\
=&\tfrac{1}{2} \bigg( \pi_1\big[ \rx_{s,t}\big] \otimes \pi_1\big[ \rx_{s,t} \big] + X_{s, t} \otimes h_{s, t} + h_{s, t} \otimes X_{s, t} + h_{s, t} \otimes h_{s, t} \bigg)
\\
=& \tfrac{1}{2} \pi_1\big[ T^h[\rx_{s,t}] \big] \otimes \pi_1\big[ T^h[\rx_{s,t}] \big].
\end{align*}
\end{proof}
\begin{lemma}
\label{lem:RPTranslation}
The homogeneous rough path metric $d_\alpha$ is $T^h$-invariant.
\end{lemma}
\begin{proof}
Using that $\#$ is a group homomorphism, we have
\begin{align*}
\Big( T^h(\rx)_{s, t} \Big)^{-1} \boxtimes T^h(\ry)_{s, t} =& \Big( \#\Big[ S_{2}(\rx \oplus h)\Big] \Big)_{s, t}^{-1} \boxtimes \# \Big[ S_{2}( \ry \oplus h)\Big]
\\
=&\#\Big[ S_{2}(\rx \oplus h)_{s, t}^{-1} \boxtimes S_{2}(\ry\oplus h)_{s, t}\Big]
\\
=&\#\Big[ S_{2}\Big( \rx^{-1} \boxtimes \ry \oplus 0\Big)_{s, t}\Big]
= \rx^{-1}_{s, t} \boxtimes \ry_{s, t}.
\end{align*}
Thus
\begin{align*}
d\Big( T^h(\rx)_{s, t}, T^h(\ry)_{s, t}\Big) =& \Big\| \big( T^h(\rx)_{s, t} \big)^{-1} \boxtimes T^h(\ry)_{s, t} \Big\|
\\
=&\Big\| \rx^{-1}_{s, t} \boxtimes \ry_{s, t} \Big\|
= d\Big( \rx_{s, t}, \ry_{s, t}\Big).
\end{align*}
\end{proof}
\subsubsection{The lift of a Gaussian process}
Gaussian processes admit a natural lift to their signature. It is shown in \cite{frizhairer2014} that one can construct the iterated integrals of a Gaussian process by approximating the process pathwise and showing that the approximations converge in mean square and almost surely. In particular, the iterated integral of a Gaussian process is an element of the second Wiener-It\^o chaos.
\begin{assumption}
\label{assumption:GaussianRegularity}
Let $\cL^W$ be the law of a $d$-dimensional, continuous centred Gaussian process with independent components and covariance operator $\cR$. We denote
\begin{align*}
&\cR^{W} \begin{pmatrix}s,& t\\ u,& v\end{pmatrix} = \bE\Big[ W_{s, t} \otimes W_{u, v} \Big] \in \bR^d \otimes \bR^d,
\quad
\cR^{W}_{s, t} = \bE\Big[ W_{s, t} \otimes W_{s, t} \Big].
\\
&\| \cR^W \|_{\varrho-var; [s,t]\times [u, v]} = \Bigg( \sup_{\substack{D \subseteq [s, t]\\ D' \subseteq [u, v]}} \sum_{\substack{i: t_i \in D \\ j: t'_j \in D'}} \Big| \cR^W\begin{pmatrix}t_i,& t_{i+1} \\ t'_{j},& t'_{j+1}\end{pmatrix} \Big|^{\varrho} \Bigg)^{\tfrac{1}{\varrho}}.
\end{align*}
We assume that there exist $\varrho \in[1, 3/2)$ and $M<\infty$ such that
$$
\| \cR^W \|_{\varrho-var; [s, t]^{\times 2}} \leq M \cdot |t-s|^{1/ \varrho}.
$$
\end{assumption}
Under Assumption \ref{assumption:GaussianRegularity}, it is well known that one can lift a Gaussian process to a Gaussian rough path taking values in $WG\Omega_\alpha(\bR^d)$.
\newpage
\section{Enhanced Gaussian inequalities}
\label{section:EnGaussInequal}
Let $\bB_\alpha(\rh, \varepsilon):= \{ \rx \in WG\Omega_\alpha (\bR^d): d_{\alpha}( \rh, \rx)<\varepsilon\}$. In this section, we prove a series of inequalities of Gaussian measures that we will use when proving the small ball probability results of Section \ref{section:SmallBallProbab}.
\subsection{Translation Inequalities}
\begin{lemma}[Anderson's inequality for Gaussian rough paths]
\label{lem:AndersonInequalityRP}
Let $\cL^W$ be a Gaussian measure and let $\cL^\rw$ be the law of the lift to the Gaussian rough path. Then $\forall \rx\in WG\Omega_{\alpha} (\bR^d)$
\begin{equation}
\label{eq:AndersonInequalityRP}
\cL^\rw \Big[ \bB_\alpha(\rx, \varepsilon) \Big] \leq \cL^\rw\Big[ \bB_\alpha( \rId, \varepsilon) \Big].
\end{equation}
\end{lemma}
\begin{proof}
See for instance \cite{lifshits2013gaussian}.
\end{proof}
\begin{lemma}[Cameron-Martin formula for rough paths]
\label{lem:CamMartinFormRP}
Let $\cL^W$ be a Gaussian measure satisfying Assumption \ref{assumption:GaussianRegularity} and let $\cL^\rw$ be the law of the lift to the Gaussian rough path. Let $h\in \cH$ and denote $\rh=S_2[h]$. Then
$$
\cL^\rw\Big[ \bB_\alpha(\rh, \varepsilon) \Big] \geq \exp\left( \frac{-\|h\|_\cH^2}{2} \right) \cL^\rw \Big[ \bB_\alpha(\rId, \varepsilon) \Big]
$$
\end{lemma}
\begin{proof}
Using that the map $W\in C^{\alpha, 0}([0,T]; \bR^d) \mapsto \rw \in WG\Omega_\alpha(\bR^d)$ is measurable, we define the pushforward measure $\cL^\rw = \cL^W \circ \rw^{-1}$.
We observe that the set $\{ y\in E: d_\alpha(\rw(y), \rId)<\varepsilon\}$ is symmetric around $0$. Applying Lemma \ref{lem:RPTranslation} and the Cameron-Martin transform (see \cite{kuelbs1994gaussian}*{Theorem 2}),
\begin{align*}
\cL^\rw \Big[ \bB_\alpha(\rh, \varepsilon)\Big] =& \cL^W\Big[ \{ x\in E: d_\alpha( \rw(x), \rh )<\varepsilon \} \Big] =\cL^W\Big[ \{ x\in E: d_\alpha( T^{-h} (\rw(x)), T^{-h}(\rh) )<\varepsilon \} \Big]
\\
=& \cL^W\Big[ \{ x\in E: d_\alpha( \rw(x-h), \rId )<\varepsilon \} \Big] = \cL^W\Big[ \{ y\in E: d_\alpha( \rw(y), \rId )<\varepsilon \}+h \Big]
\\
\geq& \exp\Big( \frac{-\|h\|_\cH^2}{2}\Big) \cL^W\Big[ \{ y\in E: d_\alpha( \rw(y), \rId )<\varepsilon \} \Big]
\\
=& \exp\Big( \frac{-\|h\|_\cH^2}{2}\Big) \cL^\rw \Big[ \bB_\alpha(\rId, \varepsilon) \Big].
\end{align*}
\end{proof}
\begin{definition}
\label{definition:Freidlin-Wentzell_Function}
Let $\cL^W$ be a Gaussian measure and let $\cL^\rw$ be the law of the lift to the Gaussian rough path. We define the Freidlin-Wentzell Large Deviations rate function by
\begin{align*}
I^\rw(\rx):=& \begin{cases}
\frac{\| \pi_1(\rx)\|_\cH^2}{2}, & \mbox{ if } \pi_1(\rx) \in \cH \\
\infty, & \mbox{otherwise.}
\end{cases}
\\
I^\rw(\rx, \varepsilon):=& \inf_{d_{\alpha; [0,T]}(\rx, \ry) <\varepsilon} I^\rw(\ry).
\end{align*}
\end{definition}
The following corollary is similar to a result first proved in \cite{li2001gaussian} for Gaussian measures.
\begin{corollary}
\label{cor:CamMartinFormRP}
Let $\cL^W$ be a Gaussian measure satisfying Assumption \ref{assumption:GaussianRegularity} and let $\cL^\rw$ be the law of the lift to the Gaussian rough path. Then for $\ry \in \overline{S_2(\cH)}^{d_\alpha}$ and $a\in[0,1]$
$$
\cL^\rw \Big[ \bB_\alpha(\ry, \varepsilon)\Big] \geq \exp\Big( -I^\rw(\ry, a\varepsilon)\Big) \cL^\rw\Big[ \bB_\alpha \big(\rId, (1-a)\varepsilon\big)\Big]
$$
\end{corollary}
\begin{proof}
Using that the lift of the RKHS is dense in the support of the Gaussian rough path, we know that there must exist at least one $h\in \cH$ such that $d_\alpha( \rh, \ry)<a\varepsilon$ for any choice of $a\in[0,1]$. Further, by nesting of sets
$$
\cL^\rw\Big[ \bB_\alpha(\ry, \varepsilon) \Big] \geq \cL^\rw\Big[ \bB_\alpha\big(\rh, (1-a)\varepsilon\big) \Big].
$$
Now apply Lemma \ref{lem:CamMartinFormRP} and take an infimum over all possible choices of $\rh$.
\end{proof}
Finally, we recall a useful inequality stated and proved in \cite{cass2013integrability}.
\begin{lemma}[Borell's rough path inequality]
\label{lem:BorellInequalRP}
Let $\cL^W$ be a Gaussian measure and let $\cL^\rw$ be the law of the lift to the Gaussian rough path. Let $\cK\subset \cH$ be the unit ball with respect to $\| \cdot \|_\cH$ and denote $\rk=\{ \rh=S_2[h]: h\in \cK\}$.
Let $A$ be a Borel subset of $WG\Omega_\alpha (\bR^d)$, $\lambda>0$ and define
$$
T[A,\delta_\lambda(\rk)]:= \Big\{ T^{h}( \rx): \rx\in A, \rh\in \delta_\lambda(\rk) \Big\}.
$$
Then, denoting by $\cL_*^\rw$ the inner measure, we have
$$
\cL_*^\rw\Big[ T[A, \delta_\lambda (\rk)] \Big] \geq \Phi\Big( \lambda + \Phi^{-1}( \cL^\rw[A])\Big).
$$
\end{lemma}
\subsection{Gaussian correlation inequalities}
Given an abstract Wiener space $(E, \cH, i)$, we consider the element $h\in \cH$ as a random variable on the probability space $(E, \cB(E), \cL)$ where $\cB(E)$ is the cylindrical $\sigma$-algebra generated by the elements of $E^*$. When $E$ is separable, $\cB(E)$ is equal to the Borel $\sigma$-algebra.
The following Lemma can be found in \cite{stolz1996some}*{Lemma 3.2}. We omit the proof.
\begin{lemma}
\label{lemma:Stolz_Lem}
Let $(d_k)_{k \in \bN_0}$ be a non-negative sequence such that $d_0 = 1$ and
$$
\sum_{k=1}^\infty d_k = d <\infty.
$$
Let $(E, \cH, i)$ be an abstract Wiener space with Gaussian measure $\cL$ and let $h_1, \ldots, h_n \in \cH$ be such that
$$
\| h_i\|_{\cH} = 1,
\quad
\Big| \big\langle h_i , h_j \big\rangle_\cH \Big| \leq d_{|i-j|}.
$$
Then $\exists M_1, M_2>0$ depending on $d$ such that $\forall n\in \bN$,
$$
\bP\bigg[ \tfrac{1}{n} \sum_{i=1}^n |h_i| \leq \frac{1}{M_1} \bigg] \leq \exp\Big( \tfrac{-n}{M_2} \Big).
$$
\end{lemma}
This next lemma, often referred to as \v{S}id\'ak's Lemma, was proved independently in \cite{sidak1968multivariate} and \cite{khatri1967certain}.
\begin{lemma}
\label{Lemma:Sidak1}
Let $(E, \cH, i)$ be an abstract Wiener space with Gaussian measure $\cL$ and let $I$ be a countable index. Suppose $\forall j\in I$ that $h_j\in \cH$ and $\varepsilon_j>0$. Then for any $j\in I$
\begin{equation}
\label{eq:Lemma:Sidak1.1}
\cL\bigg[ \bigcap_{i\in I} \Big\{|h_i|<\varepsilon_i\Big\} \bigg] \geq \cL\bigg[ \Big\{ |h_j|<\varepsilon_j \Big\}\bigg] \cL\bigg[ \bigcap_{i\in I\backslash\{j\}} \Big\{ |h_i|<\varepsilon_i\Big\} \bigg].
\end{equation}
Equivalently, for $h'_k\in \cH$ such that $\| h_k\|_\cH = \| h'_k\|_\cH$ and $\forall j\in I\backslash \{k\}, \langle h_j, h'_k\rangle_\cH = 0$ then
\begin{equation}
\label{eq:Lemma:Sidak1.2}
\cL\bigg[ \bigcap_{j\in I} \Big\{|h_j|<\varepsilon_j\Big\} \bigg] \geq \cL\bigg[ \bigcap_{j\in I\backslash\{k\}} \Big\{ |h_j|<\varepsilon_j\Big\} \cap \Big\{ |h'_k|<\varepsilon_k\Big\} \bigg].
\end{equation}
\end{lemma}
For an elegant proof, see \cite{bogachev1998gaussian}*{Theorem 4.10.3}. In particular, given a Gaussian process $W$, a countable collection of intervals $(s_j, t_j)_{j\in I}$ and bounds $(\varepsilon_j)_{j\in I}$, we have
\begin{align*}
\bP\bigg[ \bigcap_{j\in I} \Big\{ | W_{s_j, t_j}|<\varepsilon_j \Big\} \bigg] \geq& \bP\bigg[ \Big\{ | W_{s_1, t_1}|<\varepsilon_1 \Big\} \bigg] \cdot \bP\bigg[ \bigcap_{\substack{j\in I\\ j\neq 1}} \Big\{ | W_{s_j, t_j}|<\varepsilon_j \Big\} \bigg]
\\
\geq& \prod_{j\in I} \bP\bigg[ \Big\{ | W_{s_j, t_j}|<\varepsilon_j \Big\} \bigg].
\end{align*}
Thus the probability that the increments of a Gaussian process over a sequence of intervals all lie within given slabs is minimised when the corresponding Gaussian random variables are all independent.
This is an example of the now proved Gaussian correlation conjecture (first proved in \cite{royen2014simple}), which states
\begin{equation}
\label{eq:GaussCorrConj}
\bP\bigg[ \bigcap_{j\in I} \Big\{ | W_{s_j, t_j}|<\varepsilon_j \Big\} \bigg] \geq \bP\bigg[ \bigcap_{j\in I_1} \Big\{ | W_{s_j, t_j}|<\varepsilon_j \Big\} \bigg] \cdot \bP\bigg[ \bigcap_{j\in I_2} \Big\{ | W_{s_j, t_j}|<\varepsilon_j \Big\} \bigg]
\end{equation}
where $I_1 \cup I_2 = I$ and $I_1 \cap I_2 =\emptyset$.
Given a pair of abstract Wiener spaces $(E_1, \cH_1, i_1)$ and $(E_2, \cH_2, i_2)$, we can define a Gaussian measure on the Cartesian product $E_1 \oplus E_2$ which has RKHS $\cH_1 \oplus \cH_2$ by taking the product measure $\cL_1 \times \cL_2$ over $(E_1 \oplus E_2, \cB(E_1) \otimes \cB(E_2)) $.
We define the tensor space $E_1 \otimes_\varepsilon E_2$ of $E_1$ and $E_2$ to be the closure of the algebraic tensor $E_1 \otimes E_2$ with respect to the injective tensor norm
$$
\varepsilon(x):= \sup\Big\{ | (f \otimes g)(x)|: f\in E_1^*, g\in E_2^*, \|f\|_{E_1^*} = \| g\|_{E_2^*} = 1\Big\}.
$$
Let $f\in (E_1 \otimes_\varepsilon E_2)^*$. Then the map $E_1 \oplus E_2 \ni (x, y) \mapsto f(x\otimes y)$ is measurable and the pushforward of $f$ with respect to the Gaussian measure is an element of the second Wiener It\^o chaos. In the case where the tensor product is of two Hilbert spaces, there is no question over the choice of the norm for $\cH_1 \otimes \cH_2$.
A problem similar to this was first studied in \cite{ledoux2002large}. We emphasise that our result is considerably more general.
\begin{lemma}
\label{Lemma:Sidak2}
Let $(E_1, \cH_1, i_1)$ and $(E_2, \cH_2, i_2)$ be abstract Wiener spaces with Gaussian measures $\cL_1$ and $\cL_2$. Let $\cL_1 \times \cL_2$ be the product measure over the direct sum $E_1 \oplus E_2$. Let $I_1, I_2, I_3$ be countable indexes. Suppose that $\forall j\in I_1, h_{j,1} \in \cH_1$ and $\varepsilon_{j,1}>0$, $\forall j\in I_2, h_{j,2} \in \cH_2$ and $\varepsilon_{j, 2}>0$, and $\forall j\in I_3, h_{j,3} \in \cH_1 \otimes \cH_2$ and $\varepsilon_{j, 3}>0$. Additionally, denote $\hat{\otimes} :E_1 \oplus E_2 \to E_1 \otimes_\varepsilon E_2$ by $\hat{\otimes}(x, y) = x\otimes y$. Then
\begin{align*}
(\cL_1 \times \cL_2)&\bigg[ \bigcap_{j\in I_1} \Big\{ |h_{j,1}|<\varepsilon_{j, 1}\Big\} \bigcap_{j\in I_2} \Big\{ |h_{j, 2}|< \varepsilon_{j, 2}\Big\} \bigcap_{j\in I_3} \Big\{ |h_{j, 3}(\hat{\otimes})|<\varepsilon_{j,3}\Big\} \bigg]
\\
\geq &\prod_{j \in I_1} \cL_1\bigg[ \Big\{ |h_{j, 1}|<\varepsilon_{j, 1}\Big\}\bigg] \cdot \prod_{j\in I_2} \cL_2\bigg[\Big\{ |h_{j, 2}|<\varepsilon_{j, 2}\Big\}\bigg] \cdot \prod_{j \in I_3} (\cL_1\times \cL_2)\bigg[ \Big\{ |h_{j,3}(\hat{\otimes})|<\varepsilon_{j, 3}\Big\}\bigg]
\end{align*}
\end{lemma}
\begin{proof}
When $I_3=\emptyset$, Lemma \ref{Lemma:Sidak2} follows immediately by applying Lemma \ref{Lemma:Sidak1}. When $I_3\neq \emptyset$, the bilinear forms $h_{j, 3}(\hat{\otimes})$ are not bounded on $E_1 \oplus E_2$ and so we cannot immediately apply Lemma \ref{Lemma:Sidak1} (they are, however, bounded on the space $E_1 \otimes_\varepsilon E_2$).
However, for fixed $y\in E_2$ the functional $x\mapsto h( x\otimes y)$ is linear, and for fixed $x\in E_1$ the functional $y\mapsto h( x\otimes y)$ is linear (though neither is necessarily bounded). Thus, by the definition of the product measure, we have
\begin{align*}
(\cL_1 \times \cL_2)&\bigg[ \bigcap_{j\in I_1} \Big\{ |h_{j,1}|<\varepsilon_{j, 1}\Big\} \bigcap_{j\in I_2} \Big\{ |h_{j, 2}|< \varepsilon_{j, 2}\Big\} \bigcap_{j\in I_3} \Big\{ |h_{j, 3}(\hat{\otimes})|<\varepsilon_{j,3}\Big\} \bigg]
\\
=&\int_{E_2}\int_{E_1} \prod_{j\in I_1} \1_{\{|h_{j,1}|<\varepsilon_{j, 1} \}}(x) \cdot \prod_{j\in I_2} \1_{\{|h_{j,2}|<\varepsilon_{j, 2}\}}(y) \cdot \prod_{j\in I_3} \1_{\{|h_{j,3}(\hat{\otimes })|<\varepsilon_{j,3}\}}(x,y) d\cL_1(x) d\cL_2(y)
\\
\geq&\int_{E_2}\prod_{j\in I_2} \1_{\{|h_{j,2}|<\varepsilon_{j, 2}\}}(y) \cdot \int_{E_1} \prod_{j\in I_1} \1_{\{|h'_{j,1}|<\varepsilon_{j, 1} \}}(x) \cdot \prod_{j\in I_3} \1_{\{|h'_{j,3}(\hat{\otimes })|<\varepsilon_{j,3}\}}(x,y) d\cL_1(x) d\cL_2(y)
\end{align*}
where for each $j\in I_1$, $\|h'_{j, 1}\|_{\cH_1} = \| h_{j, 1}\|_{\cH_1}$; for each $j\in I_3$ and fixed $y\in E_2$, $\|h'_{j,3}(\cdot \otimes y)\|_{\cH_1} = \|h_{j,3}(\cdot \otimes y)\|_{\cH_1}$; and the vectors $\{h'_{j, 1}\}_{j\in I_1}\cup \{h'_{j, 3}(\cdot \otimes y)\}_{j\in I_3}$ are pairwise orthogonal in $\cH_1$. This comes from applying Equation \eqref{eq:Lemma:Sidak1.2} from Lemma \ref{Lemma:Sidak1}.
Similarly, swapping the order of integration and repeating yields
\begin{align*}
\geq&\int_{E_1}\prod_{j\in I_1} \1_{\{|h'_{j,1}|<\varepsilon_{j, 1} \}}(x) \cdot \int_{E_2} \prod_{j\in I_2} \1_{\{|h'_{j,2}|<\varepsilon_{j, 2}\}}(y) \cdot \prod_{j\in I_3} \1_{\{|h''_{j,3}(\hat{\otimes })|<\varepsilon_{j,3}\}}(x,y) d\cL_2(y) d\cL_1(x)
\\
\geq& \prod_{j \in I_1} \cL_1\bigg[ \Big\{ |h'_{j, 1}|<\varepsilon_{j, 1}\Big\}\bigg] \cdot \prod_{j\in I_2} \cL_2\bigg[\Big\{ |h'_{j, 2}|<\varepsilon_{j, 2}\Big\}\bigg] \cdot \prod_{j \in I_3} (\cL_1\times \cL_2)\bigg[ \Big\{ |h''_{j,3}(\hat{\otimes})|<\varepsilon_{j, 3}\Big\}\bigg]
\end{align*}
where for each $j\in I_2$, $\|h'_{j,2}\|_\cH = \|h_{j,2}\|_\cH$; for each $j\in I_3$ and $x\in E_1$ fixed, $\| h'_{j, 3}(x \otimes \cdot)\|_\cH = \| h''_{j,3}(x \otimes \cdot)\|_\cH$; and the vectors $\{ h'_{j, 2}\}_{j\in I_2} \cup \{h''_{j,3}(x \otimes \cdot)\}_{j\in I_3}$ are orthonormal in $\cH$.
\end{proof}
In fact, rather than dividing this intersection of sets into a product of probabilities completely (as will be necessary later in this paper), we could have used Equation \eqref{eq:GaussCorrConj} to divide the intersection into a product over any partition into two intersections. We do not state this more general result, both to avoid further complicating already challenging notation and because there is no need for such a result in Section \ref{section:SmallBallProbab}.
\begin{proposition}[\v{S}id\'ak's Lemma for higher order Wiener-It\^o chaos elements.]
\label{pro:Sidak}
Let $m$ be a positive integer. Let $(E_1, \cH_1, i_1)$, ..., $(E_m, \cH_m, i_m)$ be $m$ abstract Wiener spaces with Gaussian measures $\cL_1$, ..., $\cL_m$. Let $\cL_1 \times ... \times \cL_m$ be the product measure over the direct sum $E_1 \oplus ... \oplus E_m$. Let $I_1$, $I_2$, ..., $I_m$ be $m$ countable index sets. Suppose that for $l\in\{ 1, ..., m\}$, $\forall j\in I_l$
$$
h_{j, l} \in \bigcup_{\substack{k_1, ..., k_l\\ k_1 \neq ... \neq k_l}} \cH_{k_1} \otimes ... \otimes \cH_{k_l}, \quad \varepsilon_{j, l}>0.
$$
Next, define
$$
\hat{\otimes}_l:E_{k_1} \oplus ... \oplus E_{k_l} \to E_{k_1} \otimes_\varepsilon ... \otimes_\varepsilon E_{k_l}, \quad \hat{\otimes}_l(x_{k_1}, ..., x_{k_l}) := x_{k_1} \otimes ... \otimes x_{k_l}.
$$
Then
\begin{align*}
\Big(\cL_1 \times ... \times \cL_m\Big)\Bigg[ \bigcap_{l=1}^m \bigcap_{j\in I_l} \Big\{ |h_{j, l}(\hat{\otimes}_l)|<\varepsilon_{j, l}\Big\} \Bigg] \geq \prod_{l=1}^m\prod_{j\in I_l} \Big( \cL_1 \times ... \times \cL_m\Big) \Bigg[ \Big\{ |h_{j, l}(\hat{\otimes}_l)|<\varepsilon_{j, l}\Big\} \Bigg]
\end{align*}
\end{proposition}
\begin{proof}
This follows by extensive application of the methods of Lemma \ref{Lemma:Sidak2} and Equation \eqref{eq:Lemma:Sidak1.2}.
\end{proof}
\section{Small ball probabilities for enhanced Gaussian processes}
\label{section:SmallBallProbab}
The first result, the main endeavour of this paper, demonstrates that for Gaussian rough paths the SBPs cannot decay faster than a rate determined by the regularity of the covariance.
\begin{theorem}
\label{Thm:SmallBallProbab}
Let $\cL^W$ be a Gaussian measure satisfying Assumption \ref{assumption:GaussianRegularity} for some $\varrho\in[1, 3/2)$ and let $\rw$ be the lifted Gaussian rough path. Then for $\tfrac{1}{3} <\alpha<\tfrac{1}{2\varrho}$ we have
\begin{equation}
\label{eq:ThmSmallBallProbab}
\fB(\varepsilon):=-\log\Big( \bP\Big[ \| \rw \|_\alpha <\varepsilon\Big] \Big) \lesssim \varepsilon^{\frac{-1}{\tfrac{1}{2\varrho} - \alpha}}.
\end{equation}
\end{theorem}
Secondly, we demonstrate that provided the covariance of a Gaussian process is adequately irregular, the small ball probabilities cannot decay slower than this same rate.
\begin{proposition}
\label{pro:SmallBallProbab-Lower}
Let $\cL^W$ be a Gaussian measure and for $s, t\in [0,T]$, let $\bE\Big[ \big| W_{s, t} \big|^2 \Big] = \sigma^2\Big( \big| t-s \big| \Big)$.
Suppose that $\exists h>0$ such that $h\leq T$ and
\begin{enumerate}
\item $\exists C_1>0$ such that $\forall \tau \in [0,h)$,
\begin{equation}
\label{eq:Thm:SmallBallProbab-Lower-1}
\sigma^2\Big( \tau \Big) \geq C_1 \cdot |\tau|^{\tfrac{1}{\varrho}}.
\end{equation}
\item Suppose that there exists $0<C_2<4$ such that for $\tau \in [0,\tfrac{h}{2})$,
\begin{equation}
\label{eq:Thm:SmallBallProbab-Lower-2}
\sigma^2\Big( 2\tau\Big) \leq C_2 \cdot \sigma^2\Big( \tau\Big).
\end{equation}
\item Suppose that $\sigma^2$ is three times differentiable and there exists a constant $C_3>0$ such that $\forall \tau \in [0,h)$,
\begin{equation}
\label{eq:Thm:SmallBallProbab-Lower-3}
\Big| \nabla^3 \big[ \sigma^2 \big] \Big( \tau \Big) \Big| \leq \frac{C_3}{\tau^{3 - \tfrac{1}{\varrho}}}.
\end{equation}
\end{enumerate}
Then
$$
-\log\Big( \bP\Big[ \| W \|_\alpha < \varepsilon \Big] \Big) \gtrsim \varepsilon^{\tfrac{-1}{\tfrac{1}{2\varrho} - \alpha}} .
$$
\end{proposition}
\begin{theorem}
\label{Thm:SmallBallProbab-Both}
Let $\tfrac{1}{3} <\alpha<\tfrac{1}{2\varrho}$. Let $\cL^W$ be a Gaussian measure and $\forall s, t\in [0,T]$, let $\bE\Big[ \big| W_{s, t} \big|^2 \Big] = \sigma^2\Big( \big| t-s \big| \Big)$.
Suppose that $\exists h>0$ and $c_1, c_2>0$ such that $\forall \tau \in [0,h)$, $\sigma^2(\tau)$ is convex and
\begin{equation}
\label{eq:Thm:SmallBallProbab-Both}
c_1 \cdot |\tau|^{\tfrac{1}{\varrho}} \leq \sigma^2 \Big( \tau \Big) \leq c_2 \cdot | \tau |^{\tfrac{1}{\varrho}}.
\end{equation}
Then $\cR^W$ satisfies Assumption \ref{assumption:GaussianRegularity}. Suppose additionally that $\frac{c_2 \cdot 2^{\tfrac{1}{\varrho}}}{c_1}<4$. Then Equation \eqref{eq:Thm:SmallBallProbab-Lower-2} is satisfied.
Additionally, if $\sigma^2$ satisfies Equation \eqref{eq:Thm:SmallBallProbab-Lower-3}, then
$$
\fB(\varepsilon) \approx \varepsilon^{\frac{-1}{\tfrac{1}{2\varrho} - \alpha}}.
$$
\end{theorem}
\begin{example}
Fractional Brownian motion is the Gaussian process with covariance
$$
\bE\Big[ W^{H}_t \otimes W^{H}_s\Big] = \tfrac{I_d}{2}\Big( |t|^{2H} + |s|^{2H} - |t-s|^{2H}\Big),
$$
where $I_d$ is the $d$-dimensional identity matrix. It is well known that the covariance of fractional Brownian motion satisfies
$$
\bE\Big[ \big| W^{H}_{s, t} \big|^2 \Big] = d \cdot \big| t-s \big|^{2H},
$$
so that the assumptions of Theorem \ref{Thm:SmallBallProbab-Both} are satisfied. The small ball probabilities of Fractional Brownian motion were studied in \cite{kuelbs1995small} with respect to the H\"older norm using that it has stationary increments, and Theorem \ref{Thm:SmallBallProbab-Both} extends these results to enhanced fractional Brownian motion when $H \in (\tfrac{1}{3}, \tfrac{1}{2})$.
Further, the upper and lower bounds of Equation \eqref{eq:Thm:SmallBallProbab-Both} are only required locally around 0, so these results also apply for the fractional Brownian bridge (which fails to satisfy \eqref{eq:Thm:SmallBallProbab-Both} for $\tau>T/2$).
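As a concrete check, under the identification $\varrho = \tfrac{1}{2H}$ we have $\sigma^2(\tau) = d \cdot \tau^{\tfrac{1}{\varrho}}$, so Equation \eqref{eq:Thm:SmallBallProbab-Both} holds with $c_1 = c_2 = d$, and
\begin{align*}
\frac{c_2 \cdot 2^{\tfrac{1}{\varrho}}}{c_1} = 2^{2H} < 4, \qquad
\Big| \nabla^3 \big[ \sigma^2 \big] \Big( \tau \Big) \Big| = 2H(1-2H)(2-2H) \cdot d \cdot \tau^{\tfrac{1}{\varrho} - 3},
\end{align*}
so that Equations \eqref{eq:Thm:SmallBallProbab-Lower-2} and \eqref{eq:Thm:SmallBallProbab-Lower-3} are also satisfied.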
\end{example}
\subsection{Preliminaries}
Firstly, we address a method for discretising the rough path H\"older norm. To the best of the authors' knowledge, this result has not previously been stated in the framework of rough paths. The proof is an adaptation of the tools used in \cite{kuelbs1995small}*{Theorem 2.2}.
\begin{lemma}[Discretisation of Rough path norms]
\label{lemma:DiscretisationNorm}
Let $\rw \in WG\Omega_\alpha(\bR^d)$ be a rough path. Then for any $0<\varepsilon<T$ we have
\begin{align*}
\| \rw \|_\alpha &\leq \max\Bigg( 2\sum_{l=1}^\infty \sup_{ i=1, ..., 2^l} \frac{\| \rw_{(i-1)T2^{-l}, iT2^{-l}} \|_{cc}}{\varepsilon^\alpha} ,
\\
& \qquad \qquad 3 \sup_{j\in \bN_0 } \sup_{ i=0, ..., \left\lfloor \tfrac{2^j(T-\varepsilon)}{\varepsilon}\right\rfloor } \sum_{l=j+1}^\infty \sup_{m=1, ..., 2^{l-j}} \frac{ \| \rw_{(m-1)2^{-l} + i \varepsilon 2^{-j}, m2^{-l} + i \varepsilon 2^{-j}} \|_{cc} }{ \varepsilon^\alpha 2^{-\alpha(j+1)} } \Bigg).
\end{align*}
\end{lemma}
\begin{proof}
Let $0< \varepsilon <T$.
\begin{equation}
\label{eq:lemma:DiscretisationNorm1}
\| \rw\|_\alpha \leq \sup_{\substack{ s, t\in [0,T]\\ |t-s|\geq\varepsilon}} \frac{ \| \rw_{s, t}\|_{cc}}{|t-s|^{\alpha}} \bigvee \sup_{\substack{ s, t\in[0,T] \\ |t-s|<\varepsilon}} \frac{ \| \rw_{s, t}\|_{cc}}{|t-s|^{\alpha}}
\leq \sup_{s\in[0,T]} \frac{2 \| \rw_{0,s}\|_{cc}}{\varepsilon^\alpha} \bigvee \sup_{\substack{0\leq s \leq T\\ 0\leq t \leq \varepsilon \\ |s+t|<T}} \frac{\| \rw_{s, s+t}\|_{cc}}{ |t|^\alpha}
\end{equation}
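The first bound here uses the group structure of the rough path: for $|t-s|\geq \varepsilon$, writing $\rw_{s, t} = \rw_{0,s}^{-1} \boxtimes \rw_{0,t}$ and using that the Carnot-Carath\'eodory norm is symmetric and sub-additive,
$$
\frac{\| \rw_{s, t}\|_{cc}}{|t-s|^\alpha} \leq \frac{\| \rw_{0,s}\|_{cc} + \| \rw_{0,t}\|_{cc}}{\varepsilon^\alpha} \leq \sup_{u\in[0,T]} \frac{2\| \rw_{0,u}\|_{cc}}{\varepsilon^\alpha},
$$
while the second bound follows by substituting $t = s+t'$ with $t'\in[0,\varepsilon]$.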
Firstly, writing $s\in[0,T]$ as a sum of dyadics and exploiting the sub-additivity of the Carnot-Carath\'eodory norm, we get
\begin{equation}
\label{eq:lemma:DiscretisationNorm1.1}
\| \rw_{0,s}\|_{cc} \leq \sum_{l=1}^\infty \sup_{i=1, ..., 2^l} \| \rw_{(i-1)T2^{-l}, iT2^{-l}} \|_{cc}.
\end{equation}
Hence
\begin{equation}
\label{eq:lemma:DiscretisationNorm1.2}
\sup_{s\in[0,T]} \frac{2 \| \rw_{0,s}\|_{cc}}{\varepsilon^\alpha} \leq \sum_{l=1}^\infty \sup_{i=1, ..., 2^l} \frac{ 2\| \rw_{(i-1)T2^{-l}, iT2^{-l}} \|_{cc}}{\varepsilon^\alpha} .
\end{equation}
Secondly,
\begin{align*}
\sup_{\substack{0\leq s \leq T\\ 0\leq t \leq \varepsilon}} \frac{\| \rw_{s, s+t}\|_{cc}}{ |t|^\alpha} \leq& \sup_{s\in [0,T]} \max_{j\in \bN_0} \sup_{\varepsilon 2^{-j-1}\leq t<\varepsilon 2^{-j}} \frac{\| \rw_{s, s+t}\|_{cc}}{ |t|^\alpha}
\\
\leq& \max_{j\in \bN_0} \sup_{s\in [0,T]} \sup_{0<t<\varepsilon 2^{-j}} \frac{\| \rw_{s, s+t}\|_{cc}}{ |\varepsilon|^\alpha \cdot 2^{-\alpha(j+1)}}
\\
\leq& \max_{j\in \bN_0} \max_{i=0, ..., \left\lfloor \tfrac{2^j(T-\varepsilon)}{\varepsilon}\right\rfloor } \sup_{0<t<\varepsilon 2^{-j}} \frac{3 \| \rw_{i\varepsilon 2^{-j}, i\varepsilon 2^{-j}+t}\|_{cc}}{ |\varepsilon|^\alpha \cdot 2^{-\alpha(j+1)}}.
\end{align*}
Then, as with Equation \eqref{eq:lemma:DiscretisationNorm1.1} we have that for $t\in (0, \varepsilon 2^{-j})$,
$$
\| \rw_{i\varepsilon 2^{-j}, i\varepsilon 2^{-j}+t}\|_{cc} \leq \sum_{l=j+1}^\infty \sup_{m=1, ..., 2^{l-j}} \| \rw_{(m-1)2^{-l} + i \varepsilon 2^{-j}, m2^{-l} + i \varepsilon 2^{-j}} \|_{cc}.
$$
Hence
\begin{equation}
\label{eq:lemma:DiscretisationNorm1.3}
\sup_{\substack{0\leq s \leq T\\ 0\leq t \leq \varepsilon \\ |s+t|<T}} \frac{\| \rw_{s, s+t}\|_{cc}}{ |t|^\alpha} \leq \sup_{j\in \bN_0 } \sup_{ i=0, ..., \left\lfloor \tfrac{2^j(T-\varepsilon)}{\varepsilon}\right\rfloor } \sum_{l=j+1}^\infty \sup_{m=1, ..., 2^{l-j}} \frac{ 3 \| \rw_{(m-1)2^{-l} + i \varepsilon 2^{-j}, m2^{-l} + i \varepsilon 2^{-j}} \|_{cc} }{ \varepsilon^\alpha 2^{-\alpha(j+1)} }.
\end{equation}
Combining Equation \eqref{eq:lemma:DiscretisationNorm1} with Equation \eqref{eq:lemma:DiscretisationNorm1.2} and Equation \eqref{eq:lemma:DiscretisationNorm1.3} yields the result.
\end{proof}
\subsection{Proof of Theorem \ref{Thm:SmallBallProbab}}
The proof of the upper bound is the main contribution of this section.
\begin{proof}
Let $n_0$ be a positive integer such that $\varepsilon^{-1} \leq 2^{n_0} \leq 2\varepsilon^{-1}$ and denote $\beta = \tfrac{1}{2\varrho} - \alpha$ for brevity. Define
\begin{align}
\nonumber
\varepsilon_l^{(1)}:=& \Big( \frac{3}{2} \Big)^{\tfrac{-|l-n_0|}{2\varrho}} \varepsilon^{\tfrac{1}{2\varrho}} \cdot \frac{(1-2^{-\beta/2})}{4},
\\
\label{eq:ThmSmallBallProbab3.2}
\varepsilon_{j,l}^{(2)}:=& \frac{\varepsilon^{\beta} 2^{\tfrac{-l}{2\varrho}} }{3} \cdot \frac{2^{\beta(l+j)/2} (1-2^{-\beta/2})}{2^{-\alpha(j+1)} }.
\end{align}
Observe that these satisfy the properties
\begin{align*}
\sum_{l=1}^\infty \varepsilon_l^{(1)} \leq \frac{ \varepsilon^{\tfrac{1}{2\varrho}} }{2} \quad\textrm{and}\quad
\sum_{l=j+1}^\infty \varepsilon_{j,l}^{(2)} \leq \frac{ \varepsilon^\beta }{3}.
\end{align*}
Therefore, using Lemma \ref{lemma:DiscretisationNorm} gives the lower bound
\begin{align}
\nonumber
&\bP\Big[ \| \rw\|_\alpha \leq \varepsilon^\beta \Big]
\geq \bP\Bigg[ \sup_{i=1, ..., 2^l} \| \rw_{(i-1)T2^{-l}, iT2^{-l}} \|_{cc} \leq \varepsilon_l^{(1)} \quad \forall l\in \bN,
\\
\label{eq:Thm:SmallBallProbab1}
& \sup_{i=0, ..., \left\lfloor \tfrac{2^{j}(T-\varepsilon)}{\varepsilon}\right\rfloor} \sup_{m=1, ..., 2^{l-j}} \frac{\| \rw_{(m-1)2^{-l}\varepsilon + i2^{-j}\varepsilon, m2^{-l}\varepsilon + i2^{-j} \varepsilon} \|_{cc}}{ \varepsilon^\alpha 2^{-\alpha(j+1)}} \leq \varepsilon^{(2)}_{j, l} \quad \forall l\geq j+1, j,l\in \bN_0 \Bigg].
\end{align}
Next, using that all homogeneous norms are equivalent (Equation \eqref{eq:HomoNorm}), there exists a constant $c(d)$, dependent only on $d$, such that
\begin{align*}
\| \rw_{s, t}\|_{cc} \leq c(d) \cdot \sup_{A\in \cA_2} \Big| \langle \log_{\boxtimes} (\rw_{s, t}), e_A\rangle \Big|^{1/|A|}.
\end{align*}
Expanding the supremum over the words of length at most 2 gives the representation
\begin{align*}
\| \rw_{s, t}\|_{cc} \leq& c(d) \sup_{p=1, ..., d} \Big| \Big\langle \rw_{s, t}, e_{p} \Big\rangle \Big|
\bigvee \sup_{\substack{p,q=1, ..., d\\ p\neq q}} \Big| \Big\langle \rw_{s, t}, e_{p, q} \Big\rangle \Big|^{1/2}.
\end{align*}
Applying Proposition \ref{pro:Sidak} to this yields
\begin{align}
\nonumber
\bP\Big[& \| \rw\|_\alpha \leq \varepsilon^\beta\Big]
\\
\nonumber
\geq& \Bigg\{ \prod_{l=1}^\infty \prod_{i=1}^{2^l}
\Bigg(
\prod_{p=1}^d \bP\Big[ \Big| \Big\langle \rw_{(i-1)T2^{-l}, iT2^{-l}}, e_{p}\Big\rangle \Big| \leq \tfrac{\varepsilon_l^{(1)}}{c(d)} \Big]
\\
\nonumber
&\quad \cdot \prod_{\substack{p,q=1\\ p\neq q}}^d \bP\Big[ \Big| \Big\langle \rw_{(i-1)T2^{-l}, iT2^{-l} }, e_{p,q}\Big\rangle \Big| \leq \Big(\tfrac{\varepsilon_l^{(1)}}{c(d)}\Big)^2 \Big] \Bigg) \Bigg\}
\\
\nonumber
&\times\Bigg\{
\prod_{j=0}^\infty \prod_{l=j+1}^\infty \prod_{i=0}^{\left\lfloor 2^j(T-\varepsilon)/\varepsilon \right\rfloor} \prod_{m=1}^{2^{l-j}} \Bigg(
\prod_{p=1}^d \bP\Big[ \Big| \Big\langle \rw_{\varepsilon (m-1)2^{-l} + \varepsilon i2^{-j}, \varepsilon m2^{-l} + \varepsilon i2^{-j}}, e_{p}\Big\rangle \Big| \leq \tfrac{\varepsilon_{j,l}^{(2)}\cdot \varepsilon^\alpha 2^{-\alpha(j+1)}}{c(d)} \Big]
\\
\label{eq:ThmSmallBallProbab1.1}
&\quad \cdot \prod_{\substack{p,q=1\\p\neq q}}^d \bP\Big[ \Big| \Big\langle \rw_{\varepsilon (m-1)2^{-l} + \varepsilon i2^{-j}, \varepsilon m2^{-l} + \varepsilon i2^{-j}}, e_{p,q}\Big\rangle \Big| \leq \Big( \tfrac{\varepsilon_{j,l}^{(2)}\cdot \varepsilon^\alpha 2^{-\alpha(j+1)}}{c(d)}\Big)^2 \Big] \Bigg) \Bigg\}.
\end{align}
For the terms associated to words of length 1, this probability is computed directly and bounded below using Assumption \ref{assumption:GaussianRegularity}:
\begin{align}
\bP\Big[ |W_{s, t}| \leq \varepsilon\Big] =
\erf\Big( \tfrac{\varepsilon}{\sqrt{2} \bE[ |W_{s, t}|^2]^{1/2}} \Big)
\label{eq:ThmSmallBallProbab4.1}
\geq& \erf\Big( \tfrac{\varepsilon}{\sqrt{2} M |t-s|^{1/2\varrho}} \Big).
\end{align}
For words of length 2, we obtain only a lower bound.
\begin{align}
\nonumber
\bP\bigg[ \Big| \Big\langle \int_s^t W_{s, r}\otimes dW_r, e_{p, q}\Big\rangle \Big| < \varepsilon \bigg] =& \bE \Bigg[ \bP\Bigg[ \Big| \Big\langle \int_s^t W_{s, r}\otimes dW_r, e_{p, q}\Big\rangle \Big| < \varepsilon\Bigg| \sigma\Big(\langle W, e_p\rangle \Big) \Bigg] \Bigg]
\\
\nonumber
=& \bE\Big[ \erf\Big( \tfrac{\varepsilon}{\sqrt{2} \| W_{s, \cdot} \1_{(s, t)} \|_\cH} \Big) \Big]
\\
\geq& \erf\Big( \tfrac{\varepsilon}{\sqrt{2} \bE[ \| W_{s, \cdot} \1_{(s, t)} \|_\cH^2]^{1/2} } \Big)
\label{eq:ThmSmallBallProbab4.2}
\geq \erf\Big( \tfrac{\varepsilon}{\sqrt{2} M|t-s|^{2/(2\varrho)} } \Big).
\end{align}
We also use the lower bounds
\begin{align}
\label{eq:ThmSmallBallProbab2.1}
\erf\Big( \tfrac{t}{\sqrt{2}} \Big) \geq& \tfrac{t}{2} & \mbox{ for } t\in[0,1],
\\
\label{eq:ThmSmallBallProbab2.2}
\erf\Big( \tfrac{st}{\sqrt{2}} \Big) \geq& \exp\Bigg( \frac{-\exp\Big( \tfrac{-(st)^2}{2}\Big)}{1-\exp\Big( \tfrac{-s^2}{2} \Big)} \Bigg) & \mbox{ for } s>0, t\in [1, \infty).
\end{align}
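The bound \eqref{eq:ThmSmallBallProbab2.1} follows, for example, from the concavity of $\erf$ on $[0,\infty)$: for $t\in[0,1]$,
$$
\erf\Big( \tfrac{t}{\sqrt{2}} \Big) = \erf\Big( (1-t)\cdot 0 + t\cdot \tfrac{1}{\sqrt{2}} \Big) \geq t\cdot \erf\Big( \tfrac{1}{\sqrt{2}} \Big) \geq \tfrac{t}{2},
$$
since $\erf\big( \tfrac{1}{\sqrt{2}}\big) \approx 0.68 > \tfrac{1}{2}$.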
We now consider the terms from Equation \eqref{eq:ThmSmallBallProbab1.1} with the product over $(j, l, i, m)$. By Assumption \ref{assumption:GaussianRegularity}, the expression \eqref{eq:ThmSmallBallProbab3.2} and Equation \eqref{eq:ThmSmallBallProbab4.1}, we have
\begin{align*}
\bP\Big[ \Big| \Big\langle \rw_{\varepsilon (m-1)2^{-l} + \varepsilon i2^{-j}, \varepsilon m2^{-l} + \varepsilon i2^{-j}}, e_{p}\Big\rangle \Big| \leq \tfrac{\varepsilon_{j,l}^{(2)}\cdot \varepsilon^\alpha 2^{-\alpha(j+1)}}{c(d)} \Big]
\geq \erf\Bigg( \tfrac{(1-2^{-\beta/2}) }{3Mc(d)} \cdot 2^{\beta(l+j)/2} \Bigg).
\end{align*}
By similarly applying Equation \eqref{eq:ThmSmallBallProbab4.2},
\begin{align*}
\bP\Big[& \Big| \Big\langle \rw_{\varepsilon (m-1)2^{-l} + \varepsilon i2^{-j}, \varepsilon m2^{-l} + \varepsilon i2^{-j}}, e_{p,q}\Big\rangle \Big| \leq \Big( \tfrac{\varepsilon_{j,l}^{(2)}\cdot \varepsilon^\alpha 2^{-\alpha(j+1)}}{c(d)}\Big)^2 \Big]
\geq \erf\Bigg( \Big(\tfrac{(1-2^{-\beta/2}) }{3Mc(d)}\Big)^2 \cdot 2^{2\beta(l+j)/2} \Bigg).
\end{align*}
Next, we denote $s = \frac{1-2^{-\beta/2} }{3Mc(d)}$, apply the lower bound \eqref{eq:ThmSmallBallProbab2.2} and multiply all the terms together to obtain
\begin{align}
\nonumber
\prod_{j=0}^\infty \prod_{l=j+1}^\infty& \exp\Bigg( \frac{-T2^l}{\varepsilon} \cdot \Bigg[ \frac{d}{1-e^{-s^2/2}} \exp\Big( \tfrac{-s^2}{2} 2^{\beta(l+j)} \Big) + \frac{d(d-1)}{2(1-e^{-s^4/2})} \exp\Big( \tfrac{-s^4}{2} 2^{2\beta(l+j)} \Big) \Bigg] \Bigg)
\\
\label{eq:ThmSmallBallProbab5.1}
&\geq \exp\Bigg( -\frac{c_1(d, T, M, \beta)}{\varepsilon} \Bigg).
\end{align}
Secondly, we consider the terms from Equation \eqref{eq:ThmSmallBallProbab1.1} with the product over $(l,i)$ and restrict ourselves to the case where $l>n_0$. By applying the definitions of $n_0$ and $\varepsilon_l^{(1)}$ and using Assumption \ref{assumption:GaussianRegularity},
\begin{align*}
\bP\Big[& \Big| \Big\langle \rw_{(i-1)2^{-l}, i2^{-l}}, e_{p}\Big\rangle \Big| \leq \tfrac{\varepsilon_l^{(1)}}{c(d)} \Big]
\geq \erf\Bigg( \Big(\tfrac{4}{3}\Big)^{\tfrac{l-n_0}{2\varrho}} \cdot \tfrac{(1-2^{-\beta/2})}{4Mc(d) \sqrt{2}} \Bigg).
\end{align*}
Similarly, by using Equation \eqref{eq:ThmSmallBallProbab4.2},
\begin{align*}
\bP\Big[ \Big| \Big\langle \rw_{(i-1)2^{-l}, i2^{-l} }, e_{p,q}\Big\rangle \Big| \leq \Big(\tfrac{\varepsilon_l^{(1)}}{c(d)}\Big)^2 \Big]
\geq \erf\Bigg( \Big(\tfrac{4}{3}\Big)^{\tfrac{2(l-n_0)}{2\varrho}} \cdot \tfrac{1}{\sqrt{2}} \cdot \Big(\tfrac{(1-2^{-\beta/2})}{4Mc(d)}\Big)^2 \Bigg).
\end{align*}
Now applying Equation \eqref{eq:ThmSmallBallProbab2.2} and multiplying all the terms together gives
\begin{align}
\nonumber
\prod_{l=n_0+1}^\infty \exp\Bigg(& -2^l\Bigg[ \frac{d}{1-e^{-s^2/2}} \exp\Big( -\tfrac{s^2}{2}\Big(\tfrac{4}{3}\Big)^{\tfrac{l-n_0}{2\varrho}}\Big)
+ \frac{d(d-1)}{2(1-e^{-s^4/2})} \exp\Big(-\tfrac{s^4}{2} \Big( \tfrac{4}{3}\Big)^{\tfrac{2(l-n_0)}{2\varrho}} \Big) \Bigg] \Bigg)
\\
\label{eq:ThmSmallBallProbab5.2}
\geq& \exp\Big( -2^{n_0} c_2(d, T, M, \beta)\Big) \geq \exp\Big( -\tfrac{2c_2(d, T, M, \beta)}{\varepsilon} \Big)
\end{align}
where $s=\Big(\tfrac{(1-2^{-\beta/2})}{4Mc(d)}\Big)$.
Finally, we consider the remaining terms from Equation \eqref{eq:ThmSmallBallProbab1.1} with the product over $(l,i)$, namely those with $l=0, ..., n_0$. Using the definition of $\varepsilon_l^{(1)}$ and Assumption \ref{assumption:GaussianRegularity},
\begin{align*}
\bP\Big[& \Big| \Big\langle \rw_{(i-1)2^{-l}, i2^{-l}}, e_{p}\Big\rangle \Big| \leq \tfrac{\varepsilon_l^{(1)}}{c(d)} \Big]
\geq
\erf\Bigg( \Big( \tfrac{1}{3}\Big)^{\tfrac{n_0-l}{2\varrho}} \cdot \tfrac{1}{\sqrt{2}} \cdot\tfrac{1-2^{-\beta/2}}{4Mc(d)} \Bigg).
\end{align*}
Similarly, by using Equation \eqref{eq:ThmSmallBallProbab4.2},
\begin{align*}
\bP\Big[ \Big| \Big\langle \rw_{(i-1)2^{-l}, i2^{-l} }, e_{p,q}\Big\rangle \Big| \leq \Big(\tfrac{\varepsilon_l^{(1)}}{c(d)}\Big)^2 \Big]
\geq
\erf\Bigg( \Big( \tfrac{1}{3}\Big)^{\tfrac{2(n_0-l)}{2\varrho}} \cdot \tfrac{1}{\sqrt{2}} \cdot \Big( \tfrac{1-2^{-\beta/2}}{4Mc(d)}\Big)^2 \Bigg).
\end{align*}
For these terms, we use the lower bound \eqref{eq:ThmSmallBallProbab2.1} and multiply all the terms together to get
\begin{align}
\nonumber
\prod_{l=0}^{n_0}& \Bigg( \Bigg[ \tfrac{1}{2} \cdot \tfrac{(1-2^{-\beta/2})}{4Mc(d)} \cdot \Big( \tfrac{1}{3}\Big)^{\tfrac{(n_0-l)}{2\varrho}} \Bigg]^{dT2^l} \cdot
\Bigg[ \tfrac{1}{2} \cdot \Big(\tfrac{(1-2^{-\beta/2})}{4Mc(d)}\Big)^2 \cdot \Big( \tfrac{1}{3}\Big)^{\tfrac{2(n_0-l)}{2\varrho}} \Bigg]^{\tfrac{d(d-1)T2^l}{2}} \Bigg)
\\
\nonumber
\geq& \exp\Bigg( -2^{n_0} \sum_{l=1}^{n_0} \Bigg[ dT\bigg( 2^{-(n_0-l)} \log\Big( 2\cdot \Big(\tfrac{4Mc(d)}{1-2^{-\beta/2}}\Big)\Big) + (n_0-l)\log\Big( 3^{\tfrac{1}{2\varrho}} \Big) \bigg)
\\
\nonumber
&\qquad +\tfrac{d(d-1)}{2} \bigg( 2^{-(n_0-l)} \log\Big( 2\cdot \Big(\tfrac{4Mc(d)}{1-2^{-\beta/2}}\Big)^2 \Big) + (n_0-l)\log\Big( 3^{\tfrac{2}{2\varrho}} \Big) \bigg) \Bigg] \Bigg)
\\
\label{eq:ThmSmallBallProbab5.3}
&\geq \exp\Big( -2^{n_0} c_3(d, T, M, \beta) \Big) \geq \exp\Big( -\tfrac{2c_3(d, T, M, \beta)}{\varepsilon} \Big).
\end{align}
Combining Equations \eqref{eq:ThmSmallBallProbab5.1}, \eqref{eq:ThmSmallBallProbab5.2} and \eqref{eq:ThmSmallBallProbab5.3} gives that
\begin{align*}
\eqref{eq:ThmSmallBallProbab1.1} \geq \exp\Big( -\tfrac{(c_1+c_2+c_3)}{\varepsilon} \Big)
\quad \Rightarrow\quad
-\log\Big( \bP\Big[ \| \rw\|_\alpha \leq \varepsilon^\beta\Big] \Big) \lesssim \varepsilon^{-1} .
\end{align*}
Finally, substituting $\delta = \varepsilon^\beta$, so that $\varepsilon^{-1} = \delta^{-1/\beta}$, yields Equation \eqref{eq:ThmSmallBallProbab}.
\end{proof}
\subsection{Proof of Theorem \ref{Thm:SmallBallProbab-Both}}
This first result is a canonical adaptation of the proof found in \cite{stolz1996some}*{Theorem 1.4} with the differentiability requirements weakened. We emphasise that this proof is not original and is included only for completeness.
\begin{proof}[Proof of Proposition \ref{pro:SmallBallProbab-Lower}]
By the Ciesielski isomorphism (see \cite{herrmann2013stochastic}), we have that there exists a constant $\tilde{C}>0$ such that
$$
\sup_{s, t\in[0,T]} \frac{|W_{s, t}|}{|t-s|^{\alpha}} \geq \tilde{C} \sup_{p\in \bN_0} \sup_{m=1, ..., 2^{p}} 2^{p(\alpha-1/2)} |W_{(p,m)}|
$$
where
$$
W_{(p,m)} = 2^{p/2} \Big( W_{\tfrac{m-1}{T\cdot 2^{p}}, \tfrac{2m-1}{T\cdot 2^{p+1}}} - W_{\tfrac{2m-1}{T\cdot 2^{p+1}}, \tfrac{m}{T\cdot 2^{p}}} \Big).
$$
Then for $q\in \bN_0$ such that $\tfrac{h}{2} \leq \tfrac{1}{T\cdot 2^{q}}<h$ and $p>q$,
\begin{align*}
\| W \|_{\alpha}>& \tilde{C} \cdot \sup_{m=1, ..., 2^p} 2^{p(\alpha-1/2)} |W_{(p,m)}|
\\
>&\tilde{C} \cdot 2^{p(\alpha-1/2)} \cdot \sup_{n=0, ..., 2^{q}-1} \frac{1}{2^{p-q}} \sum_{m=1}^{2^{p-q}} \big| W_{(p,n\cdot 2^{p-q} + m)} \big|.
\end{align*}
Thus for some choice of $p>q$ and $n=0, ..., 2^{q}-1$,
$$
\bP\Big[ \| W \|_\alpha < \varepsilon \Big] \leq \bP\bigg[ \frac{1}{2^{p-q}} \cdot \sum_{m=1}^{2^{p-q}} \big| W_{(p,n\cdot 2^{p-q} +m)} \big| < \frac{\varepsilon \cdot 2^{p(1/2 - \alpha)}}{\tilde{C}} \bigg].
$$
From Equation \eqref{eq:Thm:SmallBallProbab-Lower-2} and Equation \eqref{eq:Thm:SmallBallProbab-Lower-1},
\begin{align*}
\bE\Big[ \big| W_{(p, m)} \big|^2 \Big] =& 2^{p} \bigg( 4 \cdot \sigma^2\Big( \tfrac{1}{2^{p+1}} \Big) - \sigma^2\Big( \tfrac{1}{2^p} \Big) \bigg)
\\
\geq& 2^p \cdot \frac{4-C_2}{4} \sigma^2\Big( \tfrac{1}{2^p} \Big) \geq 2^{p(1 - 1/\varrho)} \cdot \frac{C_1(4-C_2)}{4}.
\end{align*}
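The first identity above is a direct polarisation computation: writing $W_{(p, m)} = 2^{p/2}(A - B)$ with $A$, $B$ the two adjacent increments of common length $\tau$ (so that $A+B$ is the increment of length $2\tau$),
\begin{align*}
2\, \bE\big[ A B \big] =& \bE\big[ (A+B)^2 \big] - \bE\big[ A^2\big] - \bE\big[ B^2 \big] = \sigma^2(2\tau) - 2\sigma^2(\tau),
\\
\bE\big[ (A - B)^2 \big] =& 2\sigma^2(\tau) - 2\,\bE\big[ AB \big] = 4\sigma^2(\tau) - \sigma^2(2\tau).
\end{align*}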
Renormalising the wavelets gives
\begin{align*}
&\bP\bigg[ \frac{1}{2^{p-q}} \cdot \sum_{m=1}^{2^{p-q}} \big| W_{(p,n\cdot 2^{p-q} +m)} \big| < \frac{\varepsilon \cdot 2^{p(1/2 - \alpha)}}{\tilde{C}} \bigg]
\\
&\leq
\bP\Bigg[ \frac{1}{2^{p-q}} \cdot \sum_{m=1}^{2^{p-q}} \frac{\big| W_{(p,n\cdot 2^{p-q} +m)} \big|}{\bE\Big[ \big| W_{(p,n\cdot 2^{p-q} +m)} \big|^2 \Big]^{1/2}} < \frac{\varepsilon}{\tilde{C}}\cdot \sqrt{\tfrac{4}{C_1(4-C_2)}} \cdot 2^{p(\tfrac{1}{2\varrho} - \alpha)} \Bigg].
\end{align*}
Now
\begin{align*}
\bE\Big[ W_{(p, m_1)} \cdot W_{(p, m_2)} \Big] =& \frac{-2^p}{2}\bigg( \sigma^2\Big( \tfrac{|m_1 - m_2 - 1|}{2^p} \Big) - 4 \sigma^2\Big( \tfrac{|m_1 - m_2 - 1/2|}{2^p} \Big) + 6\sigma^2\Big( \tfrac{|m_1 - m_2|}{2^p} \Big)
\\
& - 4\sigma^2\Big( \tfrac{|m_1 - m_2 + 1/2|}{2^p} \Big) + \sigma^2\Big( \tfrac{|m_1 - m_2 + 1|}{2^p} \Big) \bigg)
\end{align*}
By considering a Taylor expansion of the function
$$
f(x) = \sigma^2\Big( \tfrac{|n - x|}{2^p} \Big) - 4 \sigma^2\Big( \tfrac{|n - x/2|}{2^p} \Big) + 6\sigma^2\Big( \tfrac{|n|}{2^p} \Big) - 4\sigma^2\Big( \tfrac{|n + x/2|}{2^p} \Big) + \sigma^2\Big( \tfrac{|n + x|}{2^p} \Big)
$$
and using that $f(0) = f'(0) = f''(0) = 0$, we get that $\exists \xi\in[0,1]$ such that
\begin{align*}
&f(1)
\\
&= \tfrac{1}{12\cdot (2^p)^3} \Bigg( \bigg( \nabla^3\Big[ \sigma^2\Big]\Big( \tfrac{|n-\xi/2|}{2^p} \Big) - \nabla^3\Big[ \sigma^2\Big]\Big( \tfrac{|n+\xi/2|}{2^p} \Big) \bigg) - 2\bigg( \nabla^3\Big[ \sigma^2\Big]\Big( \tfrac{|n-\xi|}{2^p} \Big) - \nabla^3\Big[ \sigma^2\Big]\Big( \tfrac{|n+\xi|}{2^p} \Big) \bigg) \Bigg).
\end{align*}
Applying this representation with Equation \eqref{eq:Thm:SmallBallProbab-Lower-3} gives that
\begin{equation}
\label{eq:pro:SmallBallProbab-Lower:proof2}
\frac{\bE\Big[ W_{(p,m_1)} \cdot W_{(p, m_2)} \Big] }{\bE\Big[ \big| W_{(p,m_1)} \big|^2 \Big]^{1/2}\cdot \bE\Big[ \big| W_{(p,m_2)} \big|^2 \Big]^{1/2}} \leq \tfrac{C_3}{C_1(4-C_2)} \Big( |m_1 - m_2| - 1 \Big)^{\tfrac{1}{\varrho} - 3}.
\end{equation}
Take $\varepsilon$ small enough that there exists $p>q$ satisfying
\begin{equation}
\label{eq:pro:SmallBallProbab-Lower:proof1}
\tfrac{\tilde{C}}{M_1} \cdot \sqrt{\tfrac{C_1(4-C_2)}{4}} \cdot 2^{(p+1)(\alpha - \tfrac{1}{2\varrho})}
\leq
\varepsilon
\leq
\tfrac{\tilde{C}}{M_1} \cdot \sqrt{\tfrac{C_1(4-C_2)}{4}} \cdot 2^{p(\alpha - \tfrac{1}{2\varrho})}
\end{equation}
where $M_1$ is the constant from Lemma \ref{lemma:Stolz_Lem}. This gives us
$$
\bP\Big[ \| W \|_\alpha < \varepsilon \Big]
\leq
\bP\Bigg[ \frac{1}{2^{p-q}} \cdot \sum_{m=1}^{2^{p-q}} \frac{\big| W_{(p,n\cdot 2^{p-q} +m)} \big|}{\bE\Big[ \big| W_{(p,n\cdot 2^{p-q} +m)} \big|^2 \Big]^{1/2}} < \frac{1}{M_1} \Bigg].
$$
Thanks to Equation \eqref{eq:pro:SmallBallProbab-Lower:proof2}, we can apply Lemma \ref{lemma:Stolz_Lem} to get
$$
\bP\Big[ \| W \|_\alpha < \varepsilon \Big]
\leq
\exp\Big( \tfrac{-2^{(p-q)}}{M_2} \Big).
$$
However, by Equation \eqref{eq:pro:SmallBallProbab-Lower:proof1}, we have
$$
2^{p-q} \leq T \cdot h \cdot \Big[ \tfrac{M_1}{\tilde{C}} \cdot \sqrt{\tfrac{4}{C_1(4-C_2)}} \cdot 2^{1/(2\varrho) - \alpha} \Big]^{\tfrac{1}{\alpha - 1/(2\varrho)}} \cdot \varepsilon^{\tfrac{1}{\alpha - \tfrac{1}{2\varrho}}}
$$
so that
$$
\log\Big( \bP\Big[ \| W \|_\alpha < \varepsilon \Big] \Big) \lesssim -\varepsilon^{\tfrac{-1}{\tfrac{1}{2\varrho} - \alpha}}.
$$
\end{proof}
\begin{proof}[Proof of Theorem \ref{Thm:SmallBallProbab-Both}]
Suppose that $\sigma^2$ satisfies Equation \eqref{eq:Thm:SmallBallProbab-Both}. By \cite{frizhairer2014}*{Theorem 10.9}, Assumption \ref{assumption:GaussianRegularity} is satisfied. Hence, Theorem \ref{Thm:SmallBallProbab} implies
$$
\fB(\varepsilon) \lesssim \varepsilon^{\tfrac{-1}{\tfrac{1}{2\varrho} - \alpha}}.
$$
On the other hand, Equation \eqref{eq:Thm:SmallBallProbab-Both} implies Equation \eqref{eq:Thm:SmallBallProbab-Lower-1}. The additional assumption that $\tfrac{c_2 \cdot 2^{1/\varrho}}{c_1}<4$ implies Equation \eqref{eq:Thm:SmallBallProbab-Lower-2}. Under the final assumption of Equation \eqref{eq:Thm:SmallBallProbab-Lower-3}, the assumptions of Proposition \ref{pro:SmallBallProbab-Lower} are satisfied. Finally, using the identity $\| W \|_\alpha \lesssim \| \rw \|_\alpha$ gives
$$
\fB(\varepsilon) \gtrsim \varepsilon^{\tfrac{-1}{\tfrac{1}{2\varrho} - \alpha}}.
$$
\end{proof}
\subsection{Limitations and further progress}
Gaussian rough paths have been successfully studied when the regularity of the path is $\alpha \in (\tfrac{1}{4}, \tfrac{1}{3}]$. However, we do not address this regime in this paper.
Those familiar with the Philip Hall Lie basis will realise that we would additionally need to account for the SBPs of terms of the form
$$
\int_s^t \int_s^r W_{s,q} \otimes dV_q \otimes dU_r , \quad \int_s^t \int_s^r W_{s,q} \otimes dW_q \otimes dV_r.
$$
where $W$, $V$ and $U$ are independent, identically distributed Gaussian processes. The first term can be addressed with another application of Proposition \ref{pro:Sidak}.
Let $(E_1, \cH_1, \cL_1)$, $(E_2, \cH_2, \cL_2)$ and $(E_3, \cH_3, \cL_3)$ be abstract Wiener spaces. The authors were able to demonstrate that, when $\varepsilon$ is chosen to be small, for any sequence $h_i \in \cH_1 \otimes \cH_2 \otimes \cH_3$, any choice of Gaussian measure $\cL$ over $E_1 \oplus E_2 \oplus E_3$ with marginals $\cL_1$, $\cL_2$ and $\cL_3$ satisfies
$$
\cL\Big[ \bigcap_{i} \big\{ | h_i(\hat{\otimes})|<\varepsilon \big\} \Big] \geq \Big( \cL_1 \times \cL_2 \times \cL_3\Big)\Big[ \bigcap_i \big\{ | h_i(\hat{\otimes})|<\varepsilon \big\} \Big]
$$
At face value, this would suggest the SBPs of terms of the form $\int_s^t \int_s^r W_{s,q} \otimes dW_q \otimes dV_r$ should be lower bounded by SBPs of terms of the form $\int_s^t \int_s^r W_{s,q} \otimes dV_q \otimes dU_r $. However, a key requirement is that $\varepsilon$ is chosen smaller than the variance of the functionals $h_i(\hat{\otimes})$, and for $\varepsilon$ large enough the inequality flips.
This is naturally justified by the observation that the intersection of a ball with a hyperbola (both with common centre) is convex when the radius of the ball is small, but for large radius the set is not convex (so that one cannot apply Equation \eqref{eq:GaussCorrConj}).
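A minimal illustration: in $\bR^2$, take $h(x \otimes y) = xy$, so that $\{ |h(\hat{\otimes})|<\varepsilon\}$ is the region enclosed between the branches of a hyperbola. When $r \leq \sqrt{2\varepsilon}$, the ball $\bB(0, r)$ lies entirely inside this region, since
$$
|xy| \leq \frac{x^2 + y^2}{2} \leq \frac{r^2}{2} \leq \varepsilon,
$$
and the intersection is the (convex) ball itself; for large $r$ the intersection is a cross-shaped, non-convex set.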
\newpage
\section{Metric entropy of Cameron-Martin balls}
\label{section:MetricEntropy}
This problem was first studied in \cite{Kuelbs1993} for Gaussian measures. While the law of a Gaussian rough path has many of the properties that Gaussian measures are known for, it is not itself Gaussian, so this result is not immediate.
\begin{definition}
Let $(E, d)$ be a metric space and let $K$ be a compact subset of $E$. We define the $d$-metric entropy of $K$ to be $\fH(\varepsilon, K):=\log( \fN(\varepsilon, K)) $ where
$$
\fN(\varepsilon, K):=\min\left\{ n\geq 1: \exists e_1, ..., e_n\in E, \bigcup_{j=1}^n \bB(e_j, \varepsilon) \supseteq K\right\}
$$
and $\bB(e_i, \varepsilon):=\{ e\in E: d(e, e_i)<\varepsilon\}$.
\end{definition}
Given a Gaussian measure $\cL^W$ with RKHS $\cH$ and unit ball $\cK$, let us consider the set of rough paths
\begin{equation}
\label{eq:RoughRKHSBall}
\rk:=\Big\{ \rh=S_2[h]: h\in \cK \Big\} \subset G\Omega_\alpha(\bR^d).
\end{equation}
We can easily show that this set is \emph{equicontinuous} as a family of paths on $G^2(\bR^d)$, so by the Arzel\`a–Ascoli theorem, see for example \cite{friz2010multidimensional}*{Theorem 1.4}, it must be compact in the metric space $WG\Omega_\alpha(\bR^d)$. Hence $\fN_{d_\alpha} (\varepsilon, \rk)$ is finite.
\begin{theorem}
\label{thm:SmallBallMetricEnt1}
Let $\tfrac{1}{3} <\alpha<\tfrac{1}{2\varrho}$. Let $\cL^W$ be a Gaussian measure and $\forall s, t\in [0,T]$, let $\bE\Big[ \big| W_{s, t} \big|^2 \Big] = \sigma^2\Big( \big| t-s \big| \Big)$.
Suppose that
\begin{enumerate}
\item $\exists h>0$ and $c_1, c_2>0$ such that $\frac{c_2 \cdot 2^{\tfrac{1}{\varrho}}}{c_1}<4$ and $\forall \tau \in [0,h)$, $\sigma^2(\tau)$ is convex and
$$
c_1 \cdot |\tau|^{\tfrac{1}{\varrho}} \leq \sigma^2 \Big( \tau \Big) \leq c_2 \cdot | \tau |^{\tfrac{1}{\varrho}}.
$$
\item $\exists c_3>0$ such that $\forall \tau \in [0,h)$,
$$
\Big| \nabla^3 \big[ \sigma^2 \big] \Big( \tau \Big) \Big| \leq c_3 \cdot \tau^{\tfrac{1}{\varrho}-3}.
$$
\end{enumerate}
Then the metric entropy of the set $\rk$ with respect to the H\"older metric satisfies
$$
\fH_{d_\alpha}( \varepsilon, \rk) \approx \varepsilon^{\tfrac{-1}{\tfrac{1}{2}+\tfrac{1}{2\varrho} - \alpha}}.
$$
\end{theorem}
\begin{remark}
The maps $S_2: C^{1-var}([0,T]; \bR^d) \to G\Omega_\alpha(\bR^d)$ and $W \mapsto \rw$ are known to be measurable but not continuous. It is therefore rather remarkable that this mapping takes a compact set to a compact set and that the two sets have the same metric entropy.
\end{remark}
\subsection{Proof of Theorem \ref{thm:SmallBallMetricEnt1}}
In order to prove this, we first prove the following auxiliary result.
\begin{proposition}
\label{pro:MetricEnt1}
Let $\cL^W$ be a Gaussian measure with RKHS $\cH$ satisfying Assumption \ref{assumption:GaussianRegularity}. Then for any $\eta,\varepsilon>0$,
\begin{equation}
\label{eq:MetricEnt1.1}
\fH_{d_\alpha} \Big( 2\varepsilon, \delta_\eta (\rk) \Big) \leq \tfrac{\eta^2}{2} - \log\Big( \cL^\rw\Big[ \bB_\alpha(\rId, \varepsilon)\Big]\Big),
\end{equation}
and
\begin{equation}
\label{eq:MetricEnt1.2}
\fH_{d_\alpha} \Big(\varepsilon, \delta_\eta (\rk) \Big) \geq \log\Bigg( \Phi\bigg( \eta + \Phi^{-1}\Big( \cL^\rw\Big[ \bB_\alpha(\rId, \varepsilon)\Big]\Big)\bigg)\Bigg) - \log\Big( \cL^\rw\Big[\bB_\alpha (\rId, 2\varepsilon)\Big]\Big).
\end{equation}
\end{proposition}
\begin{proof}
Firstly, for some $\varepsilon>0$ consider the quantity
$$
\fM_{d_\alpha} (\varepsilon, \delta_\eta (\rk) ) = \max \Big\{ n\geq 1: \exists \rh_1, ..., \rh_n\in \delta_\eta(\rk) , d_\alpha( \rh_i, \rh_j)\geq 2\varepsilon \quad \forall i\neq j\Big\},
$$
and a set $\fF$ such that $| \fF|= \fM_{d_\alpha}(\varepsilon, \delta_\eta (\rk))$ and for any two distinct $\rh_1, \rh_2\in \fF$ we have $d_\alpha(\rh_1, \rh_2)\geq 2\varepsilon$. Similarly, there must exist a set $\fG$ such that $| \fG| = \fN_{d_\alpha}(\varepsilon, \delta_\eta(\rk))$ and
$$
\delta_\eta( \rk) \subseteq \bigcup_{\rh\in \fG} \bB_\alpha( \rh, \varepsilon).
$$
Moreover, since $\fF$ is a maximal set, we also have
$$
\delta_\eta( \rk) \subseteq \bigcup_{\rh\in \fF} \bB_\alpha( \rh, 2\varepsilon).
$$
It follows that
$$
\fH_{d_\alpha}\Big(2\varepsilon, \delta_\eta(\rk)\Big) \leq \log\Big( \fM_{d_\alpha}( \varepsilon, \delta_{\eta}(\rk))\Big)
\quad \mbox{ and } \quad
\fM_{d_\alpha} (\varepsilon, \delta_\eta (\rk) ) \min_{\rh \in \fF} \cL^\rw \Big[ \bB_\alpha ( \rh, \varepsilon) \Big] \leq 1.
$$
By taking logarithms and applying Lemma \ref{lem:CamMartinFormRP} we get
$$
\log\Big( \fM_{d_\alpha} ( \varepsilon, \delta_\eta(\rk) ) \Big) - \tfrac{\eta^2}{2} + \log\Big( \cL^\rw\Big[ \bB_\alpha(\rId, \varepsilon)\Big]\Big) \leq 0,
$$
which implies \eqref{eq:MetricEnt1.1}.
Secondly, from the definition of $\fG$ we get that
$$
T \Big( \bB_\alpha(\rId, \varepsilon), \delta_\eta(\rk)\Big) \subseteq \bigcup_{\rh\in \fG} \bB_\alpha(\rh, 2\varepsilon),
$$
where
\begin{equation}
\label{eq:Ball+shift}
T \Big( \bB_\alpha(\rId, \varepsilon), \delta_\eta(\rk)\Big) := \Big\{ T^{\eta h}(\rx): \rx \in \bB_\alpha(\rId, \varepsilon), h\in \cK\Big\}.
\end{equation}
Hence applying Lemma \ref{lem:AndersonInequalityRP} and Lemma \ref{lem:BorellInequalRP} gives
\begin{align*}
\fN_{d_\alpha} (\varepsilon, \delta_\eta(\rk)) \cdot \cL^\rw\Big[ \bB_\alpha(\rId, 2\varepsilon)\Big]
\geq& \fN_{d_\alpha} (\varepsilon, \delta_\eta(\rk)) \cdot \max_{\rh\in \fG} \cL^\rw\Big[ \bB_\alpha( \rh, 2\varepsilon)\Big],
\\
\geq& \cL^\rw \Big[ T \Big( \bB_\alpha(\rId, \varepsilon), \delta_\eta(\rk)\Big) \Big] \geq \Phi\bigg( \eta + \Phi^{-1}\bigg( \cL^\rw\Big[ \bB_\alpha(\rId, \varepsilon)\Big]\bigg) \bigg),
\end{align*}
and taking logarithms yields \eqref{eq:MetricEnt1.2}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:SmallBallMetricEnt1}]
Using Equation \eqref{eq:MetricEnt1.1} for $\eta,\varepsilon>0$ we have
$$
\fH_{d_\alpha}( 2\varepsilon, \delta_\eta( \rk) ) \leq \tfrac{\eta^2}{2} + \fB(\varepsilon).
$$
By the properties of the dilation operator it follows that $\fH_{d_\alpha}(\varepsilon, \delta_\eta(\rk)) = \fH_{d_\alpha}(\varepsilon/\eta, \rk)$. Making the substitution $\eta = \sqrt{ 2\fB(\varepsilon)}$ and using that $\fB$ is regularly varying at infinity leads to
$$
\fH_{d_\alpha} \Big( \tfrac{2\varepsilon}{\sqrt{2 \fB(\varepsilon)}}, \rk \Big) \leq 2 \fB(\varepsilon).
$$
Finally, relabeling $\varepsilon' = \tfrac{2 \varepsilon}{\sqrt{2\fB(\varepsilon)}}$ which means $\varepsilon' \approx \sqrt{2} \varepsilon^{\frac{\beta+1/2}{\beta} }$ and $\beta =\tfrac{1}{2\varrho}-\alpha$, we apply Theorem \ref{Thm:SmallBallProbab} to obtain
$$
\fH_{d_\alpha} \Big( \varepsilon', \rk \Big) \leq \Big(\tfrac{2\varepsilon}{\varepsilon'}\Big)^2 \lesssim (\varepsilon')^{\frac{-1}{\tfrac{1}{2} +\beta} }.
$$
For the second inequality, for $\eta,\varepsilon>0$, we use Equation \eqref{eq:MetricEnt1.2} with the substitution
$$
-\eta = \Phi^{-1}\big( \cL^\rw[ \bB_\alpha(\rId, \varepsilon)]\big).
$$
This yields
$$
\fH_{d_\alpha}\Big( \tfrac{-\varepsilon}{\Phi^{-1}\big( \cL^\rw[ \bB_\alpha(\rId, \varepsilon)]\big)}, \rk\Big) \geq \fB(2\varepsilon) + \log(1/2).
$$
Next, using the known limit
$$
\lim_{x\to +\infty} \frac{-\Phi^{-1}(\exp(-x^2/2))}{x} = 1,
$$
we equivalently have that
$$
\frac{\Phi^{-1}\Big(\exp\big(-\fB(\varepsilon)\big)\Big)^2 }{2} \sim \fB(\varepsilon),
$$
as $\varepsilon\to0$, since $\fB(\varepsilon)\to \infty$. From here we conclude that as $\varepsilon\searrow 0$ we have
$$
\frac{-\varepsilon}{\Phi^{-1}\big( \cL^\rw[ \bB_\alpha(\rId, \varepsilon)]\big)} \sim \frac{\varepsilon}{\sqrt{2\fB(\varepsilon)} }.
$$
Therefore, for $\varepsilon$ small enough and using that $\fB$ varies regularly, we obtain
$$
\fH_{d_\alpha}\Big( \tfrac{\varepsilon}{\sqrt{2 \fB(\varepsilon)}}, \rk \Big) \gtrsim \fB(2\varepsilon) + \log(1/2) \gtrsim \frac{\fB( \varepsilon)}{2} .
$$
We conclude by making the substitution $\varepsilon' = \tfrac{\varepsilon}{\sqrt{2\fB(\varepsilon)}}$ and applying Theorem \ref{Thm:SmallBallProbab-Both}.
\end{proof}
\newpage
\section{Optimal quantization and empirical distributions}
\label{section:OptimalQuant}
In this section, we prove the link between metric entropy and optimal quantization and derive the asymptotic rate of convergence for the quantization problem of a Gaussian rough path.
\subsection{Introduction to finite support measures}
For a neat introduction to quantization, see \cite{graf2007foundations}.
\begin{definition}
Let $(E, d)$ be a separable metric space endowed with the Borel $\sigma$-algebra $\cB$. For $r\geq1$, we denote by $\cP_r(E)$ the space of probability measures on $(E, \cB)$ with finite $r^{th}$ moments. For $\mu, \nu\in \cP_r(E)$, we denote by $\bW_{d}^{(r)}: \cP_r(E) \times \cP_r(E) \to \bR^+$ the Wasserstein distance
$$
\bW^{(r)}_{d}(\mu, \nu) = \inf_{\gamma \in \cP(E \times E)} \bigg( \int_{E \times E} d(x, y)^r \gamma(dx, dy) \bigg)^{\tfrac{1}{r}}
$$
where $\gamma$ is a joint distribution over $E\times E$ which has marginals $\mu$ and $\nu$.
\end{definition}
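As an elementary sanity check (not needed in the sequel), the coupling is forced for Dirac masses, so the infimum can be computed exactly:

```latex
% The only coupling of \delta_x and \delta_y is \gamma = \delta_{(x, y)}, hence
\bW^{(r)}_{d}\big( \delta_x, \delta_y \big)
= \bigg( \int_{E \times E} d(u, v)^r \, \delta_{(x, y)}(du, dv) \bigg)^{\tfrac{1}{r}}
= d(x, y).
```

In particular, $x \mapsto \delta_x$ embeds $(E, d)$ isometrically into $\big(\cP_r(E), \bW^{(r)}_{d}\big)$.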
The Wasserstein distance $\bW^{(r)}_{d}$ induces the topology of weak convergence of measures together with convergence of moments of order up to and including $r$.
\begin{definition}
Let $I$ be a countable index set, let $\fS:=\{ \fs_i, i\in I\}$ be a partition of $E$ and let $\fC:=\{\fc_i\in E, i\in I\}$ be a codebook.
Define $\fQ$ to be the set of all quantizations $q:E \to E$ such that
\begin{align*}
q(x) = \fc_i &\quad \mbox{for $x\in \fs_i$}, \qquad q(E) = \fC
\end{align*}
for any possible $\fS$ and $\fC$. Then, for $\cL\in \cP_2(E)$, the pushforward measure under the function $q$ is
$$
\cL \circ q^{-1}(\cdot) = \sum_{i\in I} \cL(\fs_i) \delta_{\fc_i} (\cdot) \in \cP_2(E).
$$
\end{definition}
\begin{definition}[Optimal quantizers]
\label{dfn:OptimalQuantizer}
Let $n\in \bN$ and $r\in[1, \infty)$. The minimal $n^{th}$ quantization error of order $r$ of a measure $\cL$ on a separable metric space $E$ is defined to be
$$
\fE_{n, r}(\cL) = \inf\Bigg\{ \Big( \int_E \min_{\fc\in \fC} d( x, \fc)^r d\cL(x) \Big)^{\tfrac{1}{r}}: \fC\subset E, 1\leq |\fC| \leq n\Bigg\}.
$$
A codebook $\fC = \{\fc_i, i\in I\}$ with $1\leq|\fC|\leq n$ is called an $n$-optimal set of centres of $\cL$ (of order $r$) if
$$
\fE_{n, r}(\cL) = \Big( \int_E \min_{i=1, ..., n} d(x, \fc_i)^r d\cL(x) \Big)^{\tfrac{1}{r}}
$$
\end{definition}
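To fix ideas, consider the simplest scalar instance (a standard computation, included only for illustration): for the standard Gaussian on $E = \bR$ with $n = 1$ and $r = 2$, the optimal single centre is the mean and the quantization error is the standard deviation:

```latex
% For \cL = N(0, 1), the map c \mapsto \int_\bR (x - c)^2 d\cL(x) = 1 + c^2
% is minimised at c = 0, so
\fE_{1, 2}\big( N(0, 1) \big)
= \min_{c \in \bR} \bigg( \int_{\bR} (x - c)^2 \, d\cL(x) \bigg)^{\tfrac{1}{2}}
= 1.
```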
\begin{remark}
Suppose that one has found an $n$-optimal set of centres for a measure $\cL$. Then an optimal partition can be obtained for any collection of sets $\fs_i$ such that
$$
\fs_i \subset \{ x \in E: d(x, \fc_i) \leq d(x, \fc_j), j=1, ..., n\},
\quad
\fs_i \cap \fs_j = \emptyset, i\neq j
$$
and $\cup_{i=1}^n \fs_i = E$.
Frequently, this means that the partition is taken to be the interiors of the Voronoi cells centred at the $n$-optimal codebook, with the boundaries between the Voronoi cells assigned arbitrarily.
\end{remark}
\subsection{Optimal Quantization}
This next result follows the ideas of \cite{graf2003functional}, although a similar result proved using a different method can be found in \cite{dereich2003link}.
\begin{theorem}
\label{thm:QuantizationRateCon}
Let $\cL^W$ be a Gaussian measure satisfying Assumption \ref{assumption:GaussianRegularity} and let $\cL^\rw$ be the law of the lift to the Gaussian rough path. Then for any $1\leq r <\infty$
\begin{equation}
\label{eq:QuantizationRateCon}
\fB^{-1}\Big( \log(2n)\Big) \lesssim \fE_{n, r}(\cL^\rw)
\end{equation}
where $\fB$ is the SBP of the measure $\cL^\rw$.
\end{theorem}
In particular, if we additionally have that
$$
\lim_{\varepsilon \to 0} \frac{\cL^\rw[ \bB(\rId, \tfrac{\varepsilon}{2})]}{\cL^\rw[ \bB(\rId, \varepsilon)]} = 0
$$
then
$$
\fB^{-1}\Big( \log(n)\Big) \lesssim \fE_{n, r}(\cL^\rw).
$$
\begin{proof}
Let the set $\fC_{n, r} \subset G\Omega_\alpha(\bR^d)$ be a codebook containing $n$ elements. The function $\min_{\fc \in \fC_{n,r}} d_{\alpha}(\rx, \fc)^r$ is small in the vicinity of $\fC_{n, r}$, so we focus on where it is large. Thus
\begin{align*}
\int& \min_{\fc \in \fC_{n,r}} d_{\alpha}(\rx, \fc)^r d\cL^\rw(\rx)
\geq
\int_{\Big( \bigcup_{\fc\in \fC_{n,r}} \bB_\alpha \Big(\fc, \fB^{-1}(\log(2n))\Big) \Big)^c} \min_{\fc \in \fC_{n,r}} d_{\alpha}(\rx, \fc)^r d\cL^\rw(\rx),
\\
\geq& \fB^{-1}\Big(\log(2n)\Big)^r \left( 1- \cL^\rw\Big[ \bigcup_{\fc\in \fC_{n, r}} \bB_\alpha \Big(\fc, \fB^{-1}(\log(2n)) \Big) \Big] \right),
\\
\geq& \fB^{-1}\Big(\log(2n)\Big)^r \left( 1- n\cL^\rw\Big[ \bB_\alpha \Big(\rId, \fB^{-1}(\log(2n)) \Big) \Big] \right)
\geq \frac{\fB^{-1}\Big(\log(2n)\Big)^r}{2}
\end{align*}
by applying Lemma \ref{lem:AndersonInequalityRP}. Now taking an infimum over all possible choices of codebook, we get
$$
\fE_{n, r}(\cL^\rw)^r \gtrsim \fB^{-1}\Big(\log(2n)\Big)^r.
$$
\end{proof}
\subsection{Convergence of weighted empirical measure}
We now turn our attention to the problem of sampling and the rate of convergence of empirical measures. In practice, the quantization problem is largely theoretical, since obtaining the codebook and partition that attain the minimal quantization error is computationally more expensive than it is beneficial. An empirical distribution removes this challenge at the sacrifice of optimality, and at the cost of the low-probability event that the approximation is far in the Wasserstein distance from the true distribution.
\begin{definition}
\label{dfn:WeightedEmpiricalMeasure}
For an enhanced Gaussian measure $\cL^\rw$, let $(\Omega, \cF, \bP)$ be a probability space containing $n$ independent, identically distributed enhanced Gaussian processes $(\rw^i)_{i=1, ..., n}$. Let $\fs_i$ be a Voronoi partition of $WG\Omega_{\alpha}(\bR^d)$,
$$
\fs_i \subset \Big\{ \rx\in WG\Omega_{\alpha}(\bR^d): d_\alpha( \rx, \rw^i) = \min_{j=1, ..., n} d_\alpha( \rx, \rw^j)\Big\},
\quad
\fs_i \cap \fs_j=\emptyset
$$
and $\bigcup_{i=1}^n \fs_i = WG\Omega_\alpha (\bR^d)$.
Then we define the \emph{weighted empirical measure} to be the random variable $\cM_n: \Omega \to \cP_2\big( WG\Omega_{\alpha}(\bR^d)\big)$,
\begin{equation}
\label{eq:dfn:WeightedEmpiricalMeasure}
\cM_n = \sum_{i=1}^n \cL^\rw(\fs_i) \delta_{\rw^i}.
\end{equation}
\end{definition}
Note that the quantities $\cL^\rw(\fs_i)$ are random and $\sum_{i=1}^n \cL^\rw(\fs_i)=1$. The weights are in general \emph{not} uniform. We think of $\cM_n$ as a (random) approximation of the measure $\cL^\rw$, and in this section we study the random variable $\bW_{d_\alpha}^{(2)} (\cM_n, \cL^\rw)$ and its mean square convergence to $0$ as $n\to \infty$.
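For intuition, in the scalar case the weights can be written in closed form (this example plays no role in the proofs): for $n = 2$ samples $w^1 < w^2$ from a standard Gaussian on $\bR$, the Voronoi cells are the half-lines separated by the midpoint, so

```latex
% Cell of w^1 is (-\infty, m), cell of w^2 is (m, \infty), hence
\cM_2 = \Phi(m) \, \delta_{w^1} + \big( 1 - \Phi(m) \big) \, \delta_{w^2},
\qquad m = \tfrac{w^1 + w^2}{2},
```

and the weights are uniform only on the null event $\{w^1 = -w^2\}$.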
This next theorem is an adaptation of the method found in \cite{dereich2003link}.
\begin{theorem}
\label{thm:EmpiricalRateCon1}
Let $\cL^W$ be a Gaussian measure satisfying Assumption \ref{assumption:GaussianRegularity} and let $\cL^\rw$ be the law of the lift to the Gaussian rough path. Let $(\Omega, \cF, \bP)$ be a probability space containing a sequence of independent, identically distributed Gaussian rough paths with law $\cL^\rw$. Let $\cM_n$ be the empirical measure with samples drawn from the measure $\cL^\rw$. Then for any $1\leq r <\infty$
\begin{equation}
\label{eq:thm:EmpiricalRateCon1}
\bE\left[ \bW_{d_\alpha}^{(r)}\Big(\cL^\rw, \cM_n\Big)^r \right]^{1/r} \lesssim \fB^{-1}\Big( \log(n)\Big),
\end{equation}
where $\fB$ is the SBP of the measure $\cL^\rw$.
\end{theorem}
\begin{proof}
By the definition of the Wasserstein distance, we have
\begin{align*}
\bW_{d_\alpha}^{(r)}\Big( \cL^\rw, \cM_n\Big)^r =& \inf_{\gamma \in \cP\big(WG\Omega_\alpha(\bR^d) \times WG\Omega_\alpha(\bR^d)\big)} \int_{WG\Omega_\alpha(\bR^d) \times WG\Omega_\alpha(\bR^d)} d_\alpha( \rx, \ry)^r \gamma(d\rx, d\ry)
\\
\leq& \int_{WG\Omega_\alpha(\bR^d)} \min_{j=1, ..., n} d_\alpha\Big(\rx, \rw^j\Big)^r d\cL^{\rw}(\rx)
\end{align*}
Thus taking expectations, we have
\begin{align*}
&\bE\left[ \bW_{d_\alpha}^{(r)} \Big( \cL^\rw, \cM_n\Big)^r \right]
\\
&\leq \int_{WG\Omega_{\alpha}(\bR^d)^{\times n}} \left( \int_{WG\Omega_{\alpha}(\bR^d)} \min_{j=1, ..., n} d_\alpha\Big( \rx, \rw^j\Big)^r d\cL^\rw(\rx) \right)d\left( \cL^\rw\right)^{\times n} (\rw^1, ..., \rw^n)
\end{align*}
A change in the order of integration yields
\begin{equation}
\label{eq:EmpiricalRateCon1.1}
\bE\left[ \bW_{d_\alpha}^{(r)} \Big( \cL^\rw, \cM_n\Big)^r \right]
\leq
2^r \int_0^\infty \int_{WG\Omega_{\alpha}(\bR^d)} \left( 1-\cL^\rw\Big[ \bB_\alpha( \rx, 2\varepsilon^{1/r})\Big] \right)^n d\cL^\rw(\rx) d\varepsilon.
\end{equation}
Firstly, choose $n$ large enough so that for any $0<\varepsilon<1$, $\sqrt{ \log(n) } > \Phi^{-1}\Big( \cL^\rw \Big[ \bB_\alpha(\rId, \varepsilon^{1/r})\Big] \Big)$. Secondly, choose $c>0$ such that $\cL^\rw\Big[ \bB_\alpha(\rId, c^{1/r}) \Big] \leq \Phi\Big(\tfrac{-1}{\sqrt{2\pi}} \Big)$.
For $n$ and $\varepsilon$ fixed, we label the set
$$
A_{\varepsilon, n}:= \left\{ \rx\in WG\Omega_{\alpha}(\bR^d): I(\rx, \varepsilon) \leq \frac{\left( \tfrac{\sqrt{\log(n)}}{3} - \Phi^{-1}\Big( \cL^\rw\big[ \bB_\alpha(\rId, \varepsilon^{1/r})\big] \Big)\right)^2}{2} \right\}
$$
where $I(\rx, \varepsilon)$ was introduced in Definition \ref{definition:Freidlin-Wentzell_Function}.
This can equivalently be written as
$$
A_{\varepsilon, n}:= \left\{ T^h[\rx]\in WG\Omega_{\alpha}(\bR^d): h\in \left( \tfrac{\sqrt{\log(n)}}{3} - \Phi^{-1} \Big( \cL^\rw\Big[ \bB_\alpha(\rId, \varepsilon^{1/r}) \Big]\Big)\right)\cK, \quad \rx\in \bB_\alpha(\rId, \varepsilon) \right\}
$$
Then we divide the integral in Equation \eqref{eq:EmpiricalRateCon1.1} into
\begin{align}
\label{eq:EmpiricalRateCon2.1}
\bE\left[ \bW_{d_\alpha}^{(r)} \Big( \cL^\rw, \cM_n\Big)^r \right]
\leq&
2^r \int_0^c \int_{A_{\varepsilon, n}} \left( 1-\cL^\rw\Big[ \bB_\alpha( \rx, 2\varepsilon^{1/r})\Big] \right)^n d\cL^\rw(\rx) d\varepsilon
\\
\label{eq:EmpiricalRateCon2.2}
&+2^r \int_0^c \int_{A_{\varepsilon, n}^c} \left( 1-\cL^\rw\Big[ \bB_\alpha( \rx, 2\varepsilon^{1/r})\Big] \right)^n d\cL^\rw(\rx) d\varepsilon
\\
\label{eq:EmpiricalRateCon2.3}
&+2^r \int_c^\infty \int_{ WG\Omega_{\alpha}(\bR^d) } \left( 1-\cL^\rw\Big[ \bB_\alpha( \rx, 2\varepsilon^{1/r})\Big] \right)^n d\cL^\rw(\rx) d\varepsilon.
\end{align}
Firstly, using Corollary \ref{cor:CamMartinFormRP}
\begin{align*}
\eqref{eq:EmpiricalRateCon2.1} \leq& 2^r\int_0^c \int_{A_{\varepsilon, n}} \left( 1- \exp\Big( - I(\rx, \varepsilon) - \fB(\varepsilon^{1/r})\Big) \right)^n d\cL^\rw (\rx) d\varepsilon
\\
\leq& 2^r \int_0^c \left( 1- \exp\left( -\Big( \tfrac{\sqrt{\log(n)}}{3} + \sqrt{2 \fB(\varepsilon^{1/r}) } \Big)^2 \right) \right)^n d\varepsilon
\\
\leq& 2^r\int_0^{\fB^{-1}\Big(\tfrac{\log(n)}{8}\Big)^r} d\varepsilon + 2^r \int_{\fB^{-1}\Big(\tfrac{\log(n)}{8}\Big)^r}^{ \fB^{-1}\Big(\tfrac{\log(n)}{8}\Big)^r \vee c} \left( 1- \exp\left( -\Big( \tfrac{\sqrt{\log(n)}}{3} + \sqrt{2 \fB(\varepsilon^{1/r}) } \Big)^2 \right)\right)^n d\varepsilon
\\
\leq& 2^r \fB^{-1}\Big(\tfrac{\log(n)}{8}\Big)^r + 2^r \left( 1- \exp\left( -\frac{25 \log(n)}{36} \right)\right)^n
\end{align*}
since $c<1$.
Now, since $\log\Big( \tfrac{1}{\varepsilon}\Big) = o\Big( \fB(\varepsilon)\Big)$ as $\varepsilon \to 0$, we have for all $p, q>0$ that $\exp\Big( \tfrac{-n}{p}\Big) = o\Big( \fB^{-1}(n)^q \Big)$. Therefore
\begin{align*}
\left( 1- \exp\left( -\frac{25 \log(n)}{36} \right)\right)^n =& \Big( 1 - \frac{1}{n^{25/36} } \Big)^n
\\
\leq& \exp\Big(-n^{11/36}\Big) = o\left( \exp\Big( \frac{-\log(n)}{2}\Big) \right).
\end{align*}
Next, applying Lemma \ref{lem:BorellInequalRP} to
\begin{align*}
\cL^\rw \Big[ A_{\varepsilon, n}^c \Big] =& 1 - \cL^\rw \Big[ A_{\varepsilon, N} \Big]
\\
\leq& 1 - \Phi\left( \Phi^{-1}\Big( \cL^\rw\Big[ \bB_\alpha(\rId, \varepsilon^{1/r}) \Big]\Big) + \frac{\sqrt{\log(n)}}{3} - \Phi^{-1}\Big( \cL^\rw\Big[ \bB_\alpha(\rId, \varepsilon^{1/r}) \Big]\Big) \right)
\\
=& 1 - \Phi\left( \frac{\sqrt{\log(n)}}{3} \right) \leq \exp\left( - \frac{\log(n)}{18} \right).
\end{align*}
Third and finally, we make the substitution
\begin{align}
\nonumber
\eqref{eq:EmpiricalRateCon2.3} \leq& 2^r \int_{ WG\Omega_{\alpha}(\bR^d) } \int_0^\infty \cL^\rw\Big[ \bB^c( \rx, 2\varepsilon^{1/r})\Big] d\varepsilon \cdot \cL^\rw\Big[ \bB^c( \rx, 2 c^{1/r})\Big] ^{n-1} d\cL^\rw(\rx)
\\
\label{eq:EmpiricalRateCon3.1}
\leq& \bE\Big[ \| \rw\|_\alpha^r \Big] \int_{ WG\Omega_{\alpha}(\bR^d) } \cL^\rw\Big[ \bB^c( \rx, 2 c^{1/r})\Big] ^{n-1} d\cL^\rw(\rx)
\\
\label{eq:EmpiricalRateCon3.2}
&+ \int_{ WG\Omega_{\alpha}(\bR^d) } \| \rx \|_{\alpha}^r \cdot \cL^\rw\Big[ \bB^c( \rx, 2 c^{1/r})\Big] ^{n-1} d\cL^\rw(\rx)
\end{align}
to account for the integral over $(c, \infty)$. Next, we partition this integral over $A_{c, n}$ and $A_{c, n}^c$. Arguing as before, we have
\begin{align*}
\int_{ A_{c, n} }& \left( 1 - \cL^\rw\Big[ \bB_\alpha( \rx, 2 c^{1/r})\Big] \right)^{n-1} d\cL^\rw(\rx)
\\
&\leq \left( 1 - \exp\Big( - \frac{25 \log(n)}{36} \Big) \right)^{n-1} = o\Big( \fB^{-1} \Big(\log(n)\Big)^q \Big)
\end{align*}
and
\begin{align*}
\int_{ A_{c, n}^c }& \left( 1 - \cL^\rw\Big[ \bB_\alpha( \rx, 2 c^{1/r})\Big] \right)^{n-1} d\cL^\rw(\rx)
\\
&\leq \cL^\rw \Big[ A_{c, n}^c\Big] = o\Big( \fB^{-1}\Big( \log(n)\Big)^q \Big).
\end{align*}
for any choice of $q$. Therefore
\begin{align*}
\eqref{eq:EmpiricalRateCon3.1} \leq& o\Big( \fB^{-1}\Big( \log(n)\Big)^q \Big)
\\
\eqref{eq:EmpiricalRateCon3.2} \leq& \bE\Big[ \| \rw \|_\alpha^{2r} \Big]^{1/2} \cdot \left( \int_{ WG\Omega_{\alpha}(\bR^d) } \cL^\rw \Big[ \bB_\alpha^c( \rx, 2c^{1/r}) \Big]^{2(n-1)} d\cL^\rw(\rx) \right)^{1/2}
\\
\leq& o\Big( \fB^{-1}\Big( \log(n)\Big)^q \Big)
\end{align*}
Thus
$$
\eqref{eq:EmpiricalRateCon1.1} \leq 2^r \fB^{-1}\Big(\tfrac{\log(n)}{8}\Big)^r + o\Big( \fB^{-1}\Big( \log(n)\Big)^r \Big),
$$
and \eqref{eq:thm:EmpiricalRateCon1} follows since $\fB^{-1}$ is regularly varying, so that $\fB^{-1}\big(\tfrac{\log(n)}{8}\big) \lesssim \fB^{-1}\big(\log(n)\big)$.
\end{proof}
\subsection{Convergence of (non-weighted) empirical measure}
In this section, we study the empirical measure with uniform weights. This is the form that the empirical distribution more commonly takes.
\begin{definition}
\label{dfn:EmpiricalMeasure}
For an enhanced Gaussian measure $\cL^\rw$, let $(\Omega, \cF, \bP)$ be a probability space containing $n$ independent, identically distributed enhanced Gaussian processes $(\rw^i)_{i=1, ..., n}$.
Then we define the empirical measure to be the random variable $\cE_n: \Omega \to \cP_2\big( WG\Omega_{\alpha}(\bR^d)\big)$
\begin{equation}
\label{eq:dfn:EmpiricalMeasure}
\cE_n = \frac{1}{n} \sum_{i=1}^n \delta_{\rw^i}.
\end{equation}
\end{definition}
Before we state our theorem for the rate of convergence of empirical measures, we will need the following lemma.
\begin{lemma}
\label{lemma:Boissard}
Let $\eta>0$. Let $(E, d)$ be a metric space and let $\mu \in \cP(E)$ with $\supp(\mu) \subset E$ such that $\fN(t, \supp(\mu)) < \infty$ for every $t>0$, and let
$$
\Delta_\mu:= \max \Big\{ d(x, y): x, y \in \supp(\mu) \Big\}.
$$
Then $\exists c>0$ such that
$$
\bE\Big[ \bW_{d}^{(r)}( \cE_n, \mu) \Big] \leq c \bigg( \eta + n^{-1/2r} \int_\eta^{\Delta_\mu/4} \fN\Big( z, \supp(\mu) \Big)^{\tfrac{1}{2r}} dz \bigg).
$$
\end{lemma}
For a proof of Lemma \ref{lemma:Boissard}, see \cite{boissard2014mean}.
\begin{theorem}
\label{thm:EmpiricalRateCon2}
Let $\cL^W$ be a Gaussian measure satisfying Assumption \ref{assumption:GaussianRegularity} and let $\cL^\rw$ be the law of the lift to the Gaussian rough path. Let $(\Omega, \cF, \bP)$ be a probability space containing a sequence of independent, identically distributed Gaussian rough paths with law $\cL^\rw$. Let $\cE_n$ be the empirical measure with samples drawn from the measure $\cL^\rw$. Then for any $1\leq r <\infty$
\begin{equation}
\label{eq:thm:EmpiricalRateCon2}
\bE\bigg[ \bW_{d_\alpha}^{(r)}\Big(\cL^\rw, \cE_n\Big)^r \bigg]
\lesssim
\fB^{-1}\Big( \log(n)\Big).
\end{equation}
\end{theorem}
Typically, the measure $\cE_n$ is easier to work with since one does not have to calculate the (random) weights associated to each sample. However, it is a less accurate approximation than the weighted empirical so the fact that it converges at the same rate is noteworthy.
\begin{proof}
Using the same construction as in Equation \eqref{eq:Ball+shift}, we consider the set
$$
\rS:= T\Big( \bB_\alpha(\rId, \varepsilon), \delta_\lambda(\rk) \Big)
$$
and denote the conditional measure $\tilde{\cL}^\rw = \tfrac{1}{\cL^\rw[\rS]} \1_{\rS} \cL^\rw$. An application of Lemma \ref{lemma:Boissard} gives
\begin{align}
\nonumber
\bE\bigg[ \bW_{d_\alpha}^{(r)}\Big(\cL^\rw, \cE_n\Big)^r \bigg]^{1/r}
\leq&
\bE\bigg[ \bW_{d_\alpha}^{(r)}\Big(\cL^\rw, \tilde{\cL}^\rw \Big)^r \bigg]^{1/r} +
\bE\bigg[ \bW_{d_\alpha}^{(r)}\Big(\cE_n, \tilde{\cL}^\rw \Big)^r \bigg]^{1/r}
\\
\label{eq:thm:EmpiricalRateCon2-1}
\leq& \bE\bigg[ \bW_{d_\alpha}^{(r)}\Big(\cL^\rw, \tilde{\cL}^\rw \Big)^r \bigg]^{1/r}
+
2c\varepsilon + cn^{-1/4} \int_{2\varepsilon}^{\tfrac{\sigma\lambda + \varepsilon}{2}} \fN(z, \rS)^{1/4} dz
\end{align}
where
$$
\sigma: = \sup_{s, t \in [0,T]} \frac{\bE\Big[ |W_{s, t}|^2 \Big]^{1/2}}{|t-s|^\alpha}.
$$
An application of the Talagrand inequality for rough paths (see for example \cite{riedel2017transportation}) and Lemma \ref{lem:BorellInequalRP} gives that
$$
\bE\bigg[ \bW_{d_\alpha}^{(r)}\Big(\cL^\rw, \tilde{\cL}^\rw \Big)^r \bigg]^{1/r} \leq \sqrt{2\sigma^2} \sqrt{-\log\bigg( \Phi\Big( \lambda + \Phi^{-1}(e^{-\fB(\varepsilon)})\Big) \bigg) }.
$$
Next, using the same estimates for $\Phi$ as in the proof of Theorem \ref{thm:SmallBallMetricEnt1}, we get that
$$
\bE\bigg[ \bW_{d_\alpha}^{(r)}\Big(\cL^\rw, \tilde{\cL}^\rw \Big)^r \bigg]^{1/r} \leq
2\sigma \exp \bigg( \tfrac{-1}{4} \Big( \lambda - \sqrt{2 \fB(\varepsilon)} \Big)^2 \bigg)
$$
provided $\fB(\varepsilon)\geq \log(2)$ and $\lambda \geq \sqrt{2 \fB(\varepsilon)}$.
Secondly,
\begin{align*}
\frac{1}{n^{1/4}} \int_{2\varepsilon}^{\tfrac{\sigma\lambda + \varepsilon}{2}} \fN(z, \rS)^{1/4} dz
\leq&
\frac{\sigma\lambda + \varepsilon}{2n^{1/4}} \cdot \fN(2\varepsilon, \rS)^{1/4}
\leq \frac{\sigma \lambda + \varepsilon}{2 n^{1/4}} \cdot \fN( \varepsilon , \lambda \rk)^{1/4}.
\end{align*}
By Proposition \ref{pro:MetricEnt1}, we have that
$$
\fN(\varepsilon, \lambda \rk) \leq \exp\Big( \tfrac{\lambda^2}{2} + \fB\big(\tfrac{\varepsilon}{2} \big) \Big) \leq \exp\Big( \tfrac{\lambda^2}{2} + \kappa\cdot \fB(\varepsilon) \Big).
$$
Now we choose
$$
\varepsilon = \fB^{-1}\Big( \tfrac{1}{6+\kappa} \cdot \log(n)\Big),
\quad
\lambda = 2\sqrt{\tfrac{2}{6+\kappa}\cdot \log(n)}
$$
and Equation \eqref{eq:thm:EmpiricalRateCon2-1} becomes
\begin{align*}
\bE\bigg[ \bW_{d_\alpha}^{(r)}\Big(\cL^\rw, \cE_n\Big)^r \bigg]^{1/r}
\leq
c\bigg( \fB^{-1}\Big( \tfrac{1}{6+\kappa} \cdot \log(n)\Big) + \Big( 1 + \sigma\sqrt{\tfrac{1}{6+\kappa} \log(n)} \Big) n^{\tfrac{-1}{12+2\kappa}} \bigg)
\end{align*}
provided $n$ is large enough. As $n\to \infty$, the leading term will be $\fB^{-1}\Big(\log(n)\Big)$.
\end{proof}
\subsection*{Acknowledgements}
The author is grateful to Professor Dr Peter Friz and Dr Yvain Bruned, whose examination of the author's PhD thesis led to several improvements in the mathematics on which this paper is based.
The author is also very grateful to Dr Gon\c calo dos Reis who supervised the PhD and allowed the author to pursue the author's own projects.
The author is grateful to the anonymous reviewer whose feedback led to a number of improvements to the paper.
The author acknowledges with thanks the funding received from the University of Edinburgh during the course of the author's PhD.
\bibliographystyle{alpha}
\end{document} | 0.021697 |
Graduate Degrees
Master of Fine Arts
A two-year program administered in collaboration with the Graduate School, the Master of Fine Arts is a professional degree in the practice of art preparing students to pursue careers as professional artists. The opportunity to gain experience as a teaching assistant is available on a competitive basis. Applicants must hold a Bachelor of Fine Arts, or equivalent, from an accredited school. The intended area of primary interest must be indicated and the applicant must provide 20 images or videos of recent work. Transfer work applicable to the M.F.A. degree must have been completed within five years of the date of application. Supplemental applications are available at roski.usc.edu.
Supplemental applications and related materials should be sent directly to: Graduate Programs, Roski School of Fine Arts, Watt Hall 104, University of Southern California, University Park, Los Angeles, CA 90089-0292. Applicants wishing to have their portfolios returned should include a stamped, self-addressed envelope or mailing container.
Public Art Studies
The Master of Public Art Studies program is a two-year program administered by the Roski School of Fine Arts and designed to meet the special training needs of individuals whose career goals are oriented toward professional work in public art. The long-range objectives of the program are to provide students and professionals with the necessary skills, knowledge and experience to become successful administrators and problem solvers. The program is founded on the principle of using the facilities of the university both as a practical laboratory and as a catalyst for furthering dialogue, collaboration and research. The goal of the program is to build bridges between disciplines, the university and the community.
Admission Requirements
Admission to the Public Art Studies program is granted through the USC Office of Graduate Admission, which receives and processes all applications, evaluates credentials and issues notification letters. The Roski School of Fine Arts establishes and monitors the standards under which students are admitted. Admission to the university under the standards of the Roski School of Fine Arts is determined by the Office of Graduate Admission on the recommendation of the Public Art Studies program. The following are the basic requirements:
- A Bachelor of Arts or Bachelor of Fine Arts degree or its equivalent from an accredited college or university comparable in standards to that awarded at USC;
- A 3.0 overall GPA
- Three letters of recommendation
Thesis Requirements
A master's thesis committee comprises three members: the director of the program, the primary reader and a professional from the student's area of emphasis (administration, history, practice).
The thesis committee shall be established after the student completes the second semester's course work. The committee, after a comprehensive review of the candidate's past work and professional goals, will determine if the student is to be recommended for advancement.
Program Requirements
A minimum of 32 units, usually taken during a two-year period, is required, to be distributed as follows:
TITLE: Proof-Check : Let $R$ be a finite commutative ring with unity. Prove that every nonzero element of $R$ is either a unit or a zero-divisor.
QUESTION [0 upvotes]: Let $R$ be a finite commutative ring with unity. Prove that every nonzero element of $R$ is either a unit or a zero-divisor.
I know this question has a solution here Every nonzero element in a finite ring is either a unit or a zero divisor.
But I want you to check my proof.
Let $a \not = 0, a \in R$.
We have to prove that $a$ is either a unit or a zero divisor.
Suppose $a$ is a unit; then we have to show that it is not a zero divisor.
1) $\exists x \in R$ such that $ax = 1.$
2) Suppose that $ab = 0$ for some $b \in R$.
Now $ax = 1 \implies ax + 0 = 1 \implies ax + ab = 1 \implies a(x+b) = 1.$
$\therefore$ $x$ and $(x+b)$ are both multiplicative inverses of $a \implies x = x+b$.
$\therefore$ $b = 0$; hence $a$ is not a zero divisor.
In a similar way we can show that if $a$ is a zero divisor then it is not a unit.
a) Is my proof okay?
b) I am not using the "finite" and "commutative" conditions on $R$.
REPLY [3 votes]: No, the proof is not OK, as it has a fundamental problem: you've shown that there is no element that is both a unit and a zero divisor. Instead, you needed to show that there is no element that is neither a unit nor a zero divisor. To proceed in such a manner, you'd need to start by saying "Suppose $a$ is not a zero divisor", then somehow conclude that it's a unit.
The other thing to note is that "If $a$ is a zero divisor then it is not a unit" is logically equivalent to "If $a$ is a unit, then it is not a zero divisor". It is, in fact, the contrapositive of the statement. You don't have to prove both, or even say the other "can be proven similarly". | 0.04848 |
TITLE: how to explain prime numbers to children
QUESTION [9 upvotes]: My little cousin (12 years old) asked me how emails are encrypted, and I want to answer her in such a way that she understands it. This is difficult, but I am happy with teaching the definition of a prime number and a composite number.
Is there a teacher here who knows how to explain it in such a way that I can make her understand how to factor a number into primes and what a prime number is?
REPLY [1 votes]: Start listing the multiples of 2 and 3 and so on, marking them off on the number line, and say a prime is any number above 1 that can't be obtained by multiplying any two numbers other than itself and 1 - that is, the numbers that start each different sequence like counting by 2s and 3s.
Given her age, I tried an introduction to the basic properties of modular arithmetic and how it gets into cryptography instead. I will not be held liable for any damages that may be incurred by trying to teach abstract algebra to a 12 year-old, however I'll be happy to assist with technical support. Note that computing remainders is standard elementary school curriculum in the U.S.:
Teach multiplication on a 12-hour clock first, until she gets the hang of it. Then point out that 3*4=0 is an unusual property for a number system. Then ask what the numbers are where a clock of that size will behave normally, and not have two numbers' product be 0 unless one of them is 0.
Now try the Caesar cipher. In the case of the Caesar cipher, to encrypt it you need to pick some number of letters to shift by, and that same number of letters is what you need to know to decrypt it. But if you want to encode a message with the same code each time, but send a message to different people, you need the information you need to decode it to be different from the information you need to encode it. But how do you build such a thing?
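To make the Caesar-cipher step concrete, here is a small Python sketch (the function and variable names are my own). Note that the decoding key is just the negative of the encoding key, so whoever can encode can also decode:

```python
# Toy Caesar cipher on lowercase letters: shifting k places around a
# 26-letter "clock" encodes, and shifting -k places decodes.
def caesar(text, k):
    return "".join(chr((ord(c) - ord("a") + k) % 26 + ord("a")) for c in text)

secret = caesar("hello", 3)   # encode with shift 3 -> "khoor"
opened = caesar(secret, -3)   # decoding requires knowing the same shift
```

This symmetry is exactly the limitation described above: the information needed to decode is the same as the information needed to encode.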
[The next two paragraphs could be skipped, or the explanation could end with the next paragraph.]
Give her an exponent $b$ and the last digit of $a^b$, and ask her to find $a$ by looking on the (2-1)*(5-1)-hour clock for a number $d$ such that $bd$ is $1$ on that clock. At this stage you'd have to write down the multiplication table. It will happen that in taking the last digits of $2^d, 3^d, ...$, one eventually gets a number that, magically, is $a$. Clearly the encoding process is not the same as the decoding process. But it's not at all clear why this works, so the long answer is to give a bit deeper an understanding of how you make a clock.
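The last-digit game can be checked mechanically; here is a short Python sketch (my own illustration, with $b = d = 3$, which works because $3\cdot3 = 9$ is $1$ on the $(2-1)(5-1) = 4$-hour clock):

```python
# Encode a digit a as the last digit of a**b; decode by raising to d,
# where b*d leaves remainder 1 on the 4-hour clock.
b, d = 3, 3
assert (b * d) % 4 == 1

def encode(a):
    return (a ** b) % 10   # only the last digit is published

def decode(c):
    return (c ** d) % 10   # a different-looking rule undoes the encoding

recovered = [decode(encode(a)) for a in range(10)]  # gives 0..9 back
```

This mirrors, in miniature, how public-key systems separate the encoding recipe from the decoding recipe.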
Suppose John and Robert are the same person, and Robert and Tim are the same person. Then John and Tim are the same person. Now suppose you have three fruit. None of them are the same fruit, but you can use a similar structure to group the fruit together. If two of the fruits $a$ and $b$ are the same kind of fruit, and $b$ and the third fruit, $c$, are the same kind of fruit, then we assume all three fruit are the same kind of fruit.
Well, you can do the same thing with numbers, making even and odd numbers, for example. But there's something extra about even and odd numbers, because if even numbers are added together, you get an even number, two odd numbers get you an even number, and so on. Those rules weren't imposed, they were found out. What do they say? That if two numbers $a$ and $b$ are of the same type, then the sums $a+c$ and $b+c$ are of the same type, no matter what $c$ is. Incredibly special! Now, if you were just looking to find a list of all the different types of fruit, you would think of two apples as being the same thing, and ignore every apple beyond the first one you saw. So let's start from 0 and ignore all even and odd numbers beyond the first one we see. 0 is even, and 1 is odd, but then 2 is even again. So 1+1=2 becomes 1+1 is the same type as 0. 1+0 is the same type as 1, and so on. If you take the addition table for even and odd numbers, and the addition table for the 0 and 1, they're the same.
Now, on a number line, as an experiment, ask her to see what happens if all you do is say that 0 and 3 are the same type of number, and that if $a$ and $b$ are of the same type, then the sums $a+c$ and $b+c$ are of the same type, no matter what number $c$ is. You can color the numbers by their types, or just write the first number of that type underneath it. What you'll see is that every number gets one of three types, and the pattern is repeating. If you think of them as all the same thing when they're the same type, you'd just be coiling the number line up into a circle. The number of points on the circle is the number you chose to say was the same type as 0.
Now take the 12-hour clock, and impose the same rule about adding types, but pick a number not equal to 0, declare it to be the same type as 0, and see how the rules still assign everything a type. But how many types, exactly? | 0.779839 |
Route Map from Worcester, MA to Framingham, MA
Optimal route map between Worcester, MA and Framingham, MA. This route will be about 27 Miles.
The driving route information(distance, estimated time, directions), flight route, traffic information and print the map features are placed on the top right corner of the map.
Worcester, MA
Framingham, MA
* Weather information on route, provide by Open Weather Map.
* The total population living within the city limits, using the latest US census 2014 population estimates.
* The total number of households within the city limits using the latest 5 year estimates from the American Community Survey. | 0.136349 |
celebrity thongs exposed, careful mp3, avalon productions london. balljoint replacement case forensic saliva study? carl carmody baba dvd ramdev; business administration degree programs in utah... bike trailer cover; best karaoke songs for altos; best economey. bmp ddb agency, blue and white squares; cantonese now talk. call uk from usa for free... camel carnival: brake disk kit... bird foy home vance black leather woman arizona domestic violence shelters.
casino gambeling niagara falls, braunton cc: because of you lyrics by nickelback... attack real shark; black men haircuts 2008. bankruptcy full service, baked pierogies; camp nou adress... brosur pendaftaran; biliary hemartoma? auto detroit jeep photo, clavicle piercing, bodystep instructor. brent corrigan brad davis... baptist academy indiana! capitated versus fee for service reimbursement boy genius films; belt decals.
ayu ratna pratiwi buy caucasian ovcharka, bodegas salentein malbec... buy bicycle frame: biking home bradford park hoa. baking a potato in the oven; bad carb carb diet good; boring night. can t create file right click, blank burlap bags by meeting patrick lencioni. brianna bronze goddess... be a part of something amazing! back up data center blood pressure 14050; canada pension plan deduction tables. communication system design magazine, 3com 3c1722. | 0.004414 |
TITLE: Proving sequence limit given condition weaker than monotonicity
QUESTION [0 upvotes]: Let $(a_n)_{n \geq 1}$ be a sequence of positive numbers such that $a_{n+m} \leq a_n + a_m$ for all $m,n \geq 1$. (So the sequence is not necessarily monotonic, but from $a_n$ to $a_{n+1}$, it can increase by at most $a_1$). Let \begin{equation*}I = \inf\left\{\frac{a_n}{n}: n \geq 1\right\}\end{equation*} It is to be shown that in fact \begin{equation*}\lim_{n \to \infty}\frac{a_n}{n} = I\end{equation*}
I attempted to show that the sequence $\left(\frac{a_n}{n}\right)$ is monotonically decreasing, which would imply the result, but this in fact seems to be false in the general case. So any suggestions would be appreciated.
REPLY [1 votes]: My suggestion is to build the following two steps argument
First, note that for any $\epsilon$ it is always possible to find a sequence $\{ a_{N+kn} \}_{n}$ such that $\frac{a_{N+nk}}{N+nk} < I + \epsilon$. This follows by choosing $N$ and $k$ such that $a_N < NI + N \epsilon$ and $a_k < kI + k \epsilon$ and from the inequality $a_{n+m} \leq a_n + a_m$.
Then, note that for any $n$ and for $1 \leq i \leq k$ one has $a_{N+kn+i} - a_{N+kn} < i a_1$, therefore
$$
\frac{a_{N+kn+i}}{N+kn+i} \leq I + 2 \epsilon
$$
for all $i$ for any $n$ sufficiently large. | 0.068996 |
.
Using StimulusJS and nested forms, we create the first parts of a questionnaire. Dynamic surveys can be difficult to architect and maintain. In this episode, we take a simple approach to creating questionnaires..
Ruby and Ruby on Rails tricks from, dot files, operators, bundling, StimulusJS and more.
Using StimulusJS controllers, adding nested forms to a Rails application is easy and unobtrusive. In this episode, we look at an alternative way of creating nested forms without the Cocoon gem. | 0.013158 |
TITLE: Doubt on the $n$ distinct $n$th roots of a complex number
QUESTION [2 upvotes]: I know that, given $n\in\mathbb{N}$, for every $z\in\mathbb{C}$, $z\neq0$, there are exactly $n$ distinct $n$th roots of $z$.
To prove this: given $z\neq0$, a $n$th root of $z$ is a complex number $w$ such that $w^n=z$. By posing
$$
w = r(\cos\theta+i\sin\theta),
$$
we have $w^n=r^n(\cos n\theta+i\sin n\theta)$. So, in order to have $w^n=z$, it must be
$$
r = |z|,\qquad n\theta=\phi+2k\pi,\quad k\in\mathbb{Z},
$$
where $\phi$ is the argument of $z$, because two complex numbers are equal if and only if their modulus coincide and their arguments are the same up to a multiple of $2\pi$. Is it right?
My question: Why in some book, in order to have $w^n=z$, is $n\theta=\operatorname{Arg}z+2k\pi$, where $\operatorname{Arg}z$ is the principal value of the argument of $z$? Should $\phi$ be the argument (and not the principal value of the argument) of $z$?
Thank You
REPLY [2 votes]: There is no such thing as the argument: every complex number has infinitely many arguments. If you say that you are working with the principal value of the argument, then there is no ambiguity (although, in fact, any argument will do). | 0.017577 |