TITLE: How to evaluate these indefinite integrals with $\sqrt{1+x^4}$?
QUESTION [10 upvotes]: These integrals are supposed to have an elementary closed form, but Mathematica only returns something in terms of elliptic integrals. I got them from the book Treatise on Integral Calculus by Edwards. How can we evaluate them?
$$
I = \int{\frac{\sqrt{1+x^4}}{1-x^4}dx}\\
J = \int{\frac{x^2}{(1-x^4)\sqrt{1+x^4}}} dx
$$
REPLY [10 votes]: The problem is on p. 319 in the 1921 edition of Volume I. Write
$$
I = \int \frac{1+x^4}{(1-x^4)} \frac{dx}{\sqrt{1+x^4}}
,\qquad
J = \int \frac{x^2}{(1-x^4)} \frac{dx}{\sqrt{1+x^4}}
,
$$
and note that $I=\frac{1}{2}(A+B)$ and $J=\frac{1}{4}(A-B)$, where
$$
A = \int\frac{1+x^2}{1-x^2} \frac{dx}{\sqrt{1+x^4}}
,\qquad
B = \int\frac{1-x^2}{1+x^2} \frac{dx}{\sqrt{1+x^4}}
.
$$
These integrals $A$ and $B$ appear in an exercise on p. 103, and can be solved by setting $z=\frac{\sqrt{1+x^4}}{x}$ (for $x > 1$, say, so that the change of variables is invertible; the final result doesn't depend on this assumption, as can be checked by differentiating it). This gives
$$
\frac{dz}{dx} = \frac{(x^2+1)(x^2-1)}{x^2 \sqrt{1+x^4}}
,\quad
z^2 = x^2 + \frac{1}{x^2}
,\quad
z^2 \pm 2 = \left( x \pm \frac{1}{x} \right)^2 = \left( \frac{x^2 \pm 1}{x} \right)^2
,
$$
so that
$$
A = -\int \frac{dz}{z^2-2}
,\qquad
B = -\int \frac{dz}{z^2+2}
,
$$
and I think you can take it from there!
(Another option would be to do as Edwards suggests on p. 311, and evaluate $A$ and $B$ by letting $z=1/(x-\frac{1}{x})$ and $z=1/(x+\frac{1}{x})$, respectively.)
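As a sanity check on the substitution before integrating, a short SymPy script can verify that $-\frac{1}{z^2\mp2}\frac{dz}{dx}$ reproduces the integrands of $A$ and $B$ (a sketch; the variable names are mine):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
z = sp.sqrt(1 + x**4) / x

# Integrand of A and the claimed pullback -dz/(z^2 - 2)
A_integrand = (1 + x**2) / ((1 - x**2) * sp.sqrt(1 + x**4))
print(sp.simplify(A_integrand + sp.diff(z, x) / (z**2 - 2)))  # 0

# Integrand of B and the claimed pullback -dz/(z^2 + 2)
B_integrand = (1 - x**2) / ((1 + x**2) * sp.sqrt(1 + x**4))
print(sp.simplify(B_integrand + sp.diff(z, x) / (z**2 + 2)))  # 0
```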
TITLE: Multiplicity of eigenvalues in 2-dim families of symmetric matrices
QUESTION [3 upvotes]: Say you have 2 symmetric matrices, $A$ and $B$, and you know that every linear combination $xA+yB$ ($x,\,y\in \mathbb{R}$) has an eigenvalue of multiplicity at least $m>1$. Such a situation can of course be obtained if $A$, $B$ have a common eigenspace of multiplicity at least $m$.
My question is: is it the only possibility?
A way to proceed is the following: the characteristic polynomial of the generic matrix is $\det(xA+yB-tI)$, and its discriminant $\Delta$ (with respect to $t$) is a homogeneous polynomial in $x,y$ of degree $n^2-n$, where $n$ is the number of rows and columns in $A$ and $B$. Since every matrix in the family has some eigenvalue of multiplicity $>1$, the polynomial $\Delta$ vanishes identically, hence all the $n^2-n+1$ coefficients in $\Delta$ vanish. This gives $n^2-n+1$ polynomial conditions on the $n^2+n$ coefficients of $A$ and $B$, and this might help somehow.
Still, finding these polynomial conditions and solving them both seem painful and extremely computational. Maybe there are better ways to proceed $\ldots$?
Thanks in advance!
REPLY [2 votes]: I think that the most interesting examples come from systems of conservation laws in physics. These are systems of PDEs (partial differential equations). When they are first-order and linear (linearity can be achieved by linearizing about a constant state), you have a system
$$\partial_tU+\sum_{\alpha=1}^dA^\alpha \partial_\alpha U=f,$$
where $A^\alpha\in{\bf M}_n({\mathbb R})$. The space dimension is usually $d=3$ but may be smaller for waves propagating in specific directions. Examples include the Euler equations for gas dynamics, Maxwell's equations for electromagnetism, magnetohydrodynamics, elasticity, and so on.
The symbol $A(\xi)=\sum_\alpha \xi_\alpha A^\alpha$ plays a fundamental role. Its eigenvalues are the wave velocities, times the modulus $|\xi|$. In several examples, especially when the system has a group invariance, the eigenvalues have constant multiplicities, yet the eigenspaces do vary with $\xi$. Let us take the example of Maxwell's equations in vacuum. Here $d=3$ and $n=6$. We have
$$A(\xi)=\begin{pmatrix} 0_3 & J(\xi) \\ -J(\xi) & 0_3 \end{pmatrix},$$
where $J(\xi)$ is the matrix of the cross product by $\xi$: $J(\xi)X=\xi\times X$.
The eigenvalues of $A(\xi)$ are $0$ and $\pm|\xi|$. None of the eigenvectors is constant. In particular, two matrices $A^\alpha$ don't have a common eigenspace.
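To see this concretely, here is a quick NumPy sketch (the test directions are arbitrary) checking that $A(\xi)$ is symmetric with eigenvalues $0,\pm|\xi|$, and that its kernel rotates with $\xi$:

```python
import numpy as np

def J(xi):
    # Matrix of the cross product: J(xi) @ v == np.cross(xi, v)
    x, y, z = xi
    return np.array([[0., -z,  y],
                     [ z, 0., -x],
                     [-y,  x, 0.]])

def A(xi):
    Z = np.zeros((3, 3))
    return np.block([[Z, J(xi)], [-J(xi), Z]])  # symmetric, since J(xi)^T = -J(xi)

xi = np.array([1., 2., 2.])                     # |xi| = 3
print(np.round(np.linalg.eigvalsh(A(xi)), 6))   # [-3 -3  0  0  3  3]

# The eigenspaces move with xi: the kernel for xi = e1 differs from that for xi = e2
for e in (np.array([1., 0., 0.]), np.array([0., 1., 0.])):
    w, V = np.linalg.eigh(A(e))
    print(e, np.round(V[:, np.isclose(w, 0.)], 3).T)
```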
\begin{document}
\runningheads{D. Aristoff, T. Leli\`evre, and G. Simpson}{ParRep for
simulating Markov chains}
\title{The parallel replica method for simulating long trajectories of
Markov chains}
\author{David Aristoff\affil{a}\corrauth, Tony Leli\`evre\affil{b},
and Gideon Simpson\affil{c}}
\address{\affilnum{a}School of Mathematics, University of Minnesota\\
\affilnum{b}CERMICS, \'Ecole des Ponts ParisTech\\
\affilnum{c}Department of Mathematics, Drexel University}
\corraddr{daristof@umn.edu}
\begin{abstract}
The parallel replica dynamics, originally developed by
A.F. Voter, efficiently simulates very long trajectories of
metastable Langevin dynamics.
We present an analogous algorithm for discrete time
Markov processes. Such Markov processes naturally arise, for example,
from the time discretization of a continuous time stochastic dynamics.
Appealing to properties of quasistationary
distributions, we show that our algorithm
reproduces exactly (in some limiting regime) the law of the original trajectory, coarsened over the
metastable states.
\end{abstract}
\keywords{Markov chain, parallel computing, parallel replica dynamics,
quasistationary distributions, metastability}
\received{XXX}
\maketitle
\section{Introduction}
\label{s:intro}
We consider the problem of efficiently simulating time homogeneous
Markov chains with {\it metastable states}: subsets of state
space in which the Markov chain remains for a long time before leaving.
By a Markov chain we mean a {\em discrete time} stochastic
process satisfying the Markov property.
Heuristically, a set $S$ is metastable for a given Markov chain if the Markov chain
reaches local equilibrium in $S$ much faster than
it leaves $S$. We will define local equilibrium precisely below, using
{\em quasistationary distributions} (QSDs). The simulation of an exit event from
a metastable state using a naive integration technique can be very time consuming.
Metastable Markov chains arise in many contexts. The dynamics of physical systems are often modeled by
memoryless stochastic processes, including Markov chains, with
widespread applications in physics, chemistry, and
biology. In computational statistical physics (which is the main
application field we have in mind),
such models are used to understand macroscopic properties of matter,
starting from an atomistic description.
The models can be discrete or continuous in time.
The discrete in time case has particular importance:
even when the underlying model is continuous in time,
what is simulated in practice is a Markov chain obtained by
time discretization. In the context of computational statistical physics, a widely used continuous time
model is the Langevin dynamics~\cite{Lelievre:2010uu}, while a
popular class of discrete time models are the Markov State
Models~\cite{Prinz:2011id, Chodera:2007bs}.
For details, see~\cite{Schutte:2013aa,Lelievre:2010uu}.
For examples of discrete time models
not obtained from an underlying continuous time dynamics,
see~\cite{scoppola-94,bovier-2002}. In this article, we propose an
efficient algorithm for simulating metastable Markov chains over very
long time scales. Even though one of our motivations is to treat
time discretized versions of continuous time models,
we do not discuss errors in exit events due to time discretization; we refer for
example to~\cite{bouchard-geiss-gobet-2013}
and references therein for an analysis of this error.
In the physical applications above, metastability arises from the
fact that the microscopic time scale (i.e., the physical time between
two steps of the Markov chain) is much smaller than the
macroscopic time scale of interest (i.e., the physical time to observe
a transition between metastable states). Both
energetic and entropic barriers can contribute to metastability.
Energetic barriers correspond to high energy saddle
points between metastable states in the potential energy landscape, while entropic
barriers are associated with narrow pathways between metastable states; see Figure~\ref{f:metastable}.
\begin{figure}
\begin{center}
\subfigure[Energetic
Barriers]{\includegraphics[width=8cm]{Energetic.pdf}}
\subfigure[Entropic
Barriers]{\includegraphics[width=8cm]{Entropic.pdf}}
\end{center}
\caption{(a) Energetic and (b) entropic metastable states of a discrete
configuration space Markov chain. The chain jumps from one point to another
according to the following Metropolis dynamics. If $X_n = x$, a direction
(in (a), left or right; in (b), up, down, left, or right) is selected uniformly at random. If there is
a point $y$ which neighbors $x$ in this direction, then with probability
$\min\{1, e^{V(x)-V(y)}\}$ we take $X_{n+1} = y$; otherwise $X_{n+1} = x$.
Here, $V$ is a given potential energy function. On the left, each point has only
two neighbors, and the potential energy is represented on the $y$-axis. On
the right, each point has the same potential energy and between
2 and 4 neighbors.}
\label{f:metastable}
\end{figure}
Many algorithms exist for simulating metastable
stochastic processes over long time scales. One of the most
versatile such algorithms is the {\em parallel replica
dynamics} (ParRep) developed by A.F. Voter and co-workers~\cite{Voter,Voter:2002p12678}.
ParRep can be used with both energetic and entropic barriers,
and it requires no assumptions about temperature,
barrier heights, or reversibility. The algorithm was
developed to efficiently compute transitions between
metastable states of Langevin dynamics.
For a mathematical analysis of ParRep in its original
continuous time setting, see~\cite{Gideon,Tony}.
In this article, we present an algorithm which is an adaptation
of ParRep to the discrete time setting. It applies to any
Markov chain.
ParRep uses many replicas of the process, simulated in parallel
asynchronously, to rapidly find transition pathways out of
metastable states. The gain in efficiency over direct simulation comes
from distributing the computational effort across many processors,
parallelizing the problem in time. The cost is that the trajectory
becomes coarse-grained, evolving in the set of metastable states
instead of the original state space. The continuous
time version of ParRep has been successfully
used in a number of problems in materials science
(see e.g.~\cite{Uberuaga:2007ho,Perez:2010dk,Perez:2013ge,Lu:2014hl,Joshi:2013wd,Komanduri:2000df,Baker:2012he}),
allowing for atomistic resolution while also reaching extended time scales of
microseconds, $10^{-6}$ s. For reference, the microscopic time scale --
typically the period of vibration of bond lengths -- is about
$10^{-15}$ s.
In the continuous time case, consistency of the
algorithm relies on the fact that first exit times from metastable states are
exponentially distributed. Thus, if $N$ independent
identically distributed (i.i.d.) replicas have
first exit times $T_i$, $i=1,\ldots, N$, then $N \min(T_1, \ldots,
T_N)$ has the same law as $T_1$. Now if
$K = \arg\min(T_1, \ldots, T_N)$ is the first replica which leaves the
metastable state
amongst all the replicas, then the simulation clock is advanced
by $N T_K$, and this time agrees in law with the
original process. In contrast, in the
discrete time case, the exit times from metastable states are
geometrically distributed. Thus, if $\tau_i$
are now the geometrically distributed first exit times,
then $N \min(\tau_1,\ldots, \tau_N)$ does not agree in law
with $\tau_1$. A different function of the $\tau_i$ must
be found instead. This is our achievement with Algorithm~\ref{algorithm1} and
Proposition~\ref{proposition1}. Our algorithm is
based on the observation that $N[\min(\tau_1,
\ldots, \tau_N)-1] + \min[i \in \{1, \ldots ,N \} , \, \tau_i = \min(\tau_1,
\ldots, \tau_N) ]$ agrees in law with $\tau_1$.
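
As an illustration of this identity (a sketch, not part of the algorithm
itself), the following short Python snippet checks it by Monte Carlo for
i.i.d. geometric exit times; the parameters and seed are arbitrary:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

p, N, samples = 0.05, 8, 10**6
tau = rng.geometric(p, size=(samples, N))  # i.i.d. geometric on {1,2,...}
m = tau.min(axis=1)
k = tau.argmin(axis=1) + 1                 # smallest index attaining the min
xi = N * (m - 1) + k

# Both means should be close to 1/p = 20, and the empirical
# distributions of xi and tau[:, 0] should agree.
print(xi.mean(), tau[:, 0].mean())
\end{verbatim}
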
This article is organized as follows.
In Section~\ref{QSD}, we formalize the notion of local equilibrium
using QSDs.
In Section~\ref{parrep} we present our discrete time
ParRep algorithm, and in Section~\ref{parrepmath} we study its consistency.
Examples and a discussion
follow in Section~\ref{EXAMPLE}.
\section{Quasistationary Distributions}\label{QSD}
Throughout this work, $(X_n)_{n\ge 0}$ will be a time homogeneous
Markov chain with values in a probability space $(\Omega,{\mathcal F},\PP)$.
For a random variable $X$ and probability measure $\mu$, we
write $X\sim \mu$ to indicate $X$ is distributed according to
$\mu$. For random variables $X$ and $Y$, we write $X \sim Y$ when $Y$
is a random variable with the same law as $X$.
We write $\mathbb{P}^{\mu}(X_n \in A)$ and
$\mathbb{E}^{\mu}[f(X_n)]$ to denote probabilities and expectations
for the Markov chain $(X_n)_{n\ge 0}$ starting from the indicated
initial distribution: $X_0 \sim \mu$. In the case that
$X_0 = x$, we write ${\mathbb P}^{x}(X_n \in A)$ and
$\mathbb{E}^{x}[f(X_n)]$ to denote probabilities and expectations
for the Markov chain starting from $x$.
To formulate and apply ParRep, we first need to define the
metastable subsets of $\Omega$, which
we will simply call {\em states}. The states will be used to
coarse-grain the dynamics.
\begin{definition}
Let ${\mathcal S}$ be the collection of states, which we
assume are disjoint bounded measurable subsets of~$\Omega$. We write
$S$ for a generic element of ${\mathcal S}$, and $\Pi:\Omega \to
\Omega/{\mathcal S}$ for the quotient map identifying the states.
\end{definition}
As we will be concerned with when the chain exits states,
we define the first exit time from $S$,
\begin{equation*}
\tau := \min\set{n\ge 0\,:\,X_n \notin S}.
\end{equation*}
Much of the algorithm and analysis depends on the properties of the
QSD, which we now define.
\begin{definition}
A probability measure $\nu$ with support in $S$ is a QSD if for all
measurable $A \subset S$ and all $n \in {\mathbb N}$,
\begin{equation}\label{QSD1}
\nu(A) = {\mathbb P}^\nu\left(X_n \in A\,|\,\tau > n \right).
\end{equation}
\end{definition}
Of course both $\tau$ and $\nu$ depend on $S$, but for ease of notation, we do not make
this explicit. The QSD can be seen as a local equilibrium
reached by the Markov chain, conditioned on the event that it
remains in the state. Indeed, it is easy to check that if $\nu$ is a
measure with support in $S$ such that,
\begin{equation}\label{QSD2}
\text{for any measurable $A\subset S$ and any $\mu$ with support in
$S$},\quad \nu(A) = \lim_{n\to \infty} {\mathbb P}^\mu\left(X_n \in A\,|\,\tau > n\right),
\end{equation}
then $\nu$ is the QSD, which is then unique. In Section~\ref{QSDmath},
we give sufficient conditions for existence and uniqueness of
the QSD and for the convergence~\eqref{QSD2} to occur (see
Theorem~\ref{theorem1}). We refer the reader
to
\cite{Cattiaux:2009um,del2004feynman,Martinez:1994vn,Meleard:2012vl,Tony,collet13:_quasi_station_distr}
for additional properties of the QSD.
\section{The Discrete Time ParRep Algorithm}
\label{parrep}
Using the notation of the previous section, the aim of the ParRep
algorithm is to efficiently generate a trajectory $({\hat X}_n)_{n\ge
0}$ evolving in $\Omega /{\mathcal S}$ which has, approximately, the
same law as the reference coarse-grained trajectory $(\Pi(X_n))_{n \ge 0}$.
Two of the parameters in the algorithm -- $\tcorr = \tcorr(S)$ and $\tphase =
\tphase(S)$, called the {\em decorrelation} and {\em dephasing times} --
depend on the current state $S$, but for ease of notation we
do not indicate this explicitly. See the remarks below Algorithm~\ref{algorithm1}.
\begin{algorithm}
\label{algorithm1}
Initialize a reference trajectory $X_0^{\refe} \in \Omega$. Let $N$
be a fixed number of replicas and $\tpoll$ a fixed polling time at which
the replicas resynchronize. Set the simulation clock to zero:
$\tsim = 0$. A coarse-grained trajectory $({\hat X}_n)_{n\ge 0}$
evolving in $\Omega /{\mathcal S}$ is obtained by iterating the
following: \vskip2pt
\noindent
\algorithmbox{{\bf Decorrelation Step:} Evolve the reference
trajectory $(X_n^{\refe})_{n \ge 0}$ until it spends $\tcorr$
consecutive time steps in some state $S \in {\mathcal S}$.
Then proceed to the dephasing
step. Throughout this step, the simulation clock $\tsim$ is
running and the coarse-grained trajectory is given by
\begin{equation}\label{proj1}
{\hat X}_{\tsim} = \Pi(X_{\tsim}^{\refe}).
\end{equation}}
\noindent
\algorithmbox{ {\bf Dephasing Step:} The simulation clock $\tsim$ is
now stopped and the reference and coarse-grained trajectories do
not evolve. Evolve $N$ independent replicas $\set{X_n^j}_{j=1}^N$
starting at some initial distribution with support in $S$, such
that whenever a replica leaves $S$ it is restarted at the initial
distribution. When a replica spends $\tphase$ consecutive time
steps in $S$, stop it and store its end position.
When all the replicas have stopped, reset
each replica's clock to $n=0$ and proceed to the parallel step.}
\noindent
\algorithmbox{{\bf Parallel Step:} Set $M = 1$ and iterate the
following:
\begin{enumerate}
\item Evolve all $N$ replicas $\set{X_n^j}_{j=1}^N$ from time $n =
(M-1)\tpoll$ to time $n = M\tpoll$. The simulation clock $\tsim$
is not advanced in this step.
\item If none of the replicas leaves $S$ during this time, update
$M = M+1$ and return to 1, above.
Otherwise, let $K$ be the smallest number $j$ such that $X_n^j$
leaves $S$ during this time, let $\tau^K$ be the corresponding
(first) exit time, and set
\begin{equation}
\label{e:exit_update}
\xacc = X_{\tau^K}^K,\quad \tacc = (N-1)(M-1)\tpoll +
(K-1)\tpoll + \tau^K.
\end{equation}
Update the coarse-grained trajectory by
\begin{equation}\label{proj2}
{\hat X}_n = \Pi(S) \quad \hbox{for}\quad n \in [\tsim,
\tsim+ \tacc-1],
\end{equation}
and the simulation clock by $\tsim = \tsim + \tacc$. Set
$X_{\tsim}^{\refe} = \xacc$, and return to the decorrelation
step.
\end{enumerate}
}
\end{algorithm}
The idea of the parallel step is to compute the exit time from $S$ as
the sum of the times spent by the replicas up to the first exit
observed among the replicas. More precisely, if we imagine the
replicas being ordered by their indices ($1$ through $N$), this sum is
over all $N$ replicas up to the last polling time, and then over the
first $K$ replicas in the last interval between polling times, $K$
being the smallest index of the replicas which are the first to exit.
Notice that $M$ and $\tau^K$ are such that $\tau^K \in [(M-1) \tpoll+1,
M \tpoll]$. See Figure~\ref{fig0} for a schematic of the Parallel Step.
We comment that the formula for updating the simulation time in
the parallel step of the original ParRep algorithm is simply $\tacc=N \tau^K$.
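
In code, the bookkeeping of the parallel step can be sketched as follows
(a Python-style sketch; \texttt{step} and \texttt{in\_S} stand for the
user's one-step chain update and the membership test for $S$, and are
assumptions of this sketch, not part of the original algorithm statement):
\begin{verbatim}
def parallel_step(replicas, step, in_S, t_poll):
    # replicas: list of N positions, i.i.d. samples of the QSD in S
    N, M = len(replicas), 0
    while True:
        M += 1
        exits = {}                          # 1-based index -> (exit time, exit point)
        for j in range(N):
            x = replicas[j]
            for n in range((M - 1) * t_poll + 1, M * t_poll + 1):
                x = step(x)
                if not in_S(x):
                    exits[j + 1] = (n, x)   # first exit time of replica j+1
                    break
            replicas[j] = x
        if exits:
            K = min(exits)                  # smallest index among first exits
            tau_K, x_acc = exits[K]
            t_acc = (N - 1) * (M - 1) * t_poll + (K - 1) * t_poll + tau_K
            return x_acc, t_acc
\end{verbatim}
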
A few remarks
are in order (see \cite{Gideon,Tony} for additional comments on
the continuous time algorithm):
\begin{description}
\item[The Decorrelation Step.] In this step, the reference trajectory
is allowed to evolve until it spends a sufficiently long time in a
single state. At the termination of the decorrelation step, the
distribution of the reference trajectory should be, according
to~\eqref{QSD2}, close to that of the QSD (see
Theorem~\ref{theorem1} in Section~\ref{QSDmath}).
The evolution of the reference trajectory is {\em exact} in the
decorrelation step, and so the coarse-grained trajectory is also
exact in the decorrelation step.
\item[The Dephasing Step.] The purpose of the dephasing step is to
generate $N$ i.i.d. samples from the QSD. While we have described a simple
rejection sampling algorithm, there is another technique~\cite{Binder:aa} based on a
branching and interacting particle process sometimes called
the Fleming-Viot particle process~\cite{ferrari07:_quasi_flemin_viot}. See
\cite{Bieniek:2011jf,Bieniek:2012jg,del2004feynman,Grigorescu:2004bs,Meleard:2012vl}
for studies of this process, and~\cite{Binder:aa} for a discussion of
how the Fleming-Viot particle process may be used in ParRep.
In our rejection sampling we have flexibility on where to initialize
the replicas. One could use the position of the reference chain at
the end of the decorrelation step, or any other point in $S$.
\item[The Decorrelation and Dephasing Times.] $\tcorr$ and $\tphase$
must be sufficiently large so that the distributions of both the
reference process and the replicas are as close as possible to the
QSD, without exhausting computational resources. $\tphase$ and
$\tcorr$ play similar roles, and they both depend on the initial
distribution of the processes in $S$.
Choosing good values of these parameters is nontrivial, as they
determine the accuracy of the algorithm. In \cite{Binder:aa},
the Fleming-Viot particle process together with convergence
diagnostics are used to determine these parameters on the fly
in each state. They can also be postulated from some {\it a priori}
knowledge (e.g., barrier height between states), if available.
\item[The Polling Time.]
The purpose of the polling time $\tpoll$ is to allow for periods of
asynchronous computation of the replicas in a distributed computing
environment. For the accelerated time to be correct, it is
essential that all replicas have run for at least as long as replica
$K$. Ensuring this requires resynchronization, which occurs
at the polling time.
If communication amongst the replicas is cheap or there is little
loss of synchronization per time step, one can take $\tpoll
=1$. In this case, $M=\min\{n\,:\, \exists j \in \{1, \ldots, N\} \,s.t.\,
X_n^j \not \in S\}$ is the first exit time observed among the $N$
replicas, $K=\min\{j\,:\, X_M^j \not \in S\}$ (so $M=\tau^K$)
and $\tacc=N(\tau^K-1) +K$.
\item[Efficiency of the Algorithm.] For the algorithm to be
efficient, the states must be truly metastable: within
each state, the typical time to reach the QSD ($\tcorr$ and
$\tphase$) should be small relative to the typical exit time.
If most states are not metastable, then the exit times
will be typically smaller than the decorrelation times,
and the algorithm will rarely proceed to the dephasing
and parallel steps.
The algorithm is consistent even if some or all the states are not
metastable. Indeed, the states can be {\em any} collection of disjoint sets. However, if
these sets are not reasonably defined, it will be difficult to
obtain any gain in efficiency with ParRep. Defining the
states requires some {\em a priori} knowledge about the system.
\end{description}
\begin{figure}
\begin{center}
\includegraphics[width=10cm]{algorithm_diagram.eps}
\end{center}
\caption{A schematic of the parallel step. The horizontal lines
represent the trajectories of replicas $1,\ldots,N$ while the
crosses correspond to exit events. Index $K$ is defined as in
Algorithm~\ref{algorithm1}. Here, $M$ cycles internal to the
parallel step have taken place. The thicker lines correspond to
the portions of the chains contributing to~$\tacc$. }
\label{fig0}
\end{figure}
\section{Mathematical Analysis of Discrete Time ParRep}
\label{parrepmath}
The main result of this section, Proposition~\ref{proposition1}, shows that
the coarse-grained trajectory simulated in ParRep is {\em exact} if
the QSD has been exactly reached in the decorrelation and dephasing
steps; see Equation~\eqref{eq:theorem3} below.
\subsection{Properties of Quasistationary Distributions}
\label{QSDmath}
Before examining ParRep, we give a condition for existence
and uniqueness of the QSD. We also state important properties
of the exit law starting from the QSD. Many of these results
can be found in~\cite{del2004feynman,collet13:_quasi_station_distr}.
We assume the following, which is sufficient
to ensure existence and uniqueness
of the QSD.
\begin{assumption}
\label{assumption1}
Let $S \in {\mathcal S}$ be any state.
\begin{enumerate}
\item For any $x\in S$, ${\mathbb P}^x(X_1 \in S) > 0.$
\item There exists $m\geq 1$ and $\delta \in (0,1)$, such that for
all $x,y\in S$ and all bounded non-negative measurable functions
$f:S\to {\mathbb R}$, $ {\mathbb E}^x\left[f(X_m)\,1_{\{\tau >
m\}}\right] \ge \delta \, {\mathbb E}^y\left[f(X_m)1_{\{\tau >
m\}}\right].$
\end{enumerate}
\end{assumption}
With this condition, the following holds (see \cite[Theorem 1]{delmoral}):
\begin{theorem}\label{theorem1}
Under Assumption~\ref{assumption1}, there exists a unique QSD $\nu$
in $S$. Furthermore, for any probability measure $\mu$ with support
in $S$ and any bounded measurable function $f:S \to {\mathbb R}$,
\begin{equation}
\label{e:qsd_convergence}
\left|{\mathbb E}^{\mu}\left[f(X_n)\,|\,\tau > n\right]
-\int_S f(x)\,\nu(dx) \right|
\le \|f\|_{\infty} \,4 \, \delta^{-1}(1-\delta^2)^{\lfloor n/m \rfloor}.
\end{equation}
\end{theorem}
Theorem~\ref{theorem1} shows that the law of $(X_n)_{n\ge 0}$, conditioned on
not exiting $S$, converges in total variation norm to the QSD $\nu$ as $n\to
\infty$. Thus, at the end of the decorrelation and dephasing steps, if
$\tcorr$ and $\tphase$ are sufficiently large,
then the law of the reference process and replicas will be close to that of the
QSD. Notice that Theorem~\ref{theorem1} provides
an explicit error bound in total variation norm.
Next we state properties of the exit law
starting from the QSD which are essential to our analysis. While these results are well-known
(see, for instance,
\cite{del2004feynman,collet13:_quasi_station_distr}), we give brief
proofs for completeness.
\begin{theorem}\label{theorem2}
If $X_0 \sim \nu$, with $\nu$ the QSD in $S$, then $\tau$ and $X_\tau$
are independent, and $\tau$ is geometrically distributed with
parameter $p = {\mathbb P}^{\nu} (X_1\notin S)$.
\end{theorem}
\begin{proof}
Let $k(x,dy)$ denote the transition kernel of $(X_n)_{n\ge 0}$. We compute
\begin{align*} {\mathbb E}^\nu\left[f(X_{\tau})\,|\,\tau = n\right]
= \frac{{\mathbb E}^\nu\left[f(X_n)\,1_{\{\tau = n\}}\right]}
{{\mathbb E}^\nu\left[1_{\{\tau = n\}}\right]} &= \frac{{\mathbb
E}^\nu\left[1_{\{\tau > n-1\}}\int_{\Omega\setminus S}
f(y)k(X_{n-1},dy)\right]}
{{\mathbb E}^\nu\left[1_{\{\tau > n-1\}}\int_{\Omega\setminus S} k(X_{n-1},dy)\right]}\\
&= \frac{{\mathbb E}^\nu\left[\int_{\Omega\setminus S}
f(y)k(X_{n-1},dy)\,\big|\,\tau > n-1\right]}
{{\mathbb E}^\nu\left[\int_{\Omega\setminus S} k(X_{n-1},dy)\,\big|\, \tau > n-1\right]} \\
&= \frac{\int_S \left(\int_{\Omega\setminus S}
f(y)k(x,dy)\right)\nu(dx)} {\int_S \left(\int_{\Omega\setminus
S} k(x,dy)\right)\nu(dx)} ={\mathbb
E}^\nu\left[f(X_{\tau})\,|\,\tau = 1\right].
\end{align*}
The second to last equality is an application of \eqref{QSD1}. As
${\mathbb E}^\nu\left[f(X_{\tau})\,|\,\tau = 1\right]$ is
independent of $n$, this establishes independence of $\tau$ and $X_\tau$.
Concerning the distribution of $\tau$, we first calculate
\begin{equation*} {\mathbb P}^\nu(\tau > n) ={\mathbb
P}^\nu\left(\tau > n\big|\tau > n-1\right) {\mathbb P}^\nu(\tau
> n-1)
\end{equation*}
and then again use \eqref{QSD1}:
\begin{align*} {\mathbb P}^\nu\left(\tau > n\big|\tau > n-1\right)=
\frac{{\mathbb E}^\nu\left[1_{\{\tau > n\}}\right]} { {\mathbb
P}^\nu(\tau > n-1)} &= \frac{{\mathbb E}^\nu\left[1_{\{\tau >
n-1\}}\int_S k(X_{n-1},dy)\right]} { {\mathbb
P}^\nu(\tau > n-1)}\\
&= {\mathbb E}^\nu\left[\int_S k(X_{n-1},dy)\,\big|\,\tau > n-1\right]\\
&= \int_S \left(\int_S k(x,dy)\right)\,\nu(dx) = {\mathbb
P}^{\nu}(X_1 \in S).
\end{align*}
Thus, ${\mathbb P}^\nu(\tau > n) = {\mathbb P}^\nu(X_1 \in
S)\,{\mathbb P}^\nu(\tau > n-1)$ and by induction, $ {\mathbb
P}^\nu(\tau > n) = \left[{\mathbb P}^\nu(X_1 \in S)\right]^n =
(1-p)^n$.
\end{proof}
\subsection{Analysis of the exit event}\label{exitevent}
We can now state and prove our main result.
We make the following idealizing assumption, which
allows us to focus on the parallel step in
Algorithm~\ref{algorithm1}, neglecting the errors
due to imperfect sampling of the QSD.
\begin{idealization}
\label{assumption2}
Assume that:
\begin{itemize}
\item[(A1)] After spending $\tcorr$ consecutive time steps in $S$,
the process $(X_n)_{n\ge 0}$ is {\em exactly} distributed
according to the QSD~$\nu$ in $S$. In particular, at the end of
the decorrelation step, $X_{\tsim}^{\refe} \sim \nu$.
\item[(A2)] At the end of the dephasing step, all $N$ replicas are
i.i.d. with law {\em exactly} given by $\nu$. \end{itemize}
\end{idealization}
Idealization~\ref{assumption2}
is introduced in view of Theorem~\ref{theorem1},
which ensures that the QSD sampling
error from the dephasing and decorrelation steps
vanishes as $\tcorr$ and $\tphase$ become large.
Of course, for finite $\tcorr$ and $\tphase$,
there is a nonzero error;
this error will indeed propagate in time,
but it can be controlled in terms of these two
parameters. For a detailed analysis in the continuous time case,
see~\cite{Gideon,Tony}. Though the arguments
in~\cite{Gideon,Tony} could be adapted to our time discrete
setting, we do not go in this direction; instead we
focus on showing consistency of the parallel step.
Under Idealization~\ref{assumption2}, we show that ParRep is {\em exact}.
That is, the
trajectory generated by ParRep has the same probability law as the
true coarse-grained chain:
\begin{equation}\label{eq:theorem3}
({\hat X}_n)_{n\ge 0} \sim (\Pi(X_n))_{n \ge 0}.
\end{equation}
The evolution of the ParRep coarse-grained trajectory is {\em exact} in the
decorrelation step. Together with Idealization~\ref{assumption2},
this means~\eqref{eq:theorem3} holds if the
parallel step is consistent (i.e. exact, if all
replicas start at i.i.d. samples of the QSD).
This is the content of the following proposition.
\begin{proposition}\label{proposition1}
Assume that the $N$ replicas at the beginning of the parallel step are
i.i.d. with law {\em exactly} given by the QSD $\nu$ in $S$ (this is
Idealization \ref{assumption2}-(A2)). Then the parallel step of
Algorithm~\ref{algorithm1} is exact:
\begin{equation*}
(\xacc, \tacc) \sim (X_\tau, \tau),
\end{equation*}
where $(\xacc,\tacc)$ is defined as in Algorithm~\ref{algorithm1},
while $(X_\tau, \tau)$ is defined for $(X_n)_{n\ge 0}$
starting at $X_0 \sim \nu$.
\end{proposition}
To prove Proposition~\ref{proposition1}, we need the following lemma:
\begin{lemma}\label{lemma1}
Let $\tau^{1}, \tau^{2},\ldots, \tau^{N}$ be i.i.d. geometric random
variables with parameter $p$: for $t \in {\mathbb N}\cup \{0\}$,
\[ {\mathbb P}(\tau^{j} > t) = (1-p)^{t}.
\]
Define
\begin{align*}
M &= \min\{m \ge 1\,:\, \exists\, j \in
\{1,\ldots,N\}\,\,\,s.t.\,\,\,\tau^{j} \le m \tpoll\},\\
K &= \min\{j \in \{1,\ldots,N\}\,:\, \tau^{j} \le M\tpoll\},\\
\xi&= (N-1)(M-1)\tpoll + (K-1)\tpoll + \tau^{K}.
\end{align*}
Then $\xi$ has the same law as $\tau^{1}$.
\end{lemma}
\begin{proof}
Notice that $\xi$ can be rewritten as
$$\xi= N(M-1)\tpoll + (K-1)\tpoll + [\tau^{K}-(M-1) \tpoll].$$
Indeed, any natural number $z$ can be uniquely expressed as
$z=N(m-1)\tpoll + (k-1) \tpoll + t$ where $m \in \mathbb N \setminus
\{0\}$, $k \in \{1,\ldots,N\}$ and $t \in \{1,2,\ldots,\tpoll\}$. For
such $m$, $k$ and $t$ we compute
\begin{align*}
&{\mathbb P}\left(\xi = N(m-1) \tpoll + (k-1) \tpoll + t\right)
={\mathbb P}\left(M=m,\, K=k, \, \tau^K - (M-1)\tpoll = t\right) \\
&= {\mathbb P}\left(\tau^{1} > m \tpoll,\, \ldots,\,\tau^{k-1}>m
\tpoll,\, \tau^{k} = (m-1) \tpoll + t,\,
\tau^{k+1} > (m-1) \tpoll, \ldots, \tau^{N} > (m-1) \tpoll\right)\\
&= \mathbb P(\tau^{1} > m \tpoll)^{k-1} {\mathbb P}\left(\tau^{k} =
(m-1) \tpoll+t\right)
\left[{\mathbb P}(\tau^{k+1} > (m-1) \tpoll)\right]^{N-k}\\
&= (1-p)^{(k-1)m\tpoll}p(1-p)^{(m-1)\tpoll+t-1}(1-p)^{(N-k)(m-1)\tpoll}\\
&= p(1-p)^{N(m-1)\tpoll + (k-1)\tpoll + t-1}= {\mathbb
P}\left(\tau^{1} = N(m-1)\tpoll + (k-1)\tpoll + t\right).
\end{align*}
\end{proof}
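
Lemma~\ref{lemma1} is also easy to check by direct Monte Carlo simulation
(a sketch in Python, with arbitrary parameters):
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(1)

p, N, t_poll, samples = 0.02, 5, 7, 10**6
tau = rng.geometric(p, size=(samples, N))
M = np.ceil(tau.min(axis=1) / t_poll).astype(int)  # first block with an exit
K = (tau <= (M * t_poll)[:, None]).argmax(axis=1) + 1
tau_K = tau[np.arange(samples), K - 1]
xi = (N - 1) * (M - 1) * t_poll + (K - 1) * t_poll + tau_K

print(xi.mean(), 1 / p)  # both close to 50
freq = np.bincount(xi)[1:6] / samples
print(np.allclose(freq, p * (1 - p) ** np.arange(5), atol=1e-3))  # True
\end{verbatim}
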
We can now proceed to the proof of Proposition~\ref{proposition1}.
\begin{proof}
In light of Theorem~\ref{theorem2}, it suffices to prove:
\begin{itemize}
\item[(i)] $\tacc$ is a geometric random variable with parameter $p=
{\mathbb P}^\nu(X_1 \notin S)$,
\item[(ii)] $\xacc$ and $X_{\tau}$ have the same law: $\xacc\sim X_{\tau}$, and
\item[(iii)] $\tacc$ is independent of $\xacc$,
\end{itemize}
where $(X_n)_{n\ge 0}$ is the process starting
at $X_0 \sim \nu$.
We first prove {\em (i)}. For $j \in \{1,2,\ldots,N\}$, let
$\tau^{j}$ be a random variable representing the first exit time
from $S$ of the $j$th replica in the parallel step of ParRep, if the
replica were allowed to keep evolving indefinitely. By (A2),
$\tau^{1},\ldots,\tau^{N}$ are independent and all have the same
distribution as $\tau$.
Now by Theorem~\ref{theorem2},
$\tau^{1},\ldots,\tau^{N}$ are i.i.d. geometric random variables
with parameter $p$, so by Lemma~\ref{lemma1}, $\tacc$ is also a
geometric random variable with parameter $p$.
Now we turn to {\em (ii)} and {\em (iii)}. Note that $K = k$ if and
only if $\xacc = X_{\tau^k}^k$ and there exists $m \in {\mathbb N}$
such that $\tau^{1} > m\tpoll,\ldots, \tau^{k-1}>m\tpoll$,
$(m-1)\tpoll <\tau^{k} \le m\tpoll$, and $\tau^{k+1}
>(m-1)\tpoll,\ldots,\tau^{N}>(m-1)\tpoll$. From
Theorem~\ref{theorem2} and (A2), $X_{\tau^k}^k$ is independent of
$\tau^1,\ldots,\tau^N$, so $\xacc$ must be independent
of $K$. From this and (A2), it follows that $\xacc \sim
X_{\tau}$. To see that $\xacc$ is independent
of $\tacc$, let $\sigma(K,\tau^K)$ be the sigma algebra generated by
$K$ and $\tau^K$. Knowing the value of $K$ and $\tau^K$ is
enough to deduce the value of $\tacc$; that is, $\tacc$ is
$\sigma(K,\tau^K)$-measurable. Also, by the preceding
analysis and Theorem~\ref{theorem2}, $\xacc = X_{\tau^K}^K$ is
independent of $\sigma(K,\tau^K)$. To conclude that $\tacc$ and
$\xacc$ are independent, we compute for suitable test
functions $f$ and $g$:
\begin{align*}
{\mathbb E}[f(\tacc)g(\xacc)]&= {\mathbb E}[{\mathbb
E}[f(\tacc)g(\xacc)\,|\,\sigma(K,\tau^K)]]\\
&= {\mathbb E}[f(\tacc){\mathbb
E}[g(\xacc)\,|\,\sigma(K,\tau^K)]]= {\mathbb E}[f(\tacc)]\,
{\mathbb E}[g(\xacc)].
\end{align*}
\end{proof}
\section{Numerical Examples}\label{EXAMPLE}
In this section we consider two examples. The first illustrates
numerically the fact that the parallel step in Algorithm~\ref{algorithm1}
is consistent. The second shows typical errors resulting from
a naive application of the original ParRep algorithm to a
time discretization of Langevin dynamics. These are simple
illustrative numerical examples. For a more advanced application, we
refer to the paper~\cite{Binder:aa}, where our Algorithm~\ref{algorithm1}
was used to study the 2D Lennard-Jones cluster of seven atoms.
\subsection{One-dimensional Random Walk}
Consider a random walk on ${\mathbb Z}$ with transition probabilities
$p(i,j)$ defined as follows:
\begin{equation*}
p(i,j) = \begin{cases}
3/4,& i< 0 \hbox{ and } j=i+1,\\
1/4, & i< 0 \hbox{ and } j=i-1,\\
1/3,& i=0 \hbox{ and } |j| \le 1,\\
1/4,& i> 0 \hbox{ and } j=i+1,\\
3/4, & i> 0 \hbox{ and } j=i-1,\\
0, &\hbox{otherwise}.
\end{cases}
\end{equation*}
We use ParRep to simulate the first exit time $\tau$ of the
random walk from $S=[-5,5]$, starting from the QSD $\nu$ in $S$.
At
each point except $0$, steps towards $0$ are more likely than
steps towards the boundaries $-5$ or $5$.
We perform this simulation by using the dephasing and parallel steps
of Algorithm~\ref{algorithm1}; for sufficiently large $\tphase$, the accelerated
time $\tacc$ should have the same law as $\tau$. In this simple example
we can analytically compute the distribution of $\tau$.
We perform $10^6$ independent ParRep simulations to obtain statistics
on the distribution of $\tacc$ and the gain in ``wall clock time,''
defined below. We
find that $\tacc$ and $\tau$ have very close
probability mass functions when $\tphase = 25$; see Figure~\ref{fig1}.
To measure the gain in wall clock efficiency
using ParRep, we introduce the {parallel time} $\tpar$ -- defined,
using the notation of Algorithm~\ref{algorithm1}, by $\tpar = M\tpoll$,
where we recall $M$ is such that $\tau^K \in [(M-1)\tpoll+1,M\tpoll]$. Thus,
the wall clock time of the parallel step is $C_0\,\tpar$, with $C_0$
the computational cost of a single time step of the Markov chain for
one replica. Note in Figure~\ref{fig2}
the significant parallel time speedup in ParRep compared with the
direct sampling time. The speedup is approximately
linear in~$N$.
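
For reproducibility, the QSD and the exit parameter $p$ for this example
can be computed from the substochastic kernel restricted to $S$, as in the
following NumPy sketch (the indexing conventions are ours):
\begin{verbatim}
import numpy as np

states = np.arange(-5, 6)            # S = [-5, 5]
P = np.zeros((11, 11))               # substochastic kernel on S
for a, i in enumerate(states):
    for b, j in enumerate(states):
        if i < 0 and j == i + 1: P[a, b] = 3/4
        if i < 0 and j == i - 1: P[a, b] = 1/4
        if i == 0 and abs(j) <= 1: P[a, b] = 1/3
        if i > 0 and j == i + 1: P[a, b] = 1/4
        if i > 0 and j == i - 1: P[a, b] = 3/4

# QSD = normalized left Perron eigenvector of P; tau started from the QSD
# is then geometric with parameter p = 1 - lambda (Theorem 2).
w, V = np.linalg.eig(P.T)
k = np.argmax(w.real)
nu = np.abs(V[:, k].real); nu /= nu.sum()
print(1 - w[k].real)                 # the exit parameter p
\end{verbatim}
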
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=12cm]{fig1a.pdf}
\end{center}
\caption{Probability mass function of $\tacc$, estimated by $10^6$
ParRep simulations with $N = 10$ replicas and $\tphase = \tcorr =
25$, vs. exact distribution of $\tau$ (smooth curve). }
\label{fig1}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=12cm]{fig2b.pdf}
\end{center}
\caption{Cumulative distribution function of parallel time required
for ParRep sampling with $\tpoll = 10$ and, from top: $N = 100,
25, 10$. The bottom curve is the (analytic) cumulative
distribution function of $\tau$ (corresponding to $N=1$).}
\label{fig2}
\end{figure}
\subsection{Discretized Diffusions}
Consider the overdamped Langevin stochastic process in ${\mathbb R}^d$,
\begin{equation}\label{ovdlang}
d\tilde{X}_t = -\nabla V(\tilde{X}_t) dt + \sqrt{2\beta^{-1}} dW_t.
\end{equation}
The associated Euler-Maruyama discretization is
\begin{equation}
\label{e:em_process}
X_{n+1} = X_n - \nabla V(X_n) \Delta t + \sqrt{ 2\beta^{-1} \Delta t} \xi_n
\end{equation}
where $\xi_n \sim N(0,I)$ are $d$-dimensional i.i.d. random
variables. It is well-known~\cite{Kloeden} that $(X_n)_{n \ge 0}$ is then an
approximation of $(\tilde{X}_{n\Delta t})_{n \ge 0}$.
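
For concreteness, one update of~\eqref{e:em_process} can be written as in
the following sketch (\texttt{grad\_V} is a user-supplied function
returning $\nabla V$):
\begin{verbatim}
import numpy as np

def em_step(x, grad_V, dt, beta, rng):
    # X_{n+1} = X_n - grad V(X_n) dt + sqrt(2 dt / beta) xi_n
    noise = rng.standard_normal(np.shape(x))
    return x - grad_V(x) * dt + np.sqrt(2 * dt / beta) * noise
\end{verbatim}
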
\subsubsection{Existence and uniqueness of the QSD}
We first show that the conditions in Assumption~\ref{assumption1} hold
(see~\cite{delmoral} for a similar example in 1D):
\begin{proposition}\label{prop1}
Assume $S \subset \R^d$ is bounded and $\nabla V$ is bounded on
$S$. Then \eqref{e:em_process} satisfies Assumption
\ref{assumption1}.
\end{proposition}
\begin{proof} First, for any $x \in S$,
\begin{equation}
\begin{split}
\mathbb{P}^x(X_1 \in S) = \E^x\bracket{1_{S}(X_1)}&=
(4\pi \beta^{-1}\Delta t)^{-d/2}\int_{\R^d} 1_S(y) \exp\set{-\frac{\abs{y -x + \nabla V(x) \Delta t}^2}{4\beta^{-1} \Delta t} }dy\\
& \geq |S|(4\pi \beta^{-1}\Delta t)^{-d/2} \min_{y\in S}\set{
\exp\set{-\frac{\abs{y -x + \nabla V(x) \Delta
t}^2}{4\beta^{-1} \Delta t} }}> 0.
\end{split}
\end{equation}
Next, for any $x,y \in S$,
\begin{equation}
\begin{split}
\E^x\bracket{f(X_1)1_{\{\tau>1\}}} &= (4\pi \beta^{-1}\Delta t)^{-d/2}\int_S f(z)
\exp\set{-\frac{\abs{z -x + \nabla V(x) \Delta
t}^2}{4
\beta^{-1} \Delta t} }dz\\
& = (4\pi \beta^{-1}\Delta t)^{-d/2}\int_S f(z) \exp\set{-\frac{\abs{z -y + \nabla
V(y) \Delta t}^2}{4
\beta^{-1} \Delta t} } \\
&\quad \times \exp\set{-\frac{\abs{z -x + \nabla V(x) \Delta
t}^2- \abs{z -y + \nabla V(y) \Delta t}^2}{4
\beta^{-1} \Delta t}}dz\\
&\geq C (4\pi \beta^{-1}\Delta t)^{-d/2}\int_S f(z) \exp\set{-\frac{\abs{z -y +
\nabla V(y) \Delta t}^2}{4
\beta^{-1} \Delta t} } dz\\
&\quad = C(4\pi \beta^{-1}\Delta t)^{-d/2} \E^y\bracket{f(X_1)1_{\{\tau>1\}}}
\end{split}
\end{equation}
where
\begin{equation*}
C = \min_{x,y,z\in S} \exp\set{-\frac{\abs{z -x + \nabla V(x)
\Delta t}^2- \abs{z -y + \nabla V(y)
\Delta t}^2}{4
\beta^{-1} \Delta t}}.
\end{equation*}
Since $S$ is bounded and terms in the brackets are bounded, $C>0$.
In Assumption~\ref{assumption1} we can then take $m=1$ and $\delta =
C(4\pi \beta^{-1}\Delta t)^{-d/2}$.
\end{proof}
Theorem~\ref{theorem1} ensures that $(X_n)_{n\ge 0}$ converges
to a unique QSD in $S$, with a precise error estimate in terms of
the parameters $m$ and $\delta$ obtained in the proof
of Proposition~\ref{prop1}. This error estimate is certainly not
sharp; better estimates can be obtained by studying the spectral
properties of the Markov kernel. We refer to~\cite{Tony} for such convergence results in the continuous time case~\eqref{ovdlang}.
\subsubsection{Numerical example}\label{sec:diff1d}
Here we consider the 1D process
\begin{equation}
\label{e:per1d}
d\tilde{X}_t = - 2\pi \sin (\pi \tilde{X}_t) dt + \sqrt{2}dW_t,
\end{equation}
discretized with $\Delta t = 10^{-2}$. We compute
the first exit time from $S = (-1,1)$, starting at ${\tilde X}_0 = 1/2$.
We use Algorithm~\ref{algorithm1} with
$\tcorr = \tphase = 100$, corresponding to the
physical time scale $\tcorr \Delta t = \tphase \Delta t = 1$,
and $N=1000$ replicas.
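
A serial (unaccelerated) simulation of this exit time, reusing the
\texttt{em\_step} sketch above, reads as follows; here
$V(x)=-2\cos(\pi x)$, so that $-V'(x)=-2\pi\sin(\pi x)$ is the drift
in~\eqref{e:per1d}:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(2)

dt, beta = 1e-2, 1.0
grad_V = lambda x: 2 * np.pi * np.sin(np.pi * x)

def serial_exit_time(x0=0.5):
    x, n = x0, 0
    while -1.0 < x < 1.0:                  # S = (-1, 1)
        x = em_step(x, grad_V, dt, beta, rng)
        n += 1
    return n * dt                          # physical exit time

print(np.mean([serial_exit_time() for _ in range(1000)]))
\end{verbatim}
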
Consider a direct implementation of the continuous time ParRep
algorithm into the time discretized
process. In that algorithm, the accelerated time is (in units of
physical time instead of time steps)
\begin{equation}
\label{e:tacc_naive}
\tacc^{\rm continuous} = N \tau^K \Delta t,
\end{equation}
with $\tau^K$ the same as in Algorithm~\ref{algorithm1} above. As
$\tacc^{\rm continuous}$ is by construction a multiple of $N \Delta t = 10$,
a staircasing effect can be seen in the exit time distribution; see
Figure~\ref{f:per1d}. This staircasing worsens as the number of
replicas increases. In our Algorithm~\ref{algorithm1}, we use the accelerated
time formula (again in units of physical time)
\begin{equation*}
\tacc^{\rm corrected} = \tacc \Delta t.
\end{equation*}
We find excellent agreement between
the serial data -- that is, the data obtained from direct numerical
simulation -- and the data obtained from Algorithm~\ref{algorithm1}.
See Figure~\ref{f:per1d}. (The agreement is perfect in the
decorrelation step; see Figure~\ref{f:per1d_zoom}.) We comment further on this in the next section.
\begin{figure}
\begin{center}
\includegraphics[width=10cm]{clopper_log.pdf}
\end{center}
\caption{Exit time distributions for the Euler-Maruyama
discretization of \eqref{e:per1d}. Here $T$ represents
the first exit time from $S = (-1,1)$, starting at $1/2$.
There is excellent agreement between the serial, unaccelerated simulation
data ($T = \tau^\nu \Delta t$) and our ParRep algorithm ($T =
\tacc^{\rm corrected}$), while the original ParRep
formula ($T = \tacc^{\rm continuous}$)
deviates significantly. Dotted lines represent
95\% Clopper-Pearson confidence intervals obtained from
$10^6$ independent simulations; confidence interval widths increase in $t$ as
fewer samples are available.}
\label{f:per1d}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=10cm]{clopper_zoom.pdf}
\end{center}
\caption{A zoomed-in version of Figure~\ref{f:per1d},
highlighting the decorrelation step (recall $\tcorr \Delta t = 1$).
Serial
simulation, our ParRep algorithm, and the original ParRep
algorithm all produce identical data.
This comes from the fact that
serial and ParRep simulations are identical in law
during the decorrelation step. }
\label{f:per1d_zoom}
\end{figure}
\subsubsection{Discussion}
In light of the discretization example, one may ask what kind of
errors were introduced in previous numerical studies which used ParRep
with \eqref{e:tacc_naive}. Taking $\tpoll =1$ for simplicity, we
calculate
\begin{equation*}
\E\bracket{\abs{\tacc^{\rm corrected} - \tacc^{\rm continuous}}} = \E\bracket{\abs{ (N (\tau^K-1) + K)\Delta t - N
\tau^{K}\Delta t}}= \Delta t\,\E\bracket{\abs{N-K}}= \Delta t\sum_{k=1}^N (N-k) \PP(K=k).
\end{equation*}
Using calculations analogous to those used to study $\tacc$, it can be
shown that
\[
\PP(K=k) = \frac{(1-p)^{k-1}p}{1-(1-p)^N}.
\]
Therefore the error in the number of time steps per parallel step is
\begin{equation}
\text{Absolute Error} = \frac{N\Delta t}{1-(1-p)^N} - \frac{\Delta t}{p},\quad \text{Relative Error} = \frac{pN}{1-(1-p)^N} -1.
\end{equation}
Consider the relative error, writing it as
\begin{equation*}
pN \bracket{\frac{1}{1-r^N}- \frac{1}{(1-r)N}}, \text{ where } r=1-p.
\end{equation*}
We claim the quantity in the brackets,
\begin{equation}
\label{e:relerr_prefactor}
f(r,N):= {\frac{1}{1-r^N}- \frac{1}{(1-r)N}} = \frac{r^N -N r + N-1}{Nr^{N+1} -
N r^N - Nr +N},
\end{equation}
is bounded from above by one. Indeed, for any $0<r<1$, we immediately
see that $f(r,N)$ is zero at $N=1$ and one as $N\to \infty$. Let us
reason by contradiction and assume that $\sup_{r \in (0,1), N>0}
f(r,N) > 1$. Since $f$ is continuous in $N>0$ and $0<r<1$, there is
then a point $(r,N)$ such that $f(r,N)=1$; thus
\[
g_N(r) =0, \text{ where } g_N(r):= N r^{N+1} - (N+1)r^N +1.
\]
Note that $g_N(0) = 1$ and $g_N(1) = 0$ for all values of $N$.
Computing the derivative with respect to $r$, we observe
\[
g_N'(r) = -N(N+1)(1-r)r^{N-1}<0.
\]
Therefore, $g_N(r)$ is decreasing, from one at $r=0$ to zero at $r=1$,
in the interval $(0,1)$. Hence, $g_N(r) =0$ has no
solution, contradiction. We conclude that~\eqref{e:relerr_prefactor} is bounded from above by one.
Consequently, we are assured
\begin{equation}
\text{Absolute Error} \leq N \Delta t, \quad \text{Relative Error} \leq pN.
\end{equation}
Thus, so long as $pN \ll 1$, the relative error using the
accelerated time $\tacc^{\rm continuous}$ will be modest, especially for very metastable
states where $p\ll 1$. If also $N \Delta t \ll 1$, then the absolute
error will be small.
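
These bounds are straightforward to check numerically (a quick sketch):
\begin{verbatim}
import numpy as np

for p in (0.5, 0.1, 1e-3):
    for N in (2, 10, 100, 1000):
        rel = p * N / (1 - (1 - p) ** N) - 1
        assert 0 <= rel <= p * N    # the relative error bound above
        print(p, N, rel)
\end{verbatim}
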
The
above calculations are generic. Though our discretized diffusion
example in Section~\ref{sec:diff1d} is a simple 1D problem, the errors
displayed in Figure
\ref{f:per1d} are expected whenever the continuous time ParRep
rule~\eqref{e:tacc_naive} is used for a time discretized process.
Though this error (as we showed above) will be small provided $Np \ll 1$ and
$N\Delta t \ll 1$, our Algorithm~\ref{algorithm1}
has the advantage of being consistent for any $\Delta t$,
including relatively large values of $N\Delta t$.
\section*{Acknowledgments}
We would like to thank the anonymous referees for their many
constructive remarks.
The work of {\sc D. Aristoff} and {\sc G. Simpson} was supported in
part by DOE Award DE-SC0002085. {\sc G. Simpson} was also supported
by the NSF PIRE Grant OISE-0967140. The work of {\sc T. Leli\`evre} is
supported by the European Research Council under the European Union's
Seventh Framework Programme (FP/2007-2013) / ERC Grant Agreement
number 614492.
\bibliographystyle{plain}
TITLE: Does the Kimberling sequence map numbers "arbitrarily far away"?
QUESTION [1 upvotes]: The Kimberling sequence is a recursively defined "shuffling sequence" (pictorial description here). Let $k:\mathbb{N}\to \mathbb{N}$ be the Kimberling sequence. Does $k$ map members of $\mathbb{N}$ arbitrarily far away, or more formally: given $N\in\mathbb{N}$ is there $m\in\mathbb{N}$ such that $|k(m)-m|>N$?
REPLY [3 votes]: The answer is yes. Indeed, as noted at A007063,
$$k(\theta_j)=3\theta_j-(j+1),
$$
where
$$\theta_j:=\sum_{i=0}^{j-1}2^{\lfloor i/3\rfloor}\ge2^{\lfloor(j-1)/3\rfloor}.
$$
So,
$$k(\theta_j)-\theta_j=2\theta_j-(j+1)\underset{j\to\infty}\longrightarrow\infty,
$$
as desired.
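For a quick numerical illustration, relying only on the formula quoted above from A007063 (a sketch):

```python
# theta(j) = sum_{i=0}^{j-1} 2^(i//3);  k(theta_j) - theta_j = 2*theta_j - (j+1)
def theta(j):
    return sum(2 ** (i // 3) for i in range(j))

for j in range(1, 16):
    print(j, theta(j), 2 * theta(j) - (j + 1))  # the displacement grows without bound
```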
TITLE: finding simple probabilities of a dice throw
QUESTION [0 upvotes]: I am not sure how to solve this question. Would appreciate your help:
$4$ normal dice are thrown:
a) If we obtain at least $2$ even results, what are the chances that among the $4$ results there is at least one result equal to 6?
b) If $4$ even (not odd) results were obtained, what is the probability of all of them being larger than 3?
My attempt:
For one die: $s=\{1,2,3,4,5,6\}, e=\{2,4,6\}$. Since we're talking about $4$ dice, then $|s_1|=|s|^4=6^4=204$ and $|e_1|=|e|^4=3^4=81$
so for a), we'll define $|e_2|=|e|^2=9$ (we need at least $2$ even results). The odds of obtaining at least one six from $4$ dice is $\frac{1}{24}$, so to calculate it do I need to sum $\frac{9}{204}+\frac{1}{24}$ as in the addition of two events? Or to divide?
b) odds of obtaining $4$ even results are $\frac{|e_1|}{|s_1|}=\frac{27}{68}$, and to assure that all of the results are larger than 3 $e_3=\{4,5,6\}$ and we need $|e_3|^4=81$ (again) divided by $|s^4|$.
I'm doing something wrong here, can someone please show me how to solve it correctly? I would also appreciate learning the correct notation and writing, so I can learn to make it correct and more aesthetic.
Thank you very much.
REPLY [1 votes]: Four normal dice are thrown. If we obtain at least two even results, what are the chances that among the four results, there is at least one $6$?
Our sample space consists of those cases in which $2$, $3$, or $4$ of the results are even.
Each die has probability $1/2$ of showing an even result. The probability that exactly $k$ of the four dice will show an even result is given by the Binomial distribution.
$$\Pr(X = k) = \binom{4}{k}\left(\frac{1}{2}\right)^4$$
Clearly, we cannot obtain a 6 on a die that shows an odd result. If a die shows an even result, the probability of obtaining a 6 is $1/3$ since a 2, 4, or 6 is equally likely to appear. Therefore, the probability of not obtaining a 6 on a die that shows an even result is $1 - 1/3 = 2/3$. The probability of not obtaining a 6 on $k$ dice that each show an even result is $(2/3)^k$, which means the probability of obtaining at least one 6 on $k$ dice that each show an even result is
$$1 - \left(\frac{2}{3}\right)^k$$
Let $S$ be the event that at least one six appears. Let $X = k$ be the event that exactly $k$ of the dice show an even result. Then the probability that at least one six appears if at least two dice show an even result is
\begin{align*}
\Pr(S \mid X \geq 2) & = \frac{\Pr(S \cap X \geq 2)}{\Pr(X \geq 2)}\\
& = \frac{\Pr(S \cap X = 2) + \Pr(S \cap X = 3) + \Pr(S \cap X = 4)}{\Pr(X = 2) + \Pr(X = 3) + \Pr(X = 4)}\\
& = \frac{\Pr(S \mid X = 2)\Pr(X = 2) + \Pr(S \mid X = 3)\Pr(X = 3) + \Pr(S \mid X = 4)\Pr(X = 4)}{\binom{4}{2}\left(\frac{1}{2}\right)^4 + \binom{4}{3}\left(\frac{1}{2}\right)^4 + \binom{4}{4}\left(\frac{1}{2}\right)^4}\\
& = \frac{\left[1 - \left(\frac{2}{3}\right)^2\right]\binom{4}{2}\left(\frac{1}{2}\right)^4 + \left[1 - \left(\frac{2}{3}\right)^3\right]\binom{4}{3}\left(\frac{1}{2}\right)^4 + \left[1 - \left(\frac{2}{3}\right)^4\right]\binom{4}{4}\left(\frac{1}{2}\right)^4}{\binom{4}{2}\left(\frac{1}{2}\right)^4 + \binom{4}{3}\left(\frac{1}{2}\right)^4 + \binom{4}{4}\left(\frac{1}{2}\right)^4}
\end{align*}
If four even results are obtained, what is the probability of them being greater than 3?
Since each die shows an even result, the possible results on a given die are 2, 4, or 6. Of these three equally likely events, two are greater than 3. Hence, the probability that a die showing an even result exhibits a result greater than 3 is $2/3$. The probability that all four dice will show a result greater than 3, given that each die shows an even result, is
$$\left(\frac{2}{3}\right)^4$$
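A quick Monte Carlo check of both answers (a sketch; the closed-form value of the expression in part (a) works out to $563/891 \approx 0.632$):

```python
import random
random.seed(0)

hits_a = total_a = hits_b = total_b = 0
for _ in range(10**6):
    dice = [random.randint(1, 6) for _ in range(4)]
    evens = sum(d % 2 == 0 for d in dice)
    if evens >= 2:                          # part (a): at least two even results
        total_a += 1
        hits_a += any(d == 6 for d in dice)
    if evens == 4:                          # part (b): all four even
        total_b += 1
        hits_b += all(d > 3 for d in dice)

print(hits_a / total_a)   # ~ 0.632 = 563/891
print(hits_b / total_b)   # ~ 0.198 = (2/3)^4
```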
TITLE: Could Lipschitz-type "non-scalar" ODEs and PDEs admit finite-duration solutions?
QUESTION [7 upvotes]: Could Lipschitz-type "non-scalar" ODEs and PDEs admit finite-duration solutions?
Intro
Recently I found, in the papers by Vardia T. Haimo (1985), Finite Time Controllers and Finite Time Differential Equations, that there exist solutions of finite duration to differential equations, defined here as:
Definition 1 - Solutions of finite duration: the solution $y(t)$ becomes exactly zero at a finite time $T<\infty$ by its own dynamics and stays there forever after $(t\geq T\Rightarrow y(t)=0)$. So they are different from just a piecewise section made by multiplying an arbitrary function by a rectangular function: they must solve the differential equation on the whole domain. (Here I just pick the shorter name that looks most natural to me among the several I found: "finite-time", "finite-time-convergence", "finite-duration", "time-limited", "compact-supported time", "finite ending time", "singular solutions", "finite extinction time", among others.)
The mentioned papers refer only to scalar autonomous ODEs of 1st and 2nd order, assuming that $T=0$ and that the right-hand side of the ODE is at least of class $C^1(\mathbb{R}\setminus\{0\})$, and with this it is mentioned that:
"One notices immediately that finite time differential equations cannot be Lipschitz at the origin. As all solutions reach zero in finite time, there is non-uniqueness of solutions through zero in backwards time. This, of course, violates the uniqueness condition for solutions of Lipschitz differential equations."
So, NO Lipschitz 1st- or 2nd-order scalar autonomous ODE can support solutions of finite duration, so classic linear models are ruled out. Also ruled out are classical solutions obtained through power series that are analytic on the whole domain: since the solution is identically zero on a compact set of non-zero measure, the only analytic function that could support this is the zero function, by the Identity Theorem.
This rules out every example of a solution I saw in engineering, but fortunately, through these questions here, here, and here, I have been able, with the help of other users, to find some examples, two of which I would like to share (here, $\theta(t)$ is the Heaviside step function):
1. This one is similar to the examples in the mentioned papers (where unfortunately no exact solution is given): $$ \dot{x}=-\text{sgn}(x)\sqrt{|x|},\,x(0)=1\quad\rightarrow\quad x(t)=\frac{1}{4}(2-t)^2\theta(2-t) \equiv \frac{1}{4}\left(1-\frac{t}{2}+\left|1-\frac{t}{2}\right|\right)^2$$
2. Remembering now that uniqueness of solutions is not granted, focusing on positive real-valued solutions for a positive real-valued ending time $0<T<\infty$, and assuming a parameter $n>1$, the following family of solutions can be obtained: $$\dot{x} = -\sqrt[n]{x},\,x(0)>0,\,T>0\quad\rightarrow\quad x(t) = \left[\frac{n-1}{n}\left(T-t\right)\right]^{\frac{n}{n-1}}\theta(T-t) \equiv x(0) \left[ \frac{1}{2} \left( 1-\frac{t}{T} + \left| 1-\frac{t}{T} \right| \right)\right]^{\frac{n}{n-1}}$$
As can be seen, these are nothing exotic: just polynomials piecewise-stitched with the zero function, but in such a way that the whole composition solves the differential equation on the whole domain.
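As a sanity check, the second family can be verified symbolically with SymPy (a sketch; I substitute $s = T-t > 0$ so the solution is on its positive branch, and test a few concrete values of $n$):

```python
import sympy as sp

s = sp.symbols('s', positive=True)        # s = T - t, so t < T and x > 0

for n in (sp.Integer(2), sp.Integer(3), sp.Rational(3, 2)):
    x = ((n - 1) / n * s) ** (n / (n - 1))
    # dx/dt = -dx/ds, so the ODE x' = -x^(1/n) becomes dx/ds = x^(1/n)
    print(n, sp.simplify(sp.diff(x, s) - x ** (1 / n)))  # 0 for each n
```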
Main question
Since no 1st- or 2nd-order scalar autonomous Lipschitz ODE can have a solution with a finite ending time, while non-Lipschitz alternatives can (through solutions that are not necessarily unachievable in closed form), I would like to know whether Lipschitz non-scalar ODEs and PDEs can have these kinds of finite-duration solutions, or whether, conversely, they too are required to have at least one non-Lipschitz point in time (maybe the extra dimensions make the "non-Lipschitz characteristic" unnecessary; that is what I want to figure out).
Here I emphasize the time variable, since I already know that PDEs can have compact-supported solutions in the space variables, as is explained in this answer to another question I asked... before, with scalar ODEs, there was no ambiguity, but for PDEs I now explicitly focus the question on the time variable (which explains "finite-duration", and not just compact-supported as in the mentioned question).
There is also the issue of treating the time variable in a spacetime scheme $\mathbb{R}^n$ versus a classic parametrization scheme $\mathbb{R}^{n+1}$, which is mentioned in the cited answer (here I get a bit lost, so I won't impose any restriction, to see what you can tell me about it).
What I believe
After finding this video, where it is explained that some PDEs can show finite-time blow-up behavior, I started to wonder whether some of these singularities could arise because their reciprocal, or something in their denominator, behaves as a finite-duration solution. I asked that question here, finding that the family of solutions of point (2) indeed has reciprocals that satisfy a differential equation behaving as having finite-time blow-ups, but also, in the comments, the user @CalvinKhor gave an example of a system that shows finite-time blow-up behavior whose reciprocal does not lead to a differential equation that admits finite-duration solutions.
So far, I have asked a question here about the Euler's Disk toy, which I think is an example of a system having a finite-duration solution, since it audibly ends in finite time (and some angles show finite-time blow-up behavior), but its motion equations, at least in this paper (eqs. 23, 24, and 25), are framed as a system of nonlinear ODEs (which I believe are Lipschitz, but I am not 100% sure), so I believe these kinds of systems exist. But so far I have not found any example for PDEs.
Added later - some attempts
So far, everything I have tried has ended up being something I don't really know is well defined (I asked a related question here), but the solutions surely reach zero in finite time, so their phase space should not be fully covered. I hope you can comment on whether a Lipschitz differential equation is required to cover its whole phase space or not, because I am not sure whether having a finite-duration solution would go against that.
2nd Added Later
In the question mentioned here, through the comments and answers I became confident that the differential equation requires at least one singular point in time where the equation is non-Lipschitz, so that uniqueness of solutions can be broken; this is required for the solution to become zero, after a finite ending time, on a domain of points of non-zero measure, which acts like stitching a piecewise section of the trivial zero function onto the previous values of the solution.
As an example, generalizing something I used in the mentioned question, I found that if I pick a function $P(x-t)$ among the solutions of the wave equation $P_{xx}=P_{tt}$ and form the function:
$$U(x,t)=P(x-t)\cdot\left[\frac{(n-1)}{n}(T-t)\right]^{\frac{n}{(n-1)}}$$
then it will solve the PDE:
$$U_{xx}-U_{tt}-\frac{2n}{(n-1)}\frac{U_{t}}{(T-t)}-\frac{n(2n-1)}{(n-1)^2}\frac{U}{(T-t)^2}=0$$
Then, since (i) the trivial solution $U(x,t)=0$ solves the differential equation, and (ii) the equation is non-Lipschitz at the point $t=T$, fulfilling $U(x,T)=U_t(x,T)=0$ for $n>1$, the finite-duration solution:
$$U^*(x,t)=P(x-t)\cdot\left[\frac{(n-1)}{n}(T-t)\right]^{\frac{n}{(n-1)}}\theta(T-t)$$
will solve the same differential equation as $U(x,t)$ on the whole time domain $t \in \mathbb{R}$; also, note that uniqueness no longer holds, since both $U$ and $U^*$ are valid solutions. At least I verified this for $n=2$ and $P(x-t)=\exp\left(1-\frac{1}{(1-(x-t)^2)}\right)$, and also for $U^*(x,t)=\left(1-(x-t)^2+|1-(x-t)^2|\right)^4\cdot\left(1-t+|1-t|\right)^2$ (plot here).
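As a sanity check, here is a minimal sympy sketch (my own, assuming sympy is available; it verifies the $n=2$ case symbolically, keeping $P$ as an arbitrary function):

    import sympy as sp

    x, t, T = sp.symbols('x t T', real=True)
    P = sp.Function('P')          # arbitrary profile P(x - t)
    n = 2                         # any n > 1 works; n = 2 keeps the exponent polynomial

    U = P(x - t) * (sp.Rational(n - 1, n) * (T - t)) ** sp.Rational(n, n - 1)

    pde = (sp.diff(U, x, 2) - sp.diff(U, t, 2)
           - sp.Rational(2 * n, n - 1) * sp.diff(U, t) / (T - t)
           - sp.Rational(n * (2 * n - 1), (n - 1) ** 2) * U / (T - t) ** 2)

    print(sp.simplify(pde))       # prints 0: U solves the PDE for t < T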
But against the intuition developed in the mentioned question, in this other question about the physics of finite-duration phenomena, a user named @ConnorBehan said that there exist PDEs, named fast diffusion equations, where finite extinction times are achieved. Looking up this new term, I indeed found a bunch of PDEs for which finite ending times are claimed, like $\frac{d^2}{dx^2}\left(\sqrt{u}\right)=u_t$, but what is important to notice is that every paper claims that uniqueness of solutions holds at the same time a finite extinction time is achieved, which contradicts what was done so far in the other question.
So now I am totally lost again, and the papers are too abstract for my current knowledge, but here is what I believe could be happening:
Are the equations non-Lipschitz, with the claims of uniqueness made only within the domain where the solutions have not yet reached the extinction time?
Do uniqueness and Lipschitz-ness somehow hold for PDEs at the same time that they can become zero forever after a finite time? (somehow against the Picard-Lindelöf theorem, where Lipschitz continuity is what guarantees uniqueness)
Or are they just taking a piecewise section of a solution that becomes zero at a point in time and assuming it is zero outside a domain they don't care about, thus not fulfilling the definition of a finite-duration solution I am using here, because their solution does not solve the PDE after the extinction time? (maybe that is useful in their specific contexts)
Or maybe the situation is different and I am not getting the full idea of the papers.
Here I will list some of the papers I found:
"Extinction time for some nonlinear heat equations" - Louis A. Assalé, Théodore K. Boni, Diabate Nabongo
"Stability of the separable solution for fast diffusion" - James G. Berryman & Charles J. Holland
"Degenerate parabolic equations with general nonlinearities" - Robert Kersner
"Nonlinear Heat Conduction with Absorption: Space Localization and Extinction in Finite Time" - Robert Kersner
"Fast diffusion flow on manifolds of nonpositive curvature" - M. Bonforte, G. Grillo, J. Vázquez
"Finite extinction time for a class of non-linear parabolic equations" - Gregorio Diaz & Ildefonso Diaz
"Finite Time Extinction by Nonlinear Damping for the Schrödinger Equation" - Rémi Carles, Clément Gallo
"Classification of extinction profiles for a one-dimensional diffusive Hamilton–Jacobi equation with critical absorption" - Razvan Gabriel Iagar, Philippe Laurençot
Since these are papers that work within frameworks similar to relativity or quantum mechanics, if they really are finite-duration solutions in the context of this question, they would also answer these other questions I asked here and here, but the issue of being Lipschitz or not while holding uniqueness of solutions makes me think maybe they do not fulfill the definition at the beginning of this question.
I hope you can help me figure out what is going on in these examples.
REPLY [1 votes]: Let's recall precisely the Picard-Lindelöf theorem, where I pick up the definition used in Wikipedia:
Let $D\subseteq \mathbb {R} \times \mathbb {R} ^{n}$ be a closed rectangle with $(t_{0},y_{0})\in D$. Let $f:D\to \mathbb {R} ^{n}$ be a function that is continuous in $t$ and Lipschitz continuous in $y$. Then, there exists some $\varepsilon \gt 0$ such that the initial value problem - IVP
$$\begin{cases}y^\prime(t)=f(t,y(t))\\
y(t_0)=y_0\end{cases}$$
has a unique solution $y(t)$ on the interval $[t_{0}-\varepsilon ,t_{0}+\varepsilon ]$
So, the Picard-Lindelöf theorem deals with IVPs, and not with "Lipschitz-kind" non-scalar ODEs.
Let's consider the following IVP example
$$\begin{cases}y^\prime(t)=2 \sqrt{\lvert y(t) \rvert}\\
y(0)=y_0\end{cases}$$
If you take $y_0 \neq 0$, the map $(t,y) \mapsto 2 \sqrt{\lvert y \rvert}$ is locally Lipschitz around $y_0$ in $y$ and the IVP has a unique solution that indeed has a finite extinction time.
This example is similar to "Extinction time for some nonlinear heat equations" - Louis A. Assalé, Théodore K. Boni, Diabate Nabongo, paragraph 2, with $p=1/2$.
The apparent contradiction is resolved by noting:
That at times $t_1$ where $y(t_1)=0$, the map $(t,y) \mapsto 2 \sqrt{\lvert y \rvert}$ is not locally Lipschitz around that point.
And indeed, at that point, the IVP has several solutions (infinitely many, in fact), namely the maps $$y_a(t) = \begin{cases} 0 & t < a \\
(t-a)^2 & t \ge a
\end{cases}
$$ where $a \gt t_1$, together with the identically vanishing map.
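(A quick symbolic check of the rising branch, assuming sympy: parametrize $t=a+s$ with $s>0$, so that $\sqrt{(t-a)^2}=s$ there.)

    import sympy as sp

    a, s = sp.symbols('a s', positive=True)            # t = a + s with s > 0
    y = s**2                                           # y_a(t) = (t - a)^2 on t >= a
    print(sp.simplify(sp.diff(y, s) - 2*sp.sqrt(y)))   # 0: the branch solves y' = 2*sqrt(|y|)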
If the map $(t,y) \mapsto f(t,y)$ is locally Lipschitz in $y$ at all points of its domain, then finite extinction time solutions won't appear. | {"set_name": "stack_exchange", "score": 7, "question_id": 4433223}
\begin{document}
\maketitle
\begin{abstract}
We consider several classes of intersection graphs of line segments in the plane and prove new equality and separation results between those classes.
In particular, we show that:
\begin{itemize}
\item intersection graphs of grounded segments and intersection graphs of downward rays form the same graph class,
\item not every intersection graph of rays is an intersection graph of downward rays, and
\item not every intersection graph of rays is an outer segment graph.
\end{itemize}
The first result answers an open problem posed by Cabello and Jej\v{c}i\v{c}. The third result confirms a conjecture by Cabello.
We thereby completely elucidate the remaining open questions on the containment relations between these classes of segment graphs.
We further characterize the complexity of the recognition problems for the classes of outer segment, grounded segment, and ray intersection graphs.
We prove that these recognition problems are complete for the existential theory of the reals.
This holds even if a 1-string realization is given as additional input.
\end{abstract}
\sloppy
\input{introduction}
\input{preliminaries}
\input{circle}
\input{stretchability}
\input{RaysandSegments}
\section*{Acknowledgments}
This work was initiated during the Order \& Geometry Workshop organized by Piotr Micek and the second author at the Gułtowy Palace near Poznań, Poland, on September 14-17, 2016. We thank the organizers and attendees, who contributed to an excellent work atmosphere. Some of the problems tackled in this paper were brought to our attention during the workshop by Michał Lasoń. The first author also thanks Sergio Cabello for insightful discussions on these topics.
\bibliographystyle{abbrv}
\bibliography{main}
\end{document} | {"config": "arxiv", "file": "1612.03638/main.tex"} |
TITLE: Nonpertubative renormalization in quantum field theory versus statistical physics
QUESTION [5 upvotes]: I am trying to wrap my head around how renormalization works for quantum field theory. Most treatments cover perturbative renormalization theory, and I am fine with this approach. But it is not the most general framework and is not intuitively related to the Wilsonian approach. I am also a bit lost with respect to the meaning of the associated notions in the QFT context, like the anomalous dimension. So, in essence, what I am asking is: what is the intuitive picture relating the coarse-graining idea of statistical physics to the framework of quantum field theory? Also, how could one picture/understand the notions of proliferation, coupling constants, beta functions, fixed points, anomalous dimension and conformal invariance with respect to the way these ideas are interpreted in statistical physics? I find it easy to visualize the entire procedure in the latter context but not in the former. A complete analogy might not be possible, but I am only asking to what extent it can be achieved.
REPLY [2 votes]: As far as I understand it, a basic renormalization step consisting of
Coarse graining (average or integrate out high energy degrees of
freedom)
recalculate the appropriate quantity defining the
effective theory which describes the system at a certain scale
rescale different quantities as needed
works in the same way for both statistical mechanics and quantum field theoretic systems. The difference between them I have seen so far is that for statistical mechanics one uses the partition function or the Hamiltonian to describe the effective theory, whereas in QFT the action or the Lagrangian is used. In both cases, considering an infinitesimal renormalization transformation, renormalization group equations (or $\beta$ functions for specific coupling constants) can be derived, and investigations of the renormalization group flow to find and characterize fixed points etc. are done in a very similar spirit.
Concerning nonperturbative renormalization, a method which can do this (I don't know what other nonperturbative renormalization methods exist, if any) is the Exact Renormalization Group (ERG), sometimes also called the functional renormalization group. A nice introduction to and overview of the ERG is given in this tutorial by Oliver J. Rosten, which describes things (after reviewing block spin models in statistical mechanics) from a QFT point of view.
You ask about quite a number of different specific issues; as far as I have seen, many things like beta functions, different fixed points, anomalous dimensions, etc. are covered in Rosten's text. | {"set_name": "stack_exchange", "score": 5, "question_id": 52175}
TITLE: Why is the identity component of a matrix group a subgroup?
QUESTION [3 upvotes]: I'm working through Stillwell's "Naive Lie Theory". I'm supposed to show that the identity component of a matrix group is a subgroup in two steps. I'm allowed to assume that "matrix multiplication is a continuous operation". First question- what does this mean? Does this mean multiplying matrices by a fixed matrix is continuous, or multiplying two matrices which vary?
In the first step, I'm supposed to prove that if there are continuous paths in the group $G$ from 1 to $A \in G$ and to $B \in G$ then there is a path in G from $A$ to $AB$.
I did this by assuming that matrix multiplication by a fixed matrix was continuous. I presume that this will get us closure under group operation by concatenating the path from 1 to $A$ with the path from $A$ to $AB$.
Second, and where I am stuck, is in proving that if there is a continuous path in $G$ from 1 to $A$ there is also a continuous path from $A^{-1}$ to 1. If I knew that the map that sends $A$ to $A^{-1}$ was continuous, I think I would be done, but I don't know how to get this easily.
REPLY [3 votes]: Multiplying two matrices. That is, $\times : G \times G \to G$ is continuous.
You don't need to know that inversion is continuous (although it is, by Cramer's rule). You just need to multiply the path by $A^{-1}$: if $\gamma(t)$ is a continuous path in $G$ from $1$ to $A$, then $t \mapsto A^{-1}\gamma(t)$ is a continuous path in $G$ from $A^{-1}$ to $1$, since multiplication by a fixed matrix is continuous.
REPLY [0 votes]: Hint. The path component and the connected component of this group containing the identity are one and the same. | {"set_name": "stack_exchange", "score": 3, "question_id": 38373}
TITLE: Question on geometry triangle and incenter
QUESTION [1 upvotes]: Let $ABC$ be a triangle. Let $B'$ and $C'$ denote respectively the reflections of $B$ and $C$ in the internal angle bisector of angle $A$. How do I prove that the triangles $ABC$ and $AB'C'$ have the same incenter?
REPLY [0 votes]: Hint. The incentre of AB'C' is obtained by reflecting the incentre of ABC about the bisector.
REPLY [0 votes]: Triangles $ABC$ and $AB'C'$ are symmetric with respect to the angle bisector of $A$, and hence so are their incenters. Also, since the incenters lie on the bisector, they must be the same point. | {"set_name": "stack_exchange", "score": 1, "question_id": 1568973}
TITLE: Constructing a "sheaf of vector fields" for a flasque sheaf of $k$-algebras
QUESTION [0 upvotes]: Let $k$ be a field. We require all algebras to be associative and commutative. Unital algebra morphisms are required to preserve the multiplicative identity.
Let $\mathcal{O}$ be a sheaf of unital $k$-algebras on a topological space $X$.
Assume $\mathcal{O}$ is flasque/flabby.
Definition: Given open $U\subset X$ and $D : \mathcal{O}(U)\rightarrow \mathcal{O}(U)$
a $k$-linear derivation,
we say $D$ is "well-behaved" if:
for any $f,g \in \mathcal{O}(U)$
and any open $V \subset U$,
if $f|_V = g|_V$
then $D(f)|_V = D(g)|_V$.
Now if we define
$$
\mathcal{F}(U) = \{ D \in \mathrm{Hom}_{k\text{-Vect}}(\mathcal{O}(U),\mathcal{O}(U))\,|\, D\text{ is well-behaved derivation} \}
$$
this should produce a sheaf of $k$-Lie algebras, with Lie bracket being the object-wise commutator.
The restrictions are defined for $U \supset V$ by $D|_V(h) = D(\tilde{h})|_V$
where $\tilde{h} \in \mathcal{O}(U)$ is chosen such that $\tilde{h}|_V = h$;
such $\tilde{h}$ exists since $\mathcal{O}$ is flasque;
and $D(\tilde{h})|_V$ is independent of the choice of lift $\tilde{h}$ because $D\in\mathcal{F}(U)$ is well-behaved.
My questions are:
Are the above constructions and assertions valid?
Can we view $\mathcal{F}$ as a "sheaf of vector fields" on $X$?
For example, if $X$ is a smooth manifold with $\mathcal{O}$ the sheaf of smooth real functions, then the above should reproduce the sheaf of sections of $TX$. (The "well-behaved" condition should always hold in this case?)
REPLY [1 votes]: Your definition does produce a valid sheaf. There is another way of doing this which is conceptually more elegant. $\DeclareMathOperator{Hom}{Hom}$
Consider the sheaf $G(U) = \{D \in \Hom(\mathcal{O}|_U, \mathcal{O}|_U) \mid $ for each $V \subseteq U$, $D(V)$ is a $k$-linear derivation$\}$. In the internal logic of sheaves, we are constructing the “set” $G = \{D : \mathcal{O} \to \mathcal{O} \mid D$ is a $k$-linear derivation$\}$, which of course can become a Lie algebra.
Note that $G$ can be constructed regardless of whether $\mathcal{O}$ is flasque. However, when $\mathcal{O}$ is flasque, we see that $G$ and $\mathcal{F}$ are isomorphic.
Indeed, consider the natural transformation $\theta : G \to \mathcal{F}$ defined by $\theta_U(D) = D(U)$. It is easy to see that $\theta$ is well-defined; if we have $D \in G(U)$, then we see that for all $f, g \in \mathcal{O}(U)$, for all $V \subseteq U$, if $f|_V = g|_{V}$, then $D(U)(f)|_{V} = D(V)(f|_V) = D(V)(g|_V) = D(U)(g)|_V$, so $D(U)$ is “well-behaved”. Now suppose we have $V \subseteq U$, $D \in G(U)$, and $f \in \mathcal{O}(V)$. Take some $f’ \in \mathcal{O}(U)$ such that $f’|_V = f$; then $\theta_U(D)|_V(f) = \theta_U(D)(f’)|_V = D(U)(f’)|_V = D(V)(f’|_V) = D(V)(f) = D|_V(V)(f) = \theta_V(D|_V)(f)$. This confirms the naturality of $\theta$.
Now I claim that $\theta$ is an isomorphism. To do this, we explicitly construct the inverse of $\theta_U$. Given $D \in \mathcal{F}(U)$, define $\eta(D)$ to be the natural transformation given by $\eta(D)(V) = D|_V$. This is a natural transformation by the definition of the restriction operator. We see immediately that $\theta_U \circ \eta$ and $\eta \circ \theta_U$ are both the identity.
So we see that $\mathcal{F}$ is just another way of constructing $G$ if we add an extra assumption - that $\mathcal{O}$ is flasque.
Now with our $G$, we can easily define the Lie bracket in the way you would expect. Given $D, E \in G(U)$, we can define $[D, E](V) = [D(V), E(V)]$ (using the ordinary Lie bracket structure on derivations). It is easy to verify that the Lie algebra axioms hold here; they follow from the corresponding set-theoretic facts.
As for your second question, I’ll have to think on it a bit more. | {"set_name": "stack_exchange", "score": 0, "question_id": 4478496} |
TITLE: Change of variables in differential equation?
QUESTION [1 upvotes]: I have the following formula:
$$f(x) = \frac{d^2w(x)}{dx^2}$$
Now I would like to normalize $x$ by dividing it by $L$. This would be the substitution: $$\hat{x}=x/L$$
How would my formula change? (step by step please)
REPLY [1 votes]: Using $x = L \hat{x}$ we have $dx = L d\hat{x}$ and $dx^2 = L^2 d\hat{x}^2$. Therefore,
$$
\frac{d^2}{dx^2} = \frac{1}{L^2} \frac{d^2}{d\hat{x}^2}.
$$
This simple "substitution" is not mathematically rigorous, but you could use the chain rule twice to obtain the same thing; since the change of variable consists only in the multiplication by a constant, this "heuristic" works.
The final expression is
$$
f(L \hat{x}) = \frac{1}{L^2} \frac{d^2}{d\hat{x}^2} w(L\hat{x}).
$$
If, for example, $w(x) = A \exp(Bx)$, we have $f(x) = AB^2 \exp(Bx)$. With the new variable,
$$
A (BL)^2 \exp(BL \hat{x}) = \frac{d^2}{d\hat{x}^2} A \exp(BL \hat{x}).
$$
If you define $\hat{B}=BL$, $\hat{f} = \hat{B}^2 \exp(\hat{B} \hat{x})$ and $\hat{w}=\exp(\hat{B} \hat{x})$, the equation is
$$
\hat{f} = \frac{d^2 \hat{w}}{d\hat{x}^2} .
$$
Therefore, after you normalize $x$ with $\hat{x}$, you should also normalize the functions. If $w$ has units of Kelvin, for example, $f$ has units of $K/m^2$. After the normalization, both $\hat{w}$ and $\hat{f}$ are nondimensional.
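A quick symbolic check of this bookkeeping on the exponential example (a sketch assuming sympy is available):

    import sympy as sp

    A, B, L, xhat = sp.symbols('A B L xhat', positive=True)

    w_hat = A * sp.exp(B * L * xhat)          # w evaluated at x = L*xhat
    f = lambda x: A * B**2 * sp.exp(B * x)    # f(x) = w''(x) in the original variable

    lhs = sp.diff(w_hat, xhat, 2) / L**2      # (1/L^2) d^2/dxhat^2 of w(L*xhat)
    print(sp.simplify(lhs - f(L * xhat)))     # prints 0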
Appendix: rigorous change of variable using chain rule
Let $x=L \hat{x}$. Therefore, $\hat{x}=x/L$. From the chain rule,
$$
\frac{dw}{dx} = \frac{d\hat{x}}{dx} \frac{dw}{d\hat{x}} = \frac{1}{L} \frac{dw}{d\hat{x}}
$$
and
$$
\frac{d^2w}{dx^2}=\frac{d}{dx} \left(\frac{dw}{dx}\right) = \frac{d\hat{x}}{dx} \frac{d}{d\hat{x}}\left(\frac{1}{L} \frac{dw}{d\hat{x}} \right) = \frac{1}{L^2} \frac{d^2 w}{d \hat{x}^2}.
$$ | {"set_name": "stack_exchange", "score": 1, "question_id": 3021356} |
TITLE: Trace matrix inequality
QUESTION [2 upvotes]: Let $A,B$ be positive definite matrices, and assume that
$$
a_{i,j}<{b_{i,j}}
$$
for all $1\leq i,j\leq n$, where $a_{i,j}$ is the $(i,j)$ element of the matrix $A$ and $b_{i,j}$ is the $(i,j)$ element of the matrix $B$. Is it true that
$$
\text{trace}(A^{-1})\geq\text{trace}(B^{-1}).
$$
?
Thank you.
REPLY [1 votes]: No, it isn't true.
Take $A = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, B = \begin{bmatrix} 2 & \sqrt{\frac{10}{3}} \\ \sqrt{\frac{10}{3}} & \frac{7}{3} \end{bmatrix}$. Trace of $B$ is $\frac{13}{3}$ and determinant of B is $\frac{4}{3}$, therefore its eigenvalues are $4$ and $\frac{1}{3}$, and
$$trace(B^{-1}) = \frac{1}{4} + 3 > 2 = trace(A^{-1})$$
A quick numerical check of this counterexample (a sketch assuming numpy):

    import numpy as np

    A = np.eye(2)
    B = np.array([[2.0, np.sqrt(10/3)],
                  [np.sqrt(10/3), 7/3]])

    print((A < B).all())                      # True: a_ij < b_ij entrywise
    print((np.linalg.eigvalsh(B) > 0).all())  # True: B is positive definite
    print(np.trace(np.linalg.inv(A)))         # 2.0
    print(np.trace(np.linalg.inv(B)))         # 3.25

Both matrices are positive definite with $a_{i,j}<b_{i,j}$, yet $trace(B^{-1})=3.25>2=trace(A^{-1})$. | {"set_name": "stack_exchange", "score": 2, "question_id": 323899}
\section{The main result}
\subsection{The Chari-Loktev bases for local Weyl modules in type A}\label{ss:clbase}
In this subsection, we recall the bases given by Chari and Loktev \cite{CL} in terms of POPs (see \cite{RRV2}).
Fix notation and terminology as in~\S\ref{notn}.
\subsubsection{}\label{sss:cl}
Let $d$, $d^\prime$ be non-negative integers and $\piseq$ be a partition that fits into
the rectangle $(d,d^\prime)$. For $\alpha\in R^+$, the monomial $x^\pm_{\alpha}(d,\,d',\,\piseq)$ corresponding to the complement of $\piseq$ is given by
$$x^\pm_{\alpha}(d,\,d',\,\piseq):= \big(\prod_{i=1}^{d} x^\pm_\alpha\otimes t^{d^\prime-\pi_i}\big).$$
Set $x_{i,j}^\pm(d,\,d',\,\piseq):=x_{\alpha_{i,j}}^\pm(d,\,d',\,\piseq)$ for all $1\leq i\leq j\leq r$.
\subsubsection{}
Let $\lambda\in P^+$ and ~$\pop$ be a POP with bounding sequence $\lseq$. Let
$d_{i,j},\, d'_{i,j}$, $1\leq i\leq j\leq r$, be the differences and $\pijiseq$, $1\leq i\leq j\leq r$, be the partition
overlay of $\pop$.
Define $\rhocl{\pop}\in \mathbf{U}(\mathfrak{n}^-\otimes\complex[t])$ as follows:
\begin{equation}\label{e:clmonom} \rhocl{\pop}:=
x_{1,1}^-(d_{1,1}, \,d^\prime_{1,1},\, \pijiseqone)\,
\big(\prod_{i=1}^2 x_{i,2}^-(d_{i,2},\, d^\prime_{i,2}, \,\pijiseqtwo)\big)\,\cdots\,
\big(\prod_{i=1}^r x_{i,r}^-(d_{i,r}, \,d^\prime_{i,r},\, \pijiseqr)\big).
\end{equation}
The order of the factors matters in the expression for $\rhocl{\pop}$.
Since $[x^-_{i,j}, x^-_{p,q}]=0, \forall\,\,1\leq i\leq p\leq q\leq j\leq r$, it is easy to see that
\begin{equation}\label{e:clmonom2}
\rhocl{\pop}=\big(\prod_{j=1}^r x_{1,j}^-(d_{1,j}, \, d^\prime_{1,j},\, \pijiseqoner)\big)\,
\big(\prod_{j=2}^r x_{{2}, j}^-(d_{2, j}, \, d^\prime_{2, j}, \,\underline{\pi(j)}^2)\big)\,\cdots\,
x_{r,r}^-(d_{r,r}, \, d^\prime_{r,r},\, \pijiseqrr).
\end{equation} Set $\rho_{\pop_{r+1}}:=1.$
We observe that $$\rho_{\pop_s}=\big(\prod_{j=s}^r x_{{s}, j}^-(d_{s, j}, \, d^\prime_{s, j},\, \pijsseq)\big)\,\rho_{\pop_{s+1}},\quad\,\,\forall \,\,s\in I.$$
Define $\vcl{\pop}:=\epsilon_\pop\,\rhocl{\pop} \,w_\lambda,$ where $\epsilon_\pop\in\{\pm1\}$ is defined in \S\ref{ss:MTp}.
The following theorem is proved in \cite{CL} (see \cite[Theorem 4.5]{RRV2} for the current formulation).
\begin{theorem}\label{t:cl}\cite{CL,RRV2}
The elements
$v_\pop$, $\pop$ belongs to the set $\popset_\lambda$ of POPs with
bounding sequence~$\lseq$, form a basis for the local Weyl module~$W(\lambda)$.
\end{theorem}
We shall call the bases given in the last theorem the {\em Chari-Loktev (or CL)} bases.
\subsection{The main theorem: stability of the CL bases} We wish to study for $\lambda\in P^+$ and $k\in\mathbb{Z}_{\geq0}$, the compatibility of CL bases with respect to the embeddings
$W(\lambda)\hookrightarrow W(\lambda+k\theta)$ in $L(\Lambda_{i_\lambda})$ (see \S \ref{ss:inclweyl}).
We first recall the weight preserving embedding from $\pnotl$ into $\pkl$ given in \cite[Corollary~5.13]{RRV2} at the level of the parametrizing sets of these bases:
for~$\pop\in\pnotl$, let the shift $\pop^k$ of $\pop$ by $k$ denote its image in $\pkl$.
For every $\lambda\in P^+$, we will fix the following choice of $w_\lambda$ in $L(\Lambda_{i_\lambda})$:
$$w_\lambda:=T_{\lambda}\,v_{\Lambda_0},$$
where $T_\lambda$ is the linear isomorphism from $L(\Lambda_0)\rightarrow L(\Lambda_{i_\lambda})$ defined in \S \ref{ss:tlamdef}.
\begin{lemma}\label{l:wt}
Let $\lambda\in P^+$ and $\pop\in\pnotl$.
Then
$\textup {the weight of}\,\, \vcl{\pop} \,\,\textup{in} \,\,L(\Lambda_{i_\lambda})\,\, \textup{is}$ $$t_{\textup{wt}\,\pop-\barLam}(\Lambda_{i_\lambda})-d(\pop)\delta, $$
where $\barLam$ denotes the restriction of $\Lamil$ to $\lieh$.
\end{lemma}
\begin{proof}It is clear from the definition of $\vcl{\pop}$ that its weight in $L(\Lambda_{i_\lambda})$ is
\begin{align}
&t_{\lambda}(\Lambda_0)-\sum_{1\leq i\leq j\leq r}d_{i,j}\alpha_{i,j}+\big(\triangle(\pop)-\sum_{1\leq i\leq j\leq r}|\pijiseq|\big)\delta\nonumber\\
&=\Lambda_0+{\textup{wt}}\,\pop-\big(\frac{1}{2}(\lambda|\lambda)-\triangle(\pop)+\sum_{1\leq i\leq j\leq r}|\pijiseq|\big)\delta\nonumber\\
&=\Lambda_0+\textup{wt}\,\pop-\big(\frac{1}{2}({\textup{wt}}\,\pop|{\textup{wt}}\,\pop)+d(\pop)\big)\delta \label{e:lemwt1}
\end{align}
where the last equality follows from \eqref{e:maxarea}.
Since $\Lambda_{i_\lambda}$ is of level~$1$, we obtain using \cite[(6.5.3)]{K} that
\beq\label{e:tgL2}
t_{\textup{wt}\,\pop-\barLam}(\Lamil)=\Lambda_0+{\textup{wt}}\,\pop+\frac{1}{2}((\Lamil|\Lamil)-({\textup{wt}}\,\pop|{\textup{wt}}\,\pop))\delta.
\eeq
Since $(\Lamil|\Lamil)=0$, we get the result from \eqref{e:lemwt1}--\eqref{e:tgL2}.
\end{proof}
The following is immediate from Lemma \ref{l:wt} and \eqref{e:wtd}.
\begin{lemma}Let $\lambda\in P^+$, $\pop\in\pnotl$, and $k\in\mathbb{Z}_{\geq0}$. Then the basis vectors $\vcl{\pop}\in W(\lambda)$ and $\vcl{\pop^k}\in W(\lambda+k\theta)$ lie in the same weight space of $L(\Lambda_{i_\lambda})$.
\end{lemma}
It is not true that $\vcl{\pop}$ and $\vcl{\pop^k}$ are equal as elements of $L(\Lambda_{i_\lambda})$ (see \cite[Example 1]{RRV1}).
We will however see below that $\vcl{\pop}=\vcl{\pop^k}$ for all {\em stable} $\pop$. More precisely, let
$$\mathbb{P}^{\textup{stab}}(\lambda):=\{\pop\in\pnotl:d_{\ell,\ell}(\pop)\geq\depthpl,\,\, \forall \,\,1\leq \ell\leq r\} \,\,(\textup{see}\,\,\S\S\ref{s:patterns}-\ref{s:pops}).$$
The following theorem is the main result of this paper.
\begin{theorem}\label{MT} Let $\lieg=\mathfrak{sl}_{r+1}$.
Let $\lambda\in P^+$ and $\pop\in\mathbb{P}^{\textup{stab}}(\lambda)$. Then
$$\vcl{\pop^k}=\vcl{\pop} \qquad\textup{for all}\,\, k\in\mathbb{Z}_{\geq0},$$
i.e., they are equal as elements of $L(\Lambda_{i_\lambda})$.
\end{theorem}
This theorem is proved in \S\ref{pf:MT}.
\begin{remark}
Theorem \ref{MT} is conjectured in \cite[Conjecture~6.1]{RRV2} and the $r=1$ case is proved in \cite[Theorem~6]{RRV1} under the additional assumption that
$$d(\pop)\leq\begin{cases}
\textup{min}\{d_{1,1}(\pop),\, d^\prime_{1,1}(\pop)\}, &\lambda_1 \,\,\textup{even},\\
\textup{min}\{d_{1,1}(\pop),\, d^\prime_{1,1}(\pop)-1\}, &\lambda_1\,\, \textup{odd}.
\end{cases}$$
\end{remark}
\subsection{Bases for level one representations of $\widehat{\lieg}$}\label{ss:basesforllami}
Fix $i\in\widehat{I}$, $\gamma\in Q$, and $d\in\mathbb{Z}_{\geq0}$. Consider the irreducible module $L(\Lambda_i)$ and its weight space of weight
$t_{\gamma}(\Lambda_i)-d\delta$.
Set $\mu=\varpi_i+\gamma$, the restriction of $t_{\gamma}(\Lambda_i)-d\delta$ to $\csa^*$.
Let $\lambda\in P^+$ such that $\mu$ is a weight of the corresponding irreducible representation $V(\lambda)$ of $\lieg$.
Note that $i_\lambda=i$.
For $k\in\mathbb{Z}_{\geq0}$, from Lemma \ref{l:wt}, we get that
the CL basis indexing set for $W(\lambda+k\theta)_{t_{\gamma}(\Lambda_i)-d\delta}$ is the set $\plmkd$ of POPs with
bounding sequence $\lseq+k\tseq$ with weight $\mu$ and depth $d$.
From \cite[Theorem~5.10]{RRV2}, for $k\geq d$, there exist a bijection from the set $\rpartd$ of all $r$-colored
partitions of $d$ onto $\plmkd$. Since this bijection is produced by the ``shift by $k$'' operator, we have
\beq\label{e:cruforstabbasis}
d_{\ell,\ell}(\pop)\geq k,\quad\forall\,\, 1\leq \ell\leq r, \qquad \textup{for every}\,\,\pop\in\plmkd.
\eeq
For $k\geq d$,
by Proposition \ref{wtsbasicrep}, we now have
$$W(\lambda+k\theta)_{t_{\gamma}(\Lambda_i)-d\delta}= L(\Lambda_i)_{t_{\gamma}(\Lambda_i)-d\delta},$$
and the set $\mathcal{B}_{\gamma, d}:=\{v_\pop:\pop\in\plmkd\}$ is a basis for $L(\Lambda_i)_{t_{\gamma}(\Lambda_i)-d\delta}$.
By Theorem \ref{MT}, using \eqref{e:cruforstabbasis}, the set $\mathcal{B}_{\gamma, d}$ is independent of the choice of $k$ for any $k\geq d$.
Finally, to obtain a basis for $L(\Lambda_i)$, we take the disjoint union over the weights of $L(\Lambda_i)$:
$$\mathcal{B}:=\bigsqcup_{\gamma, d}\mathcal{B}_{\gamma, d}.$$
We may view $\mathcal{B}$ as a direct limit of the CL bases for the Demazure submodules of $L(\Lambda_i)$. | {"config": "arxiv", "file": "1612.01484/main_results.tex"} |
TITLE: Why does a right-circular cylinder helps reduce surface area of the former International Prototype of Kilograms
QUESTION [0 upvotes]: I read on Wikipedia that the right-circular cylinder shape helps reduce the surface area of the former IPK, but could not find an explanation as to why. So how does such a shape help reduce its surface area? Wouldn't a spherical shape be better for that purpose?
REPLY [3 votes]: Though it's not precisely clear, it's possible that they meant something like this:
The surface area of a cylinder of unit volume is minimized when its height is equal to its diameter.
So it wasn't claiming that a cylinder was the minimum-surface-area shape, but rather that a cylinder of those particular dimensions had a lower surface area than cylinders of the same volume with different dimensions.
This statement can be proved fairly straightforwardly: a cylinder of diameter $D$ and height $H$ has volume $V=\frac{1}{4}\pi D^2H$ and surface area $S=\frac{1}{2}\pi D^2+\pi DH$. If we fix the volume to be a constant $V_0$, then we have that $H=\frac{4V_0}{\pi D^2}$. Substituting, we get the surface area as a function of the diameter:
$$S=\frac{1}{2}\pi D^2+\frac{4V_0}{D}$$
To minimize $S$, we first find points where the derivative with respect to $D$ is zero:
$$\frac{dS}{dD}=\pi D-\frac{4V_0}{D^2}=0$$
The solution to this is $D=(4V_0/\pi)^{1/3}$. To see whether this is a maximum or a minimum, we can find the second derivative at that point:
$$\frac{d^2S}{dD^2}=\pi+\frac{8V_0}{D^3}$$
This is always positive for $D>0$, so this is indeed a minimum.
So the diameter at which surface area is minimized is $D=(4V_0/\pi)^{1/3}$. This means that the height at which surface area is minimized, for a constant volume, is:
$$H=\frac{4V_0}{\pi(4V_0/\pi)^{2/3}}=(4V_0/\pi)^{1/3}=D$$ | {"set_name": "stack_exchange", "score": 0, "question_id": 579216} |
TITLE: Limit of partial sum
QUESTION [2 upvotes]: I am trying to find the limit of this infinite sequence:
$$\lim_{n \rightarrow\infty} \frac{1}{n}\left(\sqrt{\frac{1}{n}}+\sqrt{\frac{2}{n}}+\sqrt{\frac{3}{n}}+\ldots+1\right)$$
I can see that:
$$\left(\sqrt{\frac{1}{n}}+\sqrt{\frac{2}{n}}+\sqrt{\frac{3}{n}}+\ldots+1\right) \lt n$$
So the whole expression is bounded by $1$, but I am having a hard time finding the limit. Any help pointing me into the right direction will be appreciated.
REPLY [1 votes]: $\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
Another interesting approach is the use of Stolz-Ces$\grave{a}$ro Theorem:
\begin{align}
&\color{#f00}{\lim_{n \to \infty}{1 \over n}\pars{\root{1 \over n} +
\root{2 \over n} + \root{3 \over n} + \cdots + 1}} =
\lim_{n \to \infty}{1 \over n}\sum_{k = 1}^{n}\root{k \over n} =
\lim_{n \to \infty}{1 \over n^{3/2}}\sum_{k = 1}^{n}k^{1/2}
\\[5mm] = &\
\lim_{n \to \infty}{\pars{n + 1}^{1/2} \over \pars{n + 1}^{3/2} - n^{3/2}} =
\lim_{n \to \infty}{\pars{n + 1}^{1/2}\bracks{\pars{n + 1}^{3/2} + n^{3/2}} \over
\pars{n + 1}^{3} - n^{3}}
\\[5mm] = &\
\lim_{n \to \infty}{\pars{n + 1}^{1/2}\bracks{\pars{n + 1}^{3/2} + n^{3/2}} \over
3n^{2} + 3n + 1}
=
\color{#f00}{2 \over 3}\lim_{n \to \infty}\bracks{%
{\pars{1 + 1/n}^{1/2} \over
1 + 1/n + 1/\pars{3n^{2}}}\,{\pars{1 + 1/n}^{3/2} + 1 \over 2}} = \color{#f00}{2 \over 3}
\end{align}
As a quick numerical sanity check (a sketch assuming numpy), the partial sums indeed approach this value:

    import numpy as np

    for n in (10**2, 10**4, 10**6):
        s = np.sqrt(np.arange(1, n + 1) / n).sum() / n
        print(n, s)       # 0.6714..., 0.66671..., 0.6666671...

which is consistent with the limit $2/3$ obtained above. | {"set_name": "stack_exchange", "score": 2, "question_id": 1938882}
TITLE: Prove that $\int_a^bf(x)dx=\int_{a+c}^{b+c}g(x)dx$
QUESTION [0 upvotes]: I am having trouble with this problem.
Let $f$ be integrable on $[a,b]$. Suppose $c\in \mathbb{R}$ and $g: [a+c,b+c] \to \mathbb{R}$ is such that $g(x)=f(x-c), x\in[a+c,b+c]$. Show that
$$\int_a^bf(x)dx=\int_{a+c}^{b+c}g(x)dx$$
I am thinking that this problem can be solved using the change of variables formula. Can you give me some idea? Thank you very much!
REPLY [0 votes]: Indeed, if you set $t=x+c$, the Jacobian is $1$ and thus
$$\int_a^b f(x)dx=\int_{a+c}^{b+c}f(t-c)dt=\int_{a+c}^{b+c}g(t)dt.$$
REPLY [0 votes]: You simply take $y=x+c$ (or $x=y-c$) which gives $\frac{dy}{dx}=1$ or $dy=dx$ and the new boundaries are $y=a+c$ and $y=b+c$ (because the old boundaries were $x=a$ and $x=b$) and the integral becomes
$$\int_a^bf(x)dx = \int_{a+c}^{b+c}f(y-c)dy = \int_{a+c}^{b+c} g(y) dy = \int_{a+c}^{b+c} g(x)dx$$
where in the last step I renamed the dummy variable $y$ to $x$. | {"set_name": "stack_exchange", "score": 0, "question_id": 4442999} |
TITLE: Calculate the maximum in the Collatz sequence
QUESTION [8 upvotes]: Consider the notorious Collatz function
$$ T(n) = \begin{cases}(3n+1)/2&\text{ if $n$ is odd,}\\n/2&\text{ if $n$ is even.}\end{cases} $$
One of the most important acceleration techniques for the convergence test is the use of a sieve (test the $k$ least significant bits of $n$; the sieve has $2^k$ entries), and to test only those numbers that do not join the path of a lower number within $k$ steps. This technique is explained in great detail, e.g., here or here.
For example, consider the sieve for $k=2$ and particularly the numbers of the form $4n+1$ which join the path of $3n+1$ in two steps. Their path is
$$ 4n+1 \rightarrow 6n+2 \rightarrow 3n+1 \text{.}$$
What I don't understand is how this can be used to search for the highest number occurring in the sequence (path records, in the terminology of Eric Roosendaal). The sieve cuts the calculation before the computation of any intermediate value (which can actually be the maximum, like the value $6n+2$ in the above example). How can I detect that $4n+1$ leads to a maximum if no $6n+2$ is computed? Testing the path of $3n+1$ no longer makes sense, since the maximum $6n+2$ occurs before this term. Am I missing something?
REPLY [2 votes]: (Notation: residue $n_0\mod 2^{\lceil i \log_23\rceil}$ = residue $b\mod2^k$ from your wiki page)
About the "discarded" 5 reaching maximum 8 (or 16), already reached by "surviving" 3:
One of the discarded sequences is the inverse V-shape sequence, which rises for $i$ steps of $f(x)=\frac{3x+1}{2}$ and then falls below the initial value by successive divisions by $2$ (see here). Of all the discarded sequences $2^{\lceil i \log_23\rceil}n+n_0$ for a specific $n$, this is the type of sequence that potentially reaches the highest value:
$$(2^{\lceil i \log_23\rceil}n+n_0+1)\frac{3^i}{2^{i}}-1$$
Note: $n_0\leq 2^{\lceil i \log_23\rceil}-3$ and the exact value can be found in the link above
e.g. with $4n+1=5$, where $n_0=1$, $i=1$, $n=1$, which reaches $8$ before dropping to $4<5$
One of the surviving sequences is the straight line which rises for the whole $k={\lceil i \log_23\rceil}$ steps of $f(x)=\frac{3x+1}{2}$. Of all the surviving sequences for a specific $n$, this is the sequence (starting from $2^{\lceil i \log_23\rceil}(n+1)-1$) that reaches the highest value (limited to $k={\lceil i \log_23\rceil}$ steps):
$$3^{\lceil i \log_23\rceil}(n+1)-1$$
Note: here we always have $n_0= 2^{\lceil i \log_23\rceil}-1$
e.g. with $4n+3=7$ where $i=1$,$n=1$ which reaches $17$ (in 2 steps), or with $n=0$: $3$ reaches $8$
Now it is easy to show that the highest value that can be reached by a discarded sequence at $n$ is smaller than (or equal to) the highest value already reached by a surviving sequence at $n-1$:
e.g. the discarded $4(1)+1=5$ reaches $8$, which was already reached by the surviving $4(1-1)+3=3$
Is the surviving highest value at $n-1$ greater than the discarded value at $n$?
$$3^{\lceil i \log_23\rceil}n-1 \geq (2^{\lceil i \log_23\rceil}n+n_0+1)\frac{3^i}{2^{i}}-1$$
and with $n_0< 2^{\lceil i \log_23\rceil}-1$, we just need to show that
$$3^{\lceil i \log_23\rceil}n-1 \geq (2^{\lceil i \log_23\rceil}(n+1))\frac{3^i}{2^{i}}-1$$
$$\Big(\frac{3}{2}\Big)^{\lceil i \log_23\rceil}n \geq \Big(\frac{3}{2}\Big)^i(n+1)$$
$$\Big(\frac{3}{2}\Big)^{\lceil i \log_2\frac{3}{2}\rceil} \geq 1+\frac{1}{n}$$
which is already true for $n-1=0$ when $i\geq 3$ (checked manually for $i=1$ and $i=2$ by using the exact value of $n_0$ in those cases)
e.g. with $n-1=0$: the discarded $32n+23$ reaches $188$, but the surviving $32(n-1)+31$ already reached $242$
Note: you can multiply both sides by 2 to get the "real" maximum (16 instead of 8).
The key idea is that even if the discarded inverse V-shape at $n$ were at the highest possible residue $n_0= 2^{\lceil i \log_23\rceil}-3$, it would reach a smaller value than the straight line at $n-1$ (always with residue $n_0= 2^{\lceil i \log_23\rceil}-1$).
This means that record paths are always found among the residues $b\mod2^k$ (in other words, at $2^k\cdot n+b$ with $n=0$); see the sketch below.
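For illustration, here is a minimal Python sketch of such a sieve (my own simplification, not Roosendaal's code: a residue $b$ is eliminated as soon as the tracked value $c\cdot n+c_0$ satisfies $c<2^k$ and $c_0\le b$):

    def surviving_residues(k):
        """Residues b mod 2^k that are not eliminated by the k-step sieve."""
        survivors = []
        for b in range(1 << k):
            coeff, const = 1 << k, b        # current value is coeff*n + const
            dropped = False
            for _ in range(k):              # coeff stays even during these k steps
                if const % 2:               # odd for every n: apply (3x+1)/2
                    coeff, const = 3 * coeff // 2, (3 * const + 1) // 2
                else:                       # even for every n: apply x/2
                    coeff, const = coeff // 2, const // 2
                if coeff < (1 << k) and const <= b:
                    dropped = True          # below 2^k*n + b for every n >= 1
                    break
            if not dropped:
                survivors.append(b)
        return survivors

    print(surviving_residues(2))   # [3]: only 4n+3 survives
    print(surviving_residues(3))   # [3, 7]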
EDIT:
Even more: when sieving $2^{k+1}$, values below $2^k$ that are dropping cannot produce new path records (obviously), but values above $2^k$ that do not survive the $2^{k+1}$ sieve are now known, and their maximum is still the RHS above:
indeed, the condition $n_0+2^{\lceil i \log_23\rceil}< 2^{\lceil i \log_23\rceil+1}-1$, or $n_0< 2^{\lceil i \log_23\rceil}-1$, does not change, and neither does the value of $i$ (climbing steps), since the last step was a drop below the initial value.
So even if the max value on the LHS does not climb anymore at step $k+1$, it would still be higher (the whole equation stays the same).
This means that new record paths are only found among the surviving residues $b\mod2^k$.
No need to check discarded residues at all, even within the sieve range. | {"set_name": "stack_exchange", "score": 8, "question_id": 3454674}
\begin{document}
\begin{frontmatter}
\title{In Catilinam IV\thanksref{footnoteinfo}}
\thanks[footnoteinfo]{This paper was not presented at any IFAC
meeting. Corresponding author M.~T.~Cicero. Tel. +XXXIX-VI-mmmxxi.
Fax +XXXIX-VI-mmmxxv.}
\author[Paestum]{Marcus Tullius Cicero}\ead{cicero@senate.ir},
\author[Rome]{Julius Caesar}\ead{julius@caesar.ir},
\author[Baiae]{Publius Maro Vergilius}\ead{vergilius@culture.ir}
\address[Paestum]{Buckingham Palace, Paestum}
\address[Rome]{Senate House, Rome}
\address[Baiae]{The White House, Baiae}
\begin{keyword}
Cicero; Catiline; orations.
\end{keyword}
\begin{abstract}
Cum M.~Cicero consul Nonis Decembribus senatum in aede Iovis
Statoris consuleret, quid de iis coniurationis Catilinae sociis
fieri placeret, qui in custodiam traditi essent, factum est, ut
duae potissimum sententiae proponerentur, una D.~Silani consulis
designati, qui morte multandos illos censebat, altera C.~Caesaris,
qui illos publicatis bonis per municipia Italiae distribuendos
ac vinculis sempiternis tenendos existimabat.
\end{abstract}
\end{frontmatter}
\section{Introduction}
Video, patres conscripti, in me omnium vestrum ora atque oculos esse
conversos, video vos non solum de vestro ac rei publicae, verum
etiam, si id depulsum sit, de meo periculo esse sollicitos. Est mihi
iucunda in malis et grata in dolore vestra erga me voluntas, sed eam,
per deos inmortales, deponite atque obliti salutis meae de vobis ac
de vestris liberis cogitate. Mihi si haec condicio consulatus data
est, ut omnis acerbitates, onunis dolores cruciatusque perferrem,
feram non solum fortiter, verum etiam lubenter, dum modo meis
laboribus vobis populoque Romano dignitas salusque pariatur.
\begin{figure}
\begin{center}
\includegraphics[height=4cm]{jcaesar.eps}
\caption{Gaius Julius Caesar, 100--44 B.C.}
\label{fig1}
\end{center}
\end{figure}
\subsection{A subsection}
Marcus Tullius Cicero, 106--43 B.C. was a Roman statesman, orator,
and philosopher. A major figure in the last years of the Republic,
he is best known for his orations against Catiline\footnote{
This footnote should be very brief.}
and for his mastery of Latin prose \cite{Heritage:92}. He was a
contemporary of Julius Caesar (Fig.~\ref{fig1}).
\section{The argument}
Some words might be appropriate describing equation~(\ref{e1}), if
we had but time and space enough.
\begin{equation} \label{e1}
{{\partial F}\over {\partial t}} =
D{{\partial^2 F}\over {\partial x^2}}.
\end{equation}
See \cite{Abl:56}, \cite{AbTaRu:54}, \cite{Keo:58} and
\cite{Pow:85}.
This equation goes far beyond the celebrated theorem ascribed to the great
Pythagoras by his followers.
\begin{thm}
The square of the length of the hypotenuse of a right triangle equals the sum of the squares
of the lengths of the other two sides.
\end{thm}
\section{Epilogue}
A word or two to conclude, and this even includes some inline
maths: $R(x,t)\sim t^{-\beta}g(x/t^\alpha)\exp(-|x|/t^\alpha)$.
\begin{ack}
Partially supported by the Roman Senate.
\end{ack}
\bibliographystyle{plain}
\bibliography{autosam}
\end{document} | {"config": "arxiv", "file": "2012.10726/autosam.tex"}
TITLE: probability of playing music player on shuffle and listening to every song.
QUESTION [0 upvotes]: I have a few problems I am trying to work out but I am not totally confident in my answers:
The problem is such:
Suppose you have a playlist consisting of four songs. You play your playlist in shuffle mode. In this mode, after the current song is played, the next song is chosen randomly from the
other three tracks. This ensures you never hear the same song twice in a row. Let X be the number of songs you listen to until you've heard all four different songs.
1. How many sequences of 4 songs are there where no song plays twice in a row? If we label
the songs {A, B, C, D}, then examples are ABCD and ABAB but not ABBA.
For this problem I just thought the answer was (4^4) = 256 Does this make sense?
2.
I have to find the value of P(X=4). To do this I used the formula n!/(n^n), because n^n is the number of possible sequences of n songs, and because the number of sequences of n songs including every song is n!.
So my answer was: P(X=4) = 24/256 = 3/32
I am trying to really understand how this problem works, and I would like some more insight as to whether these answers make sense / how I should be tackling a problem like this. How would I compute problems like these?
Any help is appreciated.
REPLY [0 votes]: There are n=4 songs in the queue. With the condition that once a song is played, next song is picked randomly from the remaining songs.
$P_1$ = Probability of selecting 1st song as unique song from the given 4 songs = 4/4 = 1
$P_2$ = Probability of selecting 2nd song as unique song from the remaining 3 songs = 3/3 = 1
$P_3$ = Probability of selecting 3rd song as unique song from the remaining 3 songs = 2/3 (as 1 of them has already been played)
$P_4$ = Probability of selecting 4th song as unique song from the remaining 3 songs = 1/3 (as 2 of them has already been played)
Let P(X=k | n=4) denote the probability of listening to k songs until we have heard all 4 different tracks.
Obviously, for k < n, P(X=k) = 0.
P(X=4) = $P_1$ * $P_2$ * $P_3$ * $P_4$ = 2/9
P(X=5) = P(X=4) * (1/3 + 2/3) = 2/9
P(X=6) = P(X=4) * [$(1/3+2/3)^2$-2/9] = 14/81
A quick Monte Carlo check of these values (a sketch in plain Python):

    import random
    from collections import Counter

    def listen_until_all(k=4):
        cur = random.randrange(k)      # first song: uniform over all k songs
        heard = {cur}
        count = 1
        while len(heard) < k:          # next song: uniform over the other k-1
            cur = random.choice([s for s in range(k) if s != cur])
            heard.add(cur)
            count += 1
        return count

    trials = 200_000
    freq = Counter(listen_until_all() for _ in range(trials))
    for x in (4, 5, 6):
        print(x, freq[x] / trials)     # ~0.222, ~0.222, ~0.173

The empirical frequencies match $2/9$, $2/9$ and $14/81$ up to sampling noise. | {"set_name": "stack_exchange", "score": 0, "question_id": 752588}
TITLE: Conditionally convergent sequences and implications
QUESTION [1 upvotes]: If $\sum b_n$ is conditionally convergent, how can I show that $\sum b_{4n}$ doesn't in general converge?
Assume $(b_n)$ is an arbitrary sequence of reals.
All I need is a counterexample, right?
REPLY [1 votes]: Indeed, one has to find a counter-example in order to show that $S':=\sum_{n=0}^\infty a_{2n}$ may not converge.
If $\sum_{n=1}^{\infty} |a_n|$ is convergent, then so is $\sum_{n=1}^{\infty} |a_{2n} |$ and $S'$ converges. So, if $(a_n)_{n\geqslant 1}$ is a counter-example, then necessarily $\sum_{n=1}^{+\infty}|a_{n} |$ does not converge.
But $S:=\sum_{n=0}^{\infty}a_n$ has to be convergent; therefore, the convergence is due to a compensation.
One can think of a series such that the sign of the $n$th term is the opposite of that of the $(n+1)$th. This is called an alternating series. So write $a_n :=(-1)^nb_n$, where $b_n\geqslant 0$. We want the convergence of $\sum_{n=1}^\infty (-1)^nb_n$ but not that of $\sum_{n=1}^\infty (-1)^{2n}b_{2n}=\sum_{n=1}^\infty b_{2n}$. We can finally choose $b_n :=1/n$; the same choice also works for the original question, since $\sum_{n=1}^{\infty} a_{4n}=\sum_{n=1}^{\infty} 1/(4n)$ diverges as well. | {"set_name": "stack_exchange", "score": 1, "question_id": 1231896}
TITLE: If there is a minimizer, can I show the function is quasi-convex
QUESTION [1 upvotes]: There are some early discussion here. (Thanks for @Umberto for his clear and nice comment!)
Now I reformed my problem. Please have a look.
Let $\|\cdot\|$ be a norm on a space $M$, and let $(x(t))_{t\geq 0}\subset M$ be a family of points.
I know the function $t\to \|x(t)\|$ is convex and strictly decreasing w.r.t. $t$. Also, $\lim_{t\to\infty}\|x(t)\|=c>0$.
I also know that $\lim_{t\to\infty}\|x(t)-x_0\|=c'>\|x(t')-x_0\|$, where $x_0$ is a point in $M$ and $t'>0$ is finite; i.e., the minimizer of the problem below cannot be attained as $t\to\infty$.
Moreover, I have, there exists a point $x_a$ so that the function $t\to \|x(t)-x_a\|$ is strictly increasing.
My question:
can I show the uniqueness and existence of minimizer $x(t_0)$?
$$
x(t_0):=\operatorname{argmin}_{t\geq 0}\|x(t)-x_0\|
$$
can I show that function $t\to \|x(t)-x_0\|$ is quasi-convex?
REPLY [0 votes]: By your assumptions, there exists $M > 0$ such that the minimizing $t$ is between $0$ and $M$. Hence, there exists a sequence $\{t_n\} \subset [0,M]$ with
$$\|x(t_n) - x_0\| \to \inf_{t \ge 0} \| x(t) - x_0 \|.$$
W.l.o.g., the sequence $\{t_n\}$ converges to some $\bar t$. Since $t \mapsto x(t)$ is continuous, we find
$$\|x(t_n) - x_0\| \to \|x(\bar t) - x_0\| = \inf_{t \ge 0}\|x(t)-x_0\|.$$
I think this is not possible. In $\mathbb{R}^2$ consider a spiral
$$x(t) = (t^{-1} \, \sin(t^{-1}), t^{-1} \, \cos(t^{-1}))$$ and some $x_0$. Then, $\|x(t) - x_0\|$ alternates between increasing and decreasing on some subintervals. | {"set_name": "stack_exchange", "score": 1, "question_id": 1511239} |
TITLE: Sequence of continuous function converging pointwise to continuous function is equicontinuous?
QUESTION [1 upvotes]: I've proven the following "theorem":
Let $I \subset \mathbb{R}$ be an interval, $(f_n: I \rightarrow \mathbb{R})_{n \in \mathbb{N}}$ be a family of continuous functions converging pointwise to a continuous function $f: I \rightarrow \mathbb{R}$ on $I$. Then: $(f_n)_{n \in \mathbb{N}}$ is equicontinuous on I.
Now my problem is, that here Equicontinuity of a pointwise convergent sequence of monotone functions with continuous limit additionally the $f_n$ have to be monotonic. So is my proof a generalization, or am I just missing something? Here is my proof:
Proof: Let $\epsilon > 0$. Observe first:
\begin{equation}
| f_n(x) - f_n(y) | \leq |f_n(x) - f(x)| + |f_n(y) - f(y)| + |f(x)- f(y)|
\end{equation}
Now, by the pointwise convergence of $(f_n)_{n \in \mathbb{N}}$, there is an $N \in \mathbb{N}$ such that for all $n \geq N$ we have $|f_n(x) - f(x)|<\frac{\epsilon}{3}$ and $|f_n(y) - f(y)| < \frac{\epsilon}{3}$. Further, there is a $\delta > 0$ such that $|f(x) - f(y)| < \frac{\epsilon}{3}$ for $|x-y| < \delta$, by continuity of $f$. Hence we have shown that there is an $N \in \mathbb{N}$ and a $\delta > 0$ such that for all $n \geq N$
\begin{equation}
|f_n(x) - f_n(y)| < \frac{\epsilon}{3} + \frac{\epsilon}{3} + \frac{\epsilon}{3} = \epsilon
\end{equation}
holds. Now let $n < N$. Then, by continuity of $f_n$, there is a $\delta_n$ such that $|x-y| < \delta_n$ implies $|f_n(x) - f_n(y)| < \epsilon$. Setting
\begin{equation}
\tilde{\delta} = \min_{n < N} \delta_n
\end{equation}
(which exists and is greater than $0$) we obtain that for all $n < N$ the following holds:
\begin{equation}
|x - y| < \tilde{\delta} \Rightarrow |f_n(x) - f_n(y) | < \epsilon
\end{equation}
Setting now $\hat{\delta} = \min \{\delta, \tilde{\delta} \}$ we have that for all $n \in \mathbb{N}$ the following holds:
\begin{equation}
|x-y| < \hat{\delta} \Rightarrow |f_n(x)- f_n(y) | < \epsilon
\end{equation}
Hence we have shown that for all $\epsilon > 0$ there is a $\hat{\delta} > 0$ such that for all $n \in \mathbb{N}$, $|x-y| < \hat{\delta}$ implies $|f_n(x) - f_n(y)| < \epsilon$.
REPLY [1 votes]: Your proof is wrong, because it basically implies that every pointwise convergence of continuous functions to a continuous function is uniform. I think the flaw is that your $N$ depends not only on $x$ (not important, since it is fixed) but on $y$ as well! For a concrete counterexample to the claimed theorem, take $f_n(x)=nxe^{-nx}$ on $I=[0,\infty)$: each $f_n$ is continuous and $f_n\to 0$ pointwise, but $|f_n(1/n)-f_n(0)|=e^{-1}$ while $1/n\to 0$, so the family is not equicontinuous at $x=0$. | {"set_name": "stack_exchange", "score": 1, "question_id": 3041672}
TITLE: How to identify binary stars in $N$-body simulation?
QUESTION [3 upvotes]: Binary stars constitute a significant portion of the stars of a globular cluster.
I would like to verify that this is true in my $N$-body simulation, but I don't know how to decide whether a star in the system is a binary.
Visually this is easy to do, as binaries are identified as two stars at very close distance orbiting about their center of mass, but I need a mathematical condition which I can then translate to code.
REPLY [6 votes]: You'd need to calculate the binding energy of pairs of particles in your simulation. If for a pair this energy is negative then the pair is bound forming a binary system.
I assume you already have an effective way of calculating the potential, so this should not add much more execution time, since you just need to check for points that are close enough | {"set_name": "stack_exchange", "score": 3, "question_id": 362181} |
TITLE: How does a complex function plots a given circle line?
QUESTION [1 upvotes]: If I have the complex function $w = \frac{1}{z}$, I have to show how it maps the circle $x^2+y^2 +2x-4y+1 = 0$. I am not sure how to do it. Here is my try:
$z = x+yi \Rightarrow w = \frac{1}{x + yi}$. Then $u(x;y) = \frac{1}{x}$ and $v(x;y) = \frac{1}{y}$. Then I express $x$ from the circle equation: $x(x+2) = y^2 -4y +1 \Rightarrow x_1 = y^2 -4y +1 $ and $x_2 = y^2 -4y -1$. From there I get a system: $$\begin{matrix} u = y^2-4y +1 \\ v = \frac{1}{y} \end{matrix} \Rightarrow y = \frac{1}{v} \Rightarrow u = \frac{v^2}{v-1}.$$ I know that I am doing something completely wrong, because $u$ is the real part of the function and not a circle, so I don't know how to show how this function maps the circle indicated above. Any help would be appreciated!
REPLY [0 votes]: Let $\quad a\bar a=r^2+1\quad$ and consider the circle $\quad |z-a|=r$
It is transformed into the circle $\quad |w-\bar a|=r\quad $ by the mapping $w=\frac 1z$.
We have $r^2=|z-a|^2=(z-a)(\bar z-\bar a)=z\bar z-a\bar z-\bar az+a\bar a=\frac 1{w\bar w}(1-aw-\bar a\bar w+a\bar aw\bar w)$
Multiplying by $w\bar w$ this gives
$0=\underbrace{1}_{a\bar a-r^2}-aw-\bar a\bar w+\underbrace{(a\bar a-r^2}_{1})w\bar w=(\bar a-w)(a-\bar w)-r^2\iff |w-\bar a|^2=r^2$
And since moduli are non-negative, this is equivalent to $|w-\bar a|=r$
In our case, $r=2$ and $a=-1+2i$ satisfy the conditions. | {"set_name": "stack_exchange", "score": 1, "question_id": 3646898}
\begin{document}
\maketitle
\begin{abstract}
The aim of this note is to provide a pedagogical survey of the recent works
\cite{HHN,HHN2} concerning the local behavior of the eigenvalues of large
complex correlated Wishart matrices at the edges and cusp points of the spectrum: Under quite general conditions,
the eigenvalues fluctuations at a soft edge of the
limiting spectrum, at the hard edge when it is present, or at
a cusp point, are respectively described by mean of the Airy kernel, the Bessel kernel, or the Pearcey
kernel.
Moreover, the eigenvalues fluctuations at several soft edges are asymptotically independent.
In particular, the asymptotic fluctuations of the matrix condition number
can be described. Finally, the next order term of the hard edge
asymptotics is provided.
\end{abstract}
\setcounter{tocdepth}{2}
\section{The matrix model and assumptions}
Consider the $N\times N$ random matrix defined as
\eq
\label{main matrix model}
{\bv M}_N = \frac 1N {\bf X}_N {\bf \Sigma}_N{\bf X}_N^*
\qe
where ${\bf X}_N$ is an $N\times n$ matrix with independent and identically
distributed (i.i.d.) entries with zero mean and unit variance, and
${\bf \Sigma}_N$ is a $n\times n$ deterministic positive definite Hermitian
matrix.
The random matrix $\bv M_N$ has $N$ non-negative eigenvalues, but which may be
of different nature. Indeed, the smallest $N-\min(n,N)$ eigenvalues are
deterministic and all equal to zero, whereas the other $\min(n,N)$ eigenvalues
are random. The problem is then to describe the asymptotic behavior of the
random eigenvalues of $\bv M_N$, as both dimensions of ${\bf X}_N$ grow to
infinity at the same rate.
Let us mention that the $n\times n$ random covariance matrix
\[
\widetilde{\bv M}_N = \frac 1N
{\bf \Sigma}_N^{1/2} {\bf X}_N^* {\bf X}_N {\bf \Sigma}_N^{1/2}\ ,
\]
which is also under consideration, has exactly
the same random eigenvalues as $\bv M_N$, and hence results on the random
eigenvalues can be carried out from one model to the other immediately.
The global behavior of the spectral distribution of $\widetilde{\bv M}_N$ in
the large dimensional regime is known since the work of Mar\v cenko
and Pastur~\cite{MP}, where it is shown that this spectral distribution
converges to a deterministic probability measure $\mu$ that can be identified.
In this paper, we will be interested in the local behavior of the eigenvalues
of $\widetilde{\bv M}_N$ near the edge points and near the so-called cusp
points of the support of $\mu$. The former will be called the extremal
eigenvalues of $\widetilde{\bv M}_N$.
The random matrices ${\bv M}_N$ and $\widetilde{\bv M}_N$ are ubiquitous
in multivariate statistics~\cite{book-bai-chen-liang-2009}, mathematical finance
\cite{bouchaud-noise-dressing-99,guhr-credit-risk-2014}, electrical engineering and signal processing \cite{book-couillet-debbah}, etc. Indeed, in multivariate statistics, the performance
study of the Principal Component Analysis algorithms \cite{Joh01} requires the
knowledge of the fluctuations of the extremal eigenvalues of $\widetilde{\bv M}_N$. In mathematical finance, $\widetilde{\bv M}_N$ represents the empirical covariance matrix obtained from a sequence of asset returns. In signal processing, $\widetilde{\bv M}_N$ often stands for the empirical
covariance matrix of a spatially correlated signal received by an array of antennas, and source detection \cite{BDMN11,KN09}
or subspace separation \cite{MDCM} algorithms also rely on
the statistical study of these extremal eigenvalues.
In this article, except when stated otherwise, we restrict ourselves to the case of complex Wishart matrices. Namely, we make the following assumption.
\begin{assumption}
\label{ass:gauss}
The entries of $\bv X_N$ are i.i.d. standard complex Gaussian random variables.
\end{assumption}
Concerning the asymptotic regime of interest, we consider here the large random matrix regime, where the number of rows and columns of $\bv M_N$ both grow to infinity at the same pace. More precisely, we assume $n=n(N)$ and $n,N\to \infty$ in such a way that
\eq
\label{evdistrMN}
\lim_{N\rightarrow\infty} \frac{n}{N}=\gamma\in (0,\infty)\ .
\qe
This regime will be simply referred to as $N\to \infty$ in the sequel.
Turning to $\bv \Sigma_N$, let us denote by
$0<\lambda_1\leq \cdots\leq \lambda_n$ the eigenvalues of this matrix and let
\eq
\label{nuN}
\nu_N=\frac 1n \sum_{j=1}^n \delta_{\lambda_j}
\qe
be its spectral measure. Then we make the following assumption.
\begin{assumption}\
\label{ass:nu}
\begin{enumerate}
\item
The measure $\nu_N$ weakly converges towards a limiting probability measure
$\nu$ as $N\rightarrow\infty$, namely
\[
\frac{1}{n}\sum_{j=1}^n f(\lambda_j)\xrightarrow[N\to\infty]{} \int f(x)\nu(\d x)
\]
for every bounded and continuous function $f$.
\item
For $N$ large enough, the eigenvalues of $\bv \Sigma_N$ stay in a compact
subset of $(0,+\infty)$ independent of $N$, i.e.
\eq
0\ <\ \liminf_{N\rightarrow\infty}\lambda_1,\qquad
\sup_{N}\lambda_n\ <\ +\infty.
\qe
\end{enumerate}
\end{assumption}
Under these assumptions, a comprehensive description of the large $N$ behavior
of the eigenvalues of $\bv M_N$ can be made. To start with, we recall in
Section~\ref{section global} some classical results describing the global
asymptotic behavior of these eigenvalues, as a necessary step for studying
their local behavior. We review the results of Mar\v cenko-Pastur~\cite{MP} and
those of
Silverstein-Choi~\cite{SC}, which show among other things that the spectral
measure of $\bv M_N$ converges to a limit probability measure $\mu$, that $\mu$
has a density away from zero, that the support of $\mu$ can be delineated, and
that the behavior of the density of $\mu$ near the positive endpoints (soft
edges) of this support can be characterized. We moreover complete the picture
by describing the behavior of the limiting density near the origin when it is
positive there (hard edge), and also when it vanishes in the interior of the
support (cusp point). The latter results are extracted from~\cite{HHN2}.
Next, in Section \ref{section local} we turn to the eigenvalues local
behavior. More precisely, we investigate the behavior of the random eigenvalues
after zooming around several points of interest in the support, namely the soft
edges, the hard edge when existing, and the cusp points. In a word, it is shown
in the works~\cite{HHN,HHN2} that the Airy
kernel, the Bessel kernel, and the Pearcey kernel describe the local statistics
around the soft edges, the hard edge, and the cusp points respectively,
provided that a regularity condition holds true. In particular, the extremal
eigenvalues fluctuate according to Tracy-Widom laws.
In Section \ref{sec:proofs}, we provide sketches of proofs. We first recall an important expression for the kernel $\K_N$ associated
with the (random) eigenvalues of $\bv M_N$, and then outline how one can prove asymptotic convergence towards the Airy, Pearcey or Bessel kernels by zooming around the points of interest: either a soft edge, a cusp point or the hard edge.
In Section \ref{OP}, we provide a list of open questions, directly related to the results of the paper.
\paragraph*{Acknowledgements.}
WH is pleased to thank the organizers of the {\em Journ\'ees MAS 2014} where the project of this note was initiated.
\begin{comment}
The works \cite{HHN,HHN2} under survey benefited from fruitful discussions with Folkmar Bornemann, Steven Delvaux, Manuela Girotti, Antti Knowles, Anthony Metcalfe, and Sandrine P\'ech\'e.
\end{comment}
During this work, AH was supported by the grant KAW 2010.0063 from the
Knut and Alice Wallenberg Foundation.
The work of WH and JN was partially supported by the program
``mod\`eles num\'eriques'' of the French Agence Nationale de la Recherche
under the grant ANR-12-MONU-0003 (project DIONISOS).
\section{Global behavior}
\label{section global}
Since the seminal work of Mar\v cenko and Pastur~\cite{MP}, it is known that
under Assumptions~\ref{ass:gauss} and~\ref{ass:nu} the spectral measure of
${\bv M}_N$ almost surely (a.s.) converges weakly towards a
limiting probability measure $\mu$ with a compact support. Namely we have
\eq
\label{mu}
\frac{1}{ N}\Tr f({\bv M}_N)\xrightarrow[N\to\infty]{a.s.}
\int f(x)\mu(\d x)
\qe
for every bounded and continuous function $f$. As a probability measure, $\mu$
can be characterized by its Cauchy transform: this is the holomorphic function
defined by
\[
m(z)=\int \frac{1}{z-\lambda}\,\mu(\d \lambda),\qquad
z\in \C_+ =\big\{z\in\C : \; {\rm Im}(z)>0\big\},
\]
and which takes its values in $\C_-=\{z\in\C : \; {\rm Im}(z)<0\}$. More precisely, for any open interval $I\subset\R$ whose endpoints are not atoms of $\mu$, we have the inversion formula
\[
\mu(I)=-\frac{1}{\pi} \lim_{\epsilon\to 0}\int_I {\rm Im} \big(m(x+i\epsilon)\big)\d x .
\]
For every $z\in\C_+$, the Cauchy transform $m(z)$ of $\mu$ happens to be the unique solution
$m \in \C_-$ of the fixed-point equation
\begin{equation}
\label{cauchy eq}
m = \left( z - \gamma \int \frac{\lambda}{1 - m \lambda} \nu(\d\lambda)
\right)^{-1} \ ,
\end{equation}
where $\gamma$ and $\nu$ were introduced in \eqref{evdistrMN} and Assumption~\ref{ass:nu}.
Moreover, using the free probability terminology \cite{AGZ,HP}, the limiting distribution $\mu$ is also known to be the free multiplicative convolution of the Mar\v cenko-Pastur law \eqref{mp}--\eqref{dens-mp} with $\nu$, and equation \eqref{cauchy eq} is a consequence of the subordination property of the free multiplicative convolution \cite{Cap}.
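In practice, equation \eqref{cauchy eq} also lends itself to numerical evaluation. The following minimal Python sketch (our own illustration, not taken from \cite{HHN,HHN2}) approximates the density of $\mu$ by plain fixed-point iteration; the iteration map preserves $\C_-$ and is reliable for ${\rm Im}(z)$ bounded away from zero, while damping or more iterations may be needed very close to the real axis. The spectrum \texttt{lam} and the value of $\gamma$ are those of Figure~\ref{fig:g2bulks} below.
\begin{verbatim}
import numpy as np

def m_mu(z, lam, gamma, n_iter=10000):
    # Solve m = 1/(z - gamma * int lambda/(1 - m*lambda) nu(dlambda))
    # by fixed-point iteration, nu being the uniform measure on `lam`.
    m = 1.0 / z                  # initial guess; lies in C_- for z in C_+
    for _ in range(n_iter):
        m = 1.0 / (z - gamma * np.mean(lam / (1.0 - m * lam)))
    return m

def rho(x, lam, gamma, eps=1e-3):
    # density of mu via the inversion formula, at height eps above R
    return -m_mu(x + 1j * eps, lam, gamma).imag / np.pi

lam = np.array([1.0] * 7 + [3.0] * 3)   # nu = 0.7*delta_1 + 0.3*delta_3
print(rho(1.0, lam, gamma=0.1))         # one point of the density
\end{verbatim}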
For example, in the case where $\nu = \delta_1$, which happens e.g. when $\bv\Sigma_N = I_n$, this equation
has an explicit solution and the measure $\mu$ can be recovered explicitly. Namely,
\eq
\label{mp}
\mu(\d x)=\left(1-\gamma\right)^+ \, \delta_0 + \rho(x)\d x,
\qe
where $x^+=\max(x,0)$ and the density $\rho$ has the expression
\begin{equation}
\label{dens-mp}
\rho(x)=\frac{1}{2\pi x}\sqrt{(\frak b-x)\big(x-\frak a)}\;\bs 1_{[\frak a,\frak b]}(x),\qquad \frak a=(1-\sqrt\gamma)^2,\qquad \frak b=(1+\sqrt\gamma)^2.
\end{equation}
This is the celebrated Mar\v cenko-Pastur law.
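For the record, \eqref{dens-mp} can be extracted directly from \eqref{cauchy eq}: when $\nu=\delta_1$, the fixed-point equation reduces to the quadratic equation
\[
z\,m^2-(z+1-\gamma)\,m+1=0 ,
\]
whose solution mapping $\C_+$ to $\C_-$ reads, for the appropriate branch of the square root,
\[
m(z)=\frac{(z+1-\gamma)-\sqrt{(z+1-\gamma)^2-4z}}{2z}\ .
\]
Since $(x+1-\gamma)^2-4x=(x-\frak a)(x-\frak b)$ with $\frak a,\frak b$ as in \eqref{dens-mp}, the inversion formula then yields the density of \eqref{dens-mp}.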
When $\nu$ has a more complicated form, it is in general impossible to obtain an
explicit expression for $\mu$, except in a few particular cases. Nonetheless,
it is possible to make a detailed analysis of the properties of this measure,
and this analysis was done by Silverstein and Choi in~\cite{SC}. These
authors started by
showing that $\lim_{z\in\C_+ \to x} m(z) \equiv m(x)$ exists for every
$x \in \R^* = \R - \{ 0 \}$. Consequently, the function $m(z)$ can be
continuously extended to $\C_+ \cup \R^*$, and furthermore, $\mu$ has a
density on $\R^*$ defined as $\rho(x) = - \pi^{-1} {\rm Im} \, (m(x))$. We still have the representation
\eq
\label{densite}
\mu(\d x)=\left(1-\gamma\right)^+ \, \delta_0 + \rho(x)\d x
\qe
with this new $\rho$, making $\rho(x) \d x$ the limiting distribution of the random eigenvalues
of $\bv M_N$. As is common in random matrix theory, we shall refer to
the support of $\rho(x)\d x$ as the \textbf{bulk}, and we denote it (with a slight abuse of notation) by $\supp(\rho)$.
Silverstein and Choi
also showed that $\rho$ is real analytic wherever it is positive, and they
moreover characterized the compact support $\supp(\mu)$ following the ideas
of~\cite{MP}. More specifically, one can see that the function $m(z)$
has an explicit inverse (for the composition law) on $m(\C_+)$ defined by
\begin{equation}
\label{g(m)}
g(m) = \frac{1}{m} +
\gamma \int \frac{\lambda}{1 - m \lambda} \nu(\d\lambda) ,
\end{equation}
and that this inverse extends to $\C_- \cup D$ and is real analytic on $D$,
where $D$ is the open subset of the real line
\begin{equation}
\label{D}
D=\bigl\{x\in\R : \ x\neq 0, \ x^{-1}\notin \supp(\nu)\bigr\} .
\end{equation}
It was proved in \cite{SC} that
\[
\R - \supp(\rho) = \big\{ g(m) : \; m\in D, \ g'(m) < 0 \big\} .
\]
An illustration of these results is provided by Figures~\ref{fig:g2bulks}
and~\ref{fig:f2bulks}.
\begin{figure}[h]
\centering
\includegraphics[width=0.7\linewidth]{g2bulks.pdf}
\caption{Plot of $g:D\to \mathbb{R}$ for $\gamma=0.1$ and
$\nu = 0.7\delta_1 + 0.3\delta_3$. In this case,
$D~=~(-\infty,0)~\cup~(0,\frac 13)~\cup~ (\frac 13,1)~\cup~(1,\infty)$.
The two thick segments on the vertical axis represent $\supp(\rho)$.}
\label{fig:g2bulks}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.7\linewidth]{f2bulks.pdf}
\caption{Plot of the density $\rho$ in the framework of
Figure~\ref{fig:g2bulks}.}
\label{fig:f2bulks}
\end{figure}
Of interest in this paper are the left edges, the right edges and the cusp
points of $\supp(\rho)$.
\noindent
A \textbf{left edge} is a real number $\frak a$ satisfying, for every $\delta>0$ small enough,
\[
\int_{\frak a-\delta}^{\frak a}\rho(x)\d x=0,\qquad \int_{\frak a}^{\frak a+\delta}\rho(x)\d x>0\ .
\]
A \textbf{right edge} is a real number $\frak a$ satisfying, for every $\delta>0$ small enough,
\[
\int_{\frak a-\delta}^{\frak a}\rho(x)\d x>0,\qquad \int_{\frak a}^{\frak a+\delta}\rho(x)\d x=0\ .
\]
A \textbf{cusp point} is a real number $\frak a$ such that $\rho(\frak a)=0$ and, for every $\delta>0$ small enough,
\[
\int_{\frak a-\delta}^{\frak a}\rho(x)\d x>0\quad \textrm{and}\quad \int_{\frak a}^{\frak a+\delta}\rho(x)\d x>0\ .
\]
Of course all edges and cusp points are positive numbers, except perhaps the
leftmost edge. When the leftmost edge is the origin, it is common in random
matrix theory to refer to it as the \textbf{hard edge}. In contrast, any
positive edge is also called a \textbf{soft edge}.
The results of~\cite{SC} summarized above show that the study of the map $g$ on
the closure $\overline D$ of $D$ provides a complete description for the edges
and the cusp points. First, a right edge is either a local minimum value of $g$
attained in $D$, or belongs to $g(\partial D)$, which means there is $\frak c
\in\partial D=\overline D\setminus D$ such that
$\lim_{x\to\frak c,\, x\in D}g(x)$
exists, is finite, and equals that edge. In the former case, $\rho(x)$
behaves like a square root near the edge.
\begin{proposition}
\label{prop global R}
If $\frak a$ is a right edge, then either there is a unique $\frak c\in D$ such that
\eq
\label{g right}
g(\frak c)=\frak a, \qquad g'(\frak c)=0,\qquad g''(\frak c)>0,
\qe
or $\frak a\in g(\partial D)$. In the former case, we have
\eq
\label{sqrt right}
\rho(x)=\frac{1}{\pi}\left(\,\frac{2}{g''(\frak c)}\,\right)^{1/2}\big(\frak a-x\big)^{1/2}\;(1+o(1))\ , \qquad x\to\frak a_- \ .
\qe
Conversely, if $\frak c\in D$ satisfies \eqref{g right}, then $\frak a=g(\frak c)$ is a right edge and \eqref{sqrt right} holds true.
\end{proposition}
The case where an edge lies in $g(\partial D)$ turns out to be quite delicate.
In the forthcoming description of the eigenvalues local behavior near the
edges, we shall restrict ourselves to the edges arising as local extrema of $g$,
see also Section \ref{OP} for further discussion. Notice also that if $\nu$ is
a discrete measure, as exemplified by Figures~\ref{fig:g2bulks}
and~\ref{fig:f2bulks}, then $g$ is infinite on $\partial D$ and in particular a right edge cannot belong to $g(\partial D)$:
the right edges are in this case in a one-to-one correspondence with the local
minima of $g$ on $D$.
The situation is similar for the soft left edges, except that they correspond
to local maxima.
\begin{proposition}
\label{prop global L}
If $\frak a>0$ is a left edge, then either there is a unique $\frak c\in D$ such that
\eq
\label{g left}
g(\frak c)=\frak a, \qquad g'(\frak c)=0,\qquad g''(\frak c)<0,
\qe
or $\frak a\in g(\partial D)$. In the former case, we have
\eq
\label{sqrt left}
\rho(x)=\frac{1}{\pi}\left(\frac{2}{-g''(\frak c)}\right)^{1/2}\big(x-\frak a\big)^{1/2}\;(1+o(1))\ , \qquad x\to\frak a_+\ .
\qe
Conversely, if $\frak c\in D$ satisfies \eqref{g left}, then $\frak a=g(\frak c)$ is a left edge and \eqref{sqrt left} holds true.
\end{proposition}
Propositions \ref{prop global R} and \ref{prop global L} have been established in \cite{SC}. We state below their counterparts for the hard edge and a cusp point.
The hard edge setting turns out to be similar to the soft left edge one, except that $\frak c$ is now located at infinity, and $\rho(x)$ behaves like an inverse square root near the hard edge. More precisely, observe that the map $g$ is holomorphic at $\infty$ and $g(\infty)=0$, in the sense that the map $z\mapsto g(1/z)$ is holomorphic at zero and vanishes at $z=0$. We also denote by $g'(\infty)$ and $g''(\infty)$ the first and second derivatives of the latter map evaluated at $z=0$.
\begin{proposition}
\label{prop global 0}
The bulk presents a hard edge if and only if
\eq
\label{g hard}
g(\infty)=0,\qquad g'(\infty)=0,\qquad g''(\infty)<0,
\qe
or equivalently if $\gamma=1$. In this case, we have
\eq
\label{sqrt hard}
\rho(x)=\frac{1}{\pi}\left(\frac{2}{-g''(\infty)}\right)^{-1/2} x^{-1/2}\;(1+o(1))\ ,\qquad x\to 0_+\ .
\qe
\end{proposition}
More precisely, we have the explicit formulas $g'(\infty)=1-\gamma$ and
$g''(\infty)=-2\gamma \int \lambda^{-1} \nu(\d \lambda)$. In particular the
statement $g''(\infty)<0$ is always true, and so is $g(\infty)=0$ as explained
above; we included them in \eqref{g hard} to stress the analogy
with~\eqref{g left}.
A simple illustration of Propositions~\ref{prop global R}
to~\ref{prop global 0} is provided by the Mar\v cenko-Pastur law.
From~\eqref{dens-mp}, one immediately sees that
$\rho(x)\sim (\frak b- x)^{1/2}$ as $x\to \frak b_-$, and that a
similar square root behavior near $\frak a$ holds if and only if $\frak a> 0$,
that is $\gamma\neq 1$. If $\gamma=1$, i.e., $\frak a=0$, then
$\rho(x)\sim x^{-1/2}$ as $x\to 0_+$ instead.
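As a further check in the Mar\v cenko-Pastur case, where $g(m)=\frac1m+\frac{\gamma}{1-m}$, the critical point equation $g'(m)=0$ reads $(1-m)^2=\gamma m^2$ and has the two solutions $\frak c_\pm=1/(1\pm\sqrt\gamma)$. One computes
\[
g(\frak c_+)=(1+\sqrt\gamma)^2=\frak b,\qquad g''(\frak c_+)>0,
\qquad\text{and}\qquad
g(\frak c_-)=(1-\sqrt\gamma)^2=\frak a,\qquad g''(\frak c_-)<0 ,
\]
the latter for $\gamma\neq 1$, so that Propositions~\ref{prop global R} and~\ref{prop global L} indeed recover the edges of \eqref{dens-mp} together with their square root behaviors.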
We now turn to the cusp points. Those of interest here correspond
to inflection points of $g$ where this function is non-decreasing. Moreover, a cubic root behavior of the density $\rho(x)$ is observed near such a cusp point, hence justifying the terminology (recall that a cusp usually refers to the curve defined by $y^2=x^3$).
\begin{proposition}
\label{prop global C} Let $\frak a$ be a cusp point, set $\frak c = m(\frak a)$ and assume $\frak c\in D$. Then
\eq
\label{g cusp}
g(\frak c)=\frak a,\qquad g'(\frak c)=0,\qquad g''(\frak c)=0,
\qquad \text{and} \; \ g'''(\frak c)>0.
\qe
Moreover,
\eq
\label{sqrt cusp}
\rho(x)=\frac{\sqrt 3}{2\pi}\left(\,\frac{6}{g'''(\frak c)}\,\right)^{1/3}\big|x-\frak a\big|^{1/3}\;(1+o(1))\ , \qquad x\to\frak a\ .
\qe
Conversely, if $\frak c\in D$ satisfies $g'(\frak c)=g''(\frak c)=0$, then
the real number $\frak a = g(\frak c) $ is a cusp point, $g'''(\frak c)>0$
and~\eqref{sqrt cusp} holds true.
\end{proposition}
Propositions \ref{prop global 0} and \ref{prop global C} appear
in~\cite{HHN2}. Proposition~\ref{prop global C} is illustrated in Figures~\ref{fig:gcusp}
and~\ref{fig:cusp}.
\begin{figure}[h]
\centering
\includegraphics[width=0.7\linewidth]{gcusp.pdf}
\caption{Plot of $g$ for $\gamma\simeq 0.336$ and
$\nu = 0.7\delta_1 + 0.3\delta_3$. The thick segment on the vertical axis
represents $\supp(\mu)$. The point $\frak a$ is a cusp point.
}
\label{fig:gcusp}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.7\linewidth]{fcusp.pdf}
\caption{Plot of the density of $\mu$ in the framework of
Figure~\ref{fig:gcusp}.}
\label{fig:cusp}
\end{figure}
\section{Local behavior}
\label{section local}
The study of the eigenvalues local behavior of random matrices is a central
topic in random matrix theory.
When dealing with large Hermitian random matrices, it is recognized
that the local correlations of the eigenvalues around an edge where
the density vanishes like a square root should be described
by a particular determinantal point process involving the Airy kernel (see below), whose largest particle fluctuates according to the Tracy-Widom law. For instance, this has been established for unitarily invariant random matrices as well as for Wigner matrices, see e.g. the surveys \cite{Dei,Erd} and references therein. Similarly, the
Bessel kernel is expected to describe the fluctuations around a hard edge
where the density vanishes like an inverse square root, and the Pearcey kernel
around a cusp point with cubic root behavior. Let us also mention that the sine
kernel is expected around any point where the density is positive, and that more
sophisticated behaviors have been observed in matrix models where the
density vanishes like other rational powers, but we will not
further investigate these aspects here. Another interesting feature not covered by this survey is the study of the random eigenvectors, see e.g. \cite{BGN, BKYY}.
The purpose of this section is to present the central results of
\cite{HHN,HHN2}, where such typical local behaviors arise for the complex
correlated Wishart matrices under consideration, at every edge and cusp point
satisfying a certain regularity condition.
In fact, the sequence $(\nu_N)$ may have a deep impact on the limiting local fluctuations, and one may not recover the expected fluctuations without further conditions. A first manifestation of this phenomenon is the Baik-Ben Arous-P\'ech\'e (BBP) phase transition, which we present in Section \ref{BBP}.
In a nutshell, this phase transition shows that slight variations of the family $(\nu_N)$ may modify the fluctuations at a soft edge, which may then no longer be described by the Tracy-Widom law. Such phenomena motivate the introduction of a regularity condition which essentially rules out this kind of behavior.
In Section \ref{csq RC}, we establish the existence of finite $N$ approximations of the edges and cusp points under study which satisfy the regularity condition; the reader not interested in these precise definitions may skip this section.
Next, in Section \ref{section Airy} we introduce the Airy kernel, the Tracy-Widom law, and state our results concerning the soft edges. In Section \ref{section Bessel}, we introduce the Bessel kernel and describe the fluctuations at the hard edge. As an application, we provide in Section \ref{section condition} a precise description of the asymptotic behavior of the condition number of $\bv M_N$. Finally, in Section \ref{section Pearcey} we introduce the Pearcey kernel and state our result concerning the asymptotic behavior near a cusp point.
\subsection{The BBP phase transition and the regularity assumption}
\label{BBP}
First, assume $\bv \Sigma_N$ is the identity matrix, so that the (limiting) spectral distribution $\nu$ of $\bv \Sigma_N$ is $\delta_1$, and hence the limiting density $\rho(x)$ is provided by~\eqref{dens-mp}.
If $x_{\max}$ stands for the maximal eigenvalue of $\bv M_N$, then it has been established that $x_{\max}$ converges a.s. towards the right edge and fluctuates at the scale $N^{2/3}$ according to the Tracy-Widom law \cite{Jo}. Next, following Baik, Ben Arous and P\'ech\'e \cite{BBP}, assume instead that $\bv \Sigma_N$ is a finite rank additive perturbation of the identity, the rank of the perturbation being independent of $N$. Then we still have $\nu=\delta_1$ and the limiting density $\rho(x)$ remains unchanged. They established that if the strength of the perturbation is limited, then the behavior of $x_{\max}$ is the same as in the non-perturbed case, see \cite[Theorem 1.1(a), $k=0$]{BBP}. On the contrary, if the perturbation is strong enough, then $x_{\max}$ converges a.s.~outside of the bulk and the fluctuations are of a different nature, see \cite[Theorem 1.1(b)]{BBP}. But in this case, one can consider instead the largest eigenvalue that actually converges to the right edge and show that the Tracy-Widom fluctuations still occur (this is a consequence of Theorem~\ref{th:fluctuations-TW} below). However, they also established that there is an intermediary regime, where $x_{\max}$ converges a.s. to the right edge and fluctuates at the scale $N^{2/3}$ but not according to the Tracy-Widom law, see \cite[Theorem 1.1(a), $k>1$]{BBP}, hence leaving the random matrix universality class even though the right edge still exhibits a square root behavior. In that regime the fluctuations are described by a deformation of the Tracy-Widom law, but for general $\bv\Sigma_N$'s even more exotic behaviors must be expected.
In conclusion, although the eigenvalues global behavior only depends on the limiting parameters $\nu$ and $\gamma$, the local behavior is in addition quite sensitive to the way the spectral measure $\nu_N$ of $\bv\Sigma_N$ converges to its limit $\nu$. In order to obtain universal fluctuations in the more general setting under investigation, it is thus necessary to add an extra condition at the edges, and actually at the cusp points too. A closer look at the non-universal intermediary regime considered by Baik, Ben Arous and P\'ech\'e reveals that, if we write the right edge as $g(\frak c)$, see Proposition \ref{prop global R} and the comments below it, then some of the inverse eigenvalues of $\bv\Sigma_N$ converge towards $\frak c$. Recalling that the $\lambda_j$'s stand for the eigenvalues of $\bv\Sigma_N$, this motivates us to introduce the following condition.
\begin{definition} A real number $\frak c$ satisfies the \textbf{regularity condition} if
\eq
\label{RC}
\liminf_{N\to\infty}\min_{1\leq j\leq n}\left|\frak c-\frac{1}{\lambda_j}\right|>0.
\qe
Moreover, if $\frak c$ satisfies the regularity condition, we then say that $g(\frak c)$ is \textbf{regular}.
\end{definition}
\begin{remark}
\label{reg edge}
Propositions \ref{prop global R} and \ref{prop global L} tell us that every soft edge reads $g(\frak c)$ for some $\frak c\in \overline D$. In fact, since $g(0)=+\infty$ and $\mathrm{Supp}(\mu)$ is compact, necessarily $\frak c\neq 0$. If we moreover assume the soft edge to be regular then, since by definition $D = \{ x \in \R : x \neq 0,\, x^{-1} \not\in \supp(\nu) \}$ and because $\nu_N$ converges weakly to $\nu$, necessarily $\frak c\in D$. In particular, Propositions \ref{prop global R} and \ref{prop global L} yield that at a regular soft edge the density shows a square root behavior. As regards the hard edge, the analogue of the regularity condition turns out to be $\liminf_N\lambda_1>0$ and is therefore contained in Assumption~\ref{ass:nu}.
\end{remark}
\begin{remark}
We show in \cite{HHN} that, if $\gamma>1$, then the leftmost edge $\frak a$ is always regular: namely, there exists a regular $\frak c\in D$ such that $\frak a= g(\frak c)$, and in fact $\frak c<0$.
\end{remark}
Before stating our results on the eigenvalues local behavior around the regular edges or cusp points, we first establish the existence of the appropriate scaling parameters used in the later statements.
\subsection{Consequences of the regularity condition and finite $N$ approximations for the edges and the cusp points}
\label{csq RC}
Recall from Section~\ref{section global} that the Cauchy transform of the
limiting eigenvalue distribution $\mu$ of $\bv M_N$ is defined as the unique
solution $m \in \C_-$ of the fixed-point equation~\eqref{cauchy eq}.
We now consider the probability measure $\mu_N$ induced by replacing $(\gamma, \nu)$ with its finite horizon analogue
$(n/N, \nu_N)$ in this equation (we recall that $\nu_N$ was introduced in \eqref{nuN}). Namely, let
$\mu_N$ be the probability measure whose Cauchy transform is defined as
the unique solution $m \in \C_-$ of the fixed-point equation
\[
m = \left( z - \frac nN \int \frac{\lambda}{1 - m \lambda} \nu_N(\d\lambda)
\right)^{-1} \ .
\]
The probability measure $\mu_N$ should be thought of as a deterministic
approximation of the distribution of the eigenvalues of $\bv M_N$ at finite
$N$, and is referred to as the \textbf{deterministic equivalent} of the
spectral measure of $\bv M_N$. The measure $\mu_N$ reads
\[
\mu_N(\d x)=\left(1-\frac{n}{N}\right)^+ \delta_0 + \rho_N(x)\d x,
\]
and one can apply all the results stated in Section \ref{section global} to
describe $\rho_N$, after replacement of $g$ with
\eq
\label{gN}
g_N(z)=\frac 1{z}+\frac nN\int\frac{\lambda}{1-z\lambda}\nu_N(\d \lambda) .
\qe
Recalling that $D$ has been introduced in \eqref{D}, the following proposition encodes the essential consequence of the regularity condition.
\begin{proposition}
\label{gN->g}If $\frak c\in D$ satisfies the regularity condition \eqref{RC}, then there exists $\delta>0$ such that $g_N$ is holomorphic on $\{z\in\C:\;|z-\frak c|< \delta\}\subset D$ for every $N$ large enough and converges uniformly towards $g$ there.
\end{proposition}
It is an easy consequence of Montel's theorem. Now, if a sequence of holomorphic functions $h_N$ converges uniformly to a (holomorphic) function $h$ on an open disc, then a standard result from complex analysis ensures that the $k$th order derivative $h_N^{(k)}$ also converges uniformly to $h^{(k)}$ there, for every $k\geq 1$. Moreover, Hurwitz's theorem states that, if $h$ has a zero $\frak c$ of multiplicity $\ell$ in this disc, then $h_N$ has exactly $\ell$ zeros there, counting multiplicity, converging towards $\frak c$ as $N\to\infty$.
Thus, as a consequence of the previous proposition, by applying Hurwitz's
theorem to $g_N'$ (and using the symmetry $g_N'(\bar z)=\overline{g_N'(z)}$), it is easy to obtain the following statement.
\begin{proposition}
\label{edge cN}
Assume $\frak c\in D$ satisfies the regularity condition \eqref{RC} and moreover
\[
g'(\frak c)=0,\qquad g''(\frak c)<0,\qquad resp.\quad g''(\frak c)>0.
\]
Then there exists a sequence $(\frak c_N)$, unique up to a finite number of terms, converging to $\frak c$ and such that, for every $N$ large enough, we have $\frak c_N\in D$ and
\[
\lim_{N\to\infty}g_N(\frak c_N)=g(\frak c),\qquad g_N'(\frak c_N)=0,\qquad g_N''(\frak c_N)<0,\qquad resp.\quad g_N''(\frak c_N)>0.
\]
\end{proposition}
Having in mind Propositions \ref{prop global L} and \ref{prop global R}, this proposition thus states that if one considers a regular left (resp. right) soft edge $\frak a$, and thus $\frak a=g(\frak c)$ with $\frak c\in D$ by Remark \ref{reg edge}, then there exists a sequence, unique up to a finite number of terms, of left (resp. right) soft edges $\frak a_N=g_N(\frak c_N)$ for the deterministic equivalent $\mu_N$ converging towards $\frak a$. These soft edges $(\frak a_N)$ are finite $N$ approximations of the edge $\frak a$, while the $\frak c_N$'s are finite $N$ approximations of the preimage $\frak c$.
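To make these finite $N$ approximations concrete, here is a minimal numerical sketch (our own illustration, not taken from \cite{HHN,HHN2}): the critical point $\frak c_N$ is computed as a zero of $g_N'$ by bracketing, the bracket $(0.05,\,0.3)$ being read off from a plot of $g_N$ in the setting of Figure~\ref{fig:g2bulks}, and $\frak a_N$, $\sigma_N$ then follow as in \eqref{cst R}.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

# g_N and its first two derivatives, nu_N being uniform on the lambda_j's
def gN(x, lam, c):     # c stands for the ratio n/N
    return 1.0/x + c*np.mean(lam/(1.0 - x*lam))

def d1gN(x, lam, c):
    return -1.0/x**2 + c*np.mean(lam**2/(1.0 - x*lam)**2)

def d2gN(x, lam, c):
    return 2.0/x**3 + 2.0*c*np.mean(lam**3/(1.0 - x*lam)**3)

lam = np.array([1.0]*7 + [3.0]*3)    # spectrum of Sigma_N (n = 10 here)
c = 0.1                              # the ratio n/N
cN = brentq(d1gN, 0.05, 0.3, args=(lam, c))  # bracket inside (0, 1/3)
aN = gN(cN, lam, c)                          # finite N rightmost edge
sigmaN = (2.0/d2gN(cN, lam, c))**(1/3)       # scaling parameter of (cst R)
\end{verbatim}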
When dealing with regular cusp points, the situation is slightly more delicate. The reason for this is that if $\frak a=g(\frak c)$ is a regular cusp point, then $\frak c$ is now a zero of multiplicity two of $g'$. By applying Hurwitz's theorem to $g'_N$ as above, one would obtain two sequences of not necessarily real zeros of $g_N'$ converging towards $\frak c$. It is actually more convenient to apply Hurwitz's theorem to $g_N''$ instead, in order to get the following statement.
\begin{proposition}
\label{cusp cN}
Assume $\frak c\in D$ satisfies the regularity condition \eqref{RC} and moreover
\[
g'(\frak c)=0,\qquad g''(\frak c)=0,\qquad \text{(hence} \ g'''(\frak c)>0
\ \text{by Prop. }\ref{prop global C} \text{).}
\]
Then there exists a sequence $(\frak c_N)$, unique up to a finite number of terms, converging to $\frak c$ and such that, for every $N$ large enough, we have $\frak c_N\in D$ and
\[
\lim_{N\to\infty}g_N(\frak c_N)=g(\frak c),\qquad \lim_{N\to\infty}g_N'(\frak c_N)=0,\qquad g_N''(\frak c_N)=0,\qquad g_N'''(\frak c_N)>0.
\]
\end{proposition}
Notice that Proposition \ref{cusp cN} does not guarantee that
$g_N'(\frak c_N)=0$. Hence, a cusp point is not necessarily the limit of cusp points of the deterministic equivalents $\mu_N$. As we shall see in Section~\ref{section Pearcey}, the speed at which $g_N'(\frak c_N)$ goes to zero actually influences the local behavior around the cusp.
\begin{definition} \label{def cN}
Given a soft left edge, resp. right edge, resp. cusp point $\frak a$ which is regular, and thus $\frak a=g(\frak c)$ with $g'(\frak c)=0$ and $g''(\frak c)<0$, resp. $g''(\frak c)>0$, resp. $g''(\frak c)=0$ and $g'''(\frak c)>0$, the \textbf{sequence associated with $\frak a$} is the sequence $(\frak c_N)$ provided by Propositions \ref{edge cN} and \ref{cusp cN}.
\end{definition}
Equipped with Propositions \ref{edge cN} and \ref{cusp cN}, we are now in a position to state the results concerning the local asymptotics.
\subsection{The Airy kernel and Tracy-Widom fluctuations at a soft edge}
\label{section Airy}
Given a function $\K(x,y)$ from $\R\times\R$ to $\R$ satisfying appropriate conditions, one can consider its associated determinantal point process, which is a simple point process on $\R$ having as correlation functions the determinants $\det[\,\K(y_i,y_j)]$. More precisely, it is a probability distribution $\p$ over the configurations $(y_i)$ of real numbers (the particles), namely over discrete subsets of $\R$ which are locally finite, characterized in the following way: For every $k\geq 1$ and any test function $\Phi:\R^k\to\R$,
\[
\mathbb E\left[ \sum_{y_{i_1}\neq \, \cdots \,\neq \, y_{i_k}} \Phi(y_{i_1},\ldots,y_{i_k})\right]=\int_\R \cdots\int_{\R} \Phi(y_1,\ldots,y_k)\det\Big[\K(y_i,y_j)\Big]_{i,j=1}^k\d y_1\cdots \d y_k,
\]
where the sum runs over the $k$-tuples of pairwise distinct particles of the configuration $(y_i)$. Hence the correlations between the particles $y_i$ are completely encoded by the kernel $\K(x,y)$. In particular, the inclusion-exclusion principle yields a closed formula for the gap probabilities in terms of Fredholm determinants. Namely, for any interval $J\subset \R$, the probability that no particle lies in $J$ reads
\[
\p\Big( (y_i)\cap J =\emptyset\Big)= 1+\sum_{k=1}^\infty\frac{(-1)^k}{k!}\int_J \cdots\int_{J}\det\Big[\K(y_i,y_j)\Big]_{i,j=1}^k\d y_1\cdots \d y_k\, ,
\]
and the latter is the Fredholm determinant $\det(I-\K)_{L^2(J)}$ of the integral operator acting on $L^2(J)$ with kernel $\K(x,y)$, provided it makes sense. We refer to \cite{Hu,JoR} for further information on determinantal point processes.
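To see what the kernel encodes in the simplest case, take $k=1$ and $\Phi=\bs 1_J$ in the defining formula above: it then states that
\[
\mathbb E\,\#\big\{i:\ y_i\in J\big\}=\int_J \K(y,y)\,\d y ,
\]
so that $y\mapsto\K(y,y)$ is the mean density of particles.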
Consider the \textbf{Airy point process} $\p_{\Ai}$ which is defined as the determinantal point process on $\R$ associated with the Airy kernel
\eq
\label{Kai}
\K_{\Ai}(x,y)=\frac{{\rm Ai}(x){\rm Ai}'(y)-{\rm Ai}(y){\rm Ai}'(x)}{x-y},
\qe
where the Airy function
\[
\mathrm {Ai}(x)=\frac1\pi\int_0^{\infty} \cos\left(\frac{u^3}{3}+ux\right)\d u
\]
is a solution of the differential equation $ f''(x)=xf(x)$.
The configurations $(y_i)$ generated by the Airy point process a.s. involve an infinite number of particles but have a largest particle $y_{\max}$. The distribution of $y_{\max}$ is the \textbf{Tracy-Widom law} (see e.g. \cite[Section 2.2]{JoR}), and its distribution function reads, for every $s\in\R$,
\eq
\label{TW def}
\p_\Ai\big (y_{\max}\leq s \big )=\p_\Ai\Big ((y_{i})\cap (s,+\infty)=\emptyset \Big )=\det(I-\K_{\Ai})_{L^2(s,\infty)}.
\qe
Tracy and Widom \cite{TW1} established the famous representation
\[
\p_\Ai\big (y_{\max}\leq s \big )=\exp\left({-\int_{s}^\infty}(x-s)q(x)^2\d x\right),
\]
where $q$ is the Hastings-McLeod solution of the Painlev\'e II equation, namely the unique solution of $f''(x)=f(x)^3+xf(x)$ with boundary condition $f(x)\sim {\rm Ai}(x)$ as $ x\rightarrow+\infty$.\\
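In passing, let us mention that Fredholm determinants such as \eqref{TW def} can be evaluated numerically with high accuracy by a Nystr\"om-type discretization, in the spirit of Bornemann's quadrature method. Here is a minimal Python sketch (our own illustration; the truncation length and the number of quadrature nodes are arbitrary choices):
\begin{verbatim}
import numpy as np
from scipy.special import airy

def tw_cdf(s, m=80, L=16.0):
    # det(I - K_Ai) on L^2(s, infinity), truncated to [s, s+L] and
    # discretized by an m-point Gauss-Legendre rule (Nystrom method)
    t, w = np.polynomial.legendre.leggauss(m)
    x = s + 0.5 * L * (t + 1.0)            # quadrature nodes on [s, s+L]
    w = 0.5 * L * w                        # matching weights
    ai, aip = airy(x)[0], airy(x)[1]       # Ai and Ai' at the nodes
    num = np.outer(ai, aip) - np.outer(aip, ai)
    den = x[:, None] - x[None, :]
    np.fill_diagonal(den, 1.0)             # placeholder, fixed just below
    K = num / den
    np.fill_diagonal(K, aip**2 - x * ai**2)  # K_Ai(x, x), by l'Hopital
    A = np.sqrt(np.outer(w, w)) * K
    return np.linalg.det(np.eye(m) - A)

print(tw_cdf(-2.0))   # Tracy-Widom distribution function evaluated at -2
\end{verbatim}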
Recalling that $g_N$ has been introduced in \eqref{gN}, we are now in a position
to describe the eigenvalues local behavior around the regular soft edges. In the
three upcoming theorems, we denote by
$\tilde x_1\leq \cdots \leq \tilde x_{n}$ the ordered eigenvalues of
$\widetilde{\bv M}_N$. We also use the notational convention $\tilde x_0 = 0$
and $\tilde x_{n+1} = + \infty$.
\begin{theorem} \label{th:fluctuations-TW}
Let $\frak a$ be a right edge and assume it is regular. Writing
$\frak a = g(\frak c)$, let
$\phi(N) = \max\{ j : \lambda_j^{-1} > \frak c\}$. Then, almost surely,
\eq
\label{extr R}
\tilde x_{\phi(N)}\xrightarrow [N\to\infty]{} \frak a,\qquad
\liminf_{N\to\infty} \big( \tilde x_{\phi(N)+1} - \frak a\big)>0.
\qe
Moreover, let $(\frak c_N)_N$ be the sequence associated with $\frak a$ as in Definition \ref{def cN}. Set
\eq
\label{cst R}
\frak a_N=g_N(\frak c_N),\qquad \sigma_N=\left(\frac{2}{g_N''(\frak c_N)}\right)^{1/3},
\qe
so that $\frak a_N\to \frak a$, $\frak c_N\to \frak c$, and
$\sigma_N\to (2/g''(\frak c))^{1/3}>0$ as $N\to\infty$. Then, for every
$s\in\R$,
\eq
\label{TW right}
\lim_{N\rightarrow\infty} \p\Big(
N^{2/3}\sigma_N\big( \tilde x_{\phi(N)}-\frak a_N\big)\leq s\Big)
=\p_\Ai\big (y_{\max}\leq s \big ).
\qe
\end{theorem}
\begin{remark} Let us stress that the sequence $\phi(N)$ may be non-trivial even when considering the rightmost edge: As explained in Section \ref{BBP}, it is indeed possible that a certain number of eigenvalues (possibly growing with $N$) converge outside of the limiting support. Thus, if we assume the rightmost edge is regular, then Theorem \ref{th:fluctuations-TW} states that there exists an extremal eigenvalue $\tilde x_{\phi(N)}$ which actually converges to the rightmost edge and fluctuates according to the Tracy-Widom law.
\end{remark}
Let us comment on the history of this theorem. The Tracy-Widom fluctuations
have been first obtained by Johansson \cite{Jo} for the maximal eigenvalue when
$\bv\Sigma_N$ is the identity. Baik, Ben Arous and P\'ech\'e \cite{BBP} then
proved this still holds true when $\bv\Sigma_N$ is a finite rank perturbation
of the identity, provided the perturbation is small enough. Assuming a
condition which is equivalent to the regularity condition \eqref{RC} and that
the maximal eigenvalue converges towards the rightmost edge, El Karoui
\cite{EK} established the Tracy-Widom fluctuations for the maximal eigenvalue
for general $\bv\Sigma_N$'s assuming $\gamma\leq 1$, and Onatski \cite{On} got
rid of the last restriction. The statement on the existence of extremal
eigenvalues converging to each regular right edge is \cite[Theorem 2]{HHN} and
essentially relies on the exact separation results of Bai and Silverstein
\cite{BS1,BS2}; the definition of the sequence $\phi(N)$ indexing these
extremal eigenvalues relies on these results. Finally, the Tracy-Widom
fluctuations for the extremal eigenvalues associated with any regular right edge
are the content of \cite[Theorem 3-(b)]{HHN}.
We now provide a similar statement for the left soft edges.
\begin{theorem}
\label{th:fluctuations-TW2}
Let $\frak a$ be a left edge of the bulk.
If $\gamma > 1$ and $\frak a$ is the leftmost edge of the bulk, set
$\phi(N) = n-N+1$. Otherwise, assume that $\frak a>0$ is regular, write
$\frak a = g(\frak c)$ and set
$\phi(N) = \min \{ j : \lambda_j^{-1} < \frak c \}$. Then, almost
surely,
\eq
\label{extr L}
\tilde x_{\phi(N)}\xrightarrow [N\to\infty]{} \frak a,\qquad
\liminf_{N\to\infty} \big( \frak a-\tilde x_{\phi(N)-1} \big)>0.
\qe
Moreover, let $(\frak c_N)_N$ be the sequence associated with $\frak a$ as in
Definition~\ref{def cN}. Set
\eq
\label{cst L}
\frak a_N=g_N(\frak c_N),\qquad \sigma_N=\left(\frac{2}{-g_N''(\frak c_N)}\right)^{1/3},
\qe
so that $\frak a_N\to \frak a$, $\frak c_N\to \frak c$, and
$\sigma_N\to (-2/g''(\frak c))^{1/3}>0$ as $N\to\infty$. Then, for every
$s\in\R$,
\eq
\label{TW left}
\lim_{N\rightarrow\infty} \p\Big(
N^{2/3}\sigma_N\big( \frak a_N-\tilde x_{\phi(N)}\big)\leq s\Big)
=\p_\Ai\big (y_{\max}\leq s \big ).
\qe
\end{theorem}
Prior to this result, which is a combination of Theorem 2 and Theorem 3-(a) from \cite{HHN}, the Tracy-Widom fluctuations for the smallest random eigenvalue when $\bv\Sigma_N$ is the identity had been obtained by Borodin and Forrester \cite{BF}.
Let us also mention that when $\nu$ is the sum of two Dirac masses, a local uniform convergence to the Airy kernel (which is a weaker statement than the Tracy-Widom fluctuations) at every (regular) right and left soft edge follows from \cite{LW}, see also \cite{Mo1,Mo2}.
Finally, we state our last result, concerning the asymptotic independence of the Tracy-Widom fluctuations at a finite number of regular soft edges. For a more precise statement, we refer to \cite[Theorem 4]{HHN}.
\begin{theorem}
\label{th:independence}
Let $(\frak a_j)_{j\in J}$ be a finite collection of soft edges, and assume all
these edges are regular. For each $j\in J$, consider the rescaled eigenvalue
$N^{2/3}\sigma_{N,j}(\tilde x_{\phi_j(N)}-\frak a_{N,j})$ associated with the soft
edge $\frak a_j$ provided by \eqref{extr R}--\eqref{cst R} and \eqref{extr
L}--\eqref{cst L}. Then the random variables
$\{N^{2/3}\sigma_{N,j}(\tilde x_{\phi_j(N)}-\frak a_{N,j})\}_{j\in J}$ are
asymptotically independent as $N\to\infty$.
\end{theorem}
The asymptotic independence has been previously established for the smallest and largest eigenvalues when $\bv\Sigma_N$ is the identity by Basor, Chen and Zhang \cite{BCZ}.
\begin{remark}
\label{KY univ} The results presented in this survey rely on the fact that the entries of $\bv X_N$ are complex Gaussian random variables,
a key assumption in order to take advantage of the determinantal structure of the eigenvalues of the model under study. A recent work \cite{knowles-yin-2014-preprint} by
Knowles and Yin makes it possible to transfer the results of Theorems \ref{th:fluctuations-TW}, \ref{th:fluctuations-TW2} and \ref{th:independence}
to the case of complex, but not necessarily Gaussian, random variables. Indeed, by combining the local convergence to the limiting distribution established in \cite{knowles-yin-2014-preprint} together with Theorems \ref{th:fluctuations-TW}, \ref{th:fluctuations-TW2} and
\ref{th:independence}, one obtains Tracy-Widom fluctuations and asymptotic independence in this more general setting, provided that the entries of the matrix $\bv{X}_N$ fulfill some moment condition. Let us stress that the case of
real Gaussian random variables, of great interest in statistical applications, remains open (except for the largest eigenvalue, covered in \cite{lee-schnelli-preprint}).
\end{remark}
We now turn to the hard edge and the Bessel point process.
\subsection{The Bessel point process at the hard edge}
\label{section Bessel}
The \textbf{Bessel point process} $\p_\Be^{(\alpha)}$ of parameter $\alpha\in\mathbb Z$ is the determinantal point process on $\R_+$ associated with the kernel
\eq
\label{Bessel kernel}
\K_{\Be}^{(\alpha)}(x,y)=\frac{\sqrt{y}\, J_\alpha(\sqrt{x})J_\alpha'(\sqrt{y})-\sqrt{x}\, J_\alpha'(\sqrt{x})J_\alpha(\sqrt{y})}{2(x-y)} ,
\qe
where the Bessel function of the first kind $J_\alpha$ with parameter $\alpha$ is defined for $x\geq0$ by
\eq
\label{series rep Bessel}
J_\alpha(x) = \left( \frac x2\right)^\alpha \sum_{n=0}^\infty \frac{(-1)^n}{n!\, \Gamma(n+\alpha+1)} \left( \frac x2\right)^{2n}
\qe
and satisfies the differential equation $x^2 f''(x)+xf'(x)+(x^2-\alpha^2)f(x)=0$.
The configurations $(y_i)$ generated by the Bessel point process a.s. have an infinite number of particles $y_i$ but have a smallest particle $y_{\min}$. The law of $y_{\min}$ is characterized, for every $s>0$, by
\eq
\label{hard TW def}
\p_\Be^{(\alpha)}\big (y_{\min}\geq s \big )=\p_\Be^{(\alpha)}\Big ((y_{i})\cap (0,s)=\emptyset \Big )=\det(I-\K_{\Be}^{(\alpha)})_{L^2(0,s)}.
\qe
When $\alpha=0$, this reduces to an exponential law of parameter $1$, namely $\p_\Be^{(0)}\big (y_{\min}\geq s \big )=e^{-s}$, as observed by Edelman \cite{Ed}. In the general case, Tracy and Widom obtained in \cite{TW2} the representation
\[
\p_\Be^{(\alpha)}\big (y_{\min}\geq s \big )=\exp\left(-\frac{1}{4}\int_0^s(\log s -\log x) q(x)^2\d x\right),
\]
where $q$ is the solution of a differential equation which is reducible to a particular case of the Painlev\'e V equation (involving $\alpha$ in its parameters), with boundary condition $q(x)\sim J_\alpha(\sqrt x)$ as $x\to 0_+$.
Recalling that the $\lambda_j$'s are the eigenvalues of $\bv\Sigma_N$ and that their limiting distribution $\nu$ has compact support in $(0,\infty)$, we now provide our statement concerning the eigenvalues local behavior around the hard edge.
\begin{theorem}
\label{th Bessel}
Assume that $n=N+\alpha$ with $\alpha\in\mathbb Z$ independent of $N$ and set
\[
\sigma_N=-2g_N''(\infty)=\frac{4}{N}\sum_{j=1}^n\frac{1}{\lambda_j} \ ,\qquad \zeta_N=-\frac{4}{3}g'''_N(\infty)=\frac{8}{N}\sum_{j=1}^n\frac{1}{\lambda_j^2}\ .
\]
Thus $\sigma_N\to 4\int\lambda^{-1}\nu(\d \lambda)>0$ and $\zeta_N\to 8\int\lambda^{-2}\nu(\d \lambda)>0$ as $N\to\infty$.
Let $x_{\min}$ be the smallest random eigenvalue of $\bv M_N$. Then, for every $s>0$, we have
\eq
\label{Bessel first}
\lim_{N\to\infty}\p\Big(N^2\sigma_N\,x_{\min}\geq s\Big)=\p_\Be^{(\alpha)}\big (y_{\min}\geq s \big ).
\qe
Furthermore, we have the expansion as $N\to\infty$,
\eq
\label{Bessel next}
\p\Big(N^2\sigma_N\,x_{\min}\geq s\Big)=\p_\Be^{(\alpha)}\big (y_{\min}\geq s \big )-\frac1N\left(\frac{\alpha \zeta_N}{\sigma_N^2}\right) \, s\frac{\d }{\d s}\p_\Be^{(\alpha)}\big (y_{\min}\geq s \big )+O\left(\frac{1}{N^2}\right)\, .
\qe
\end{theorem}
When $\bv\Sigma_N$ is the identity, the convergence \eqref{Bessel first} has been established by Forrester \cite{F}. As for the next order term \eqref{Bessel next}, it has been obtained when $\bv\Sigma_N$ is the identity by Perret and Schehr \cite{PS} and Bornemann \cite{bornemann-2014-note}, motivated by a question raised by Edelman, Guionnet and P\'ech\'e in \cite{EGP}. The statement \eqref{Bessel first} has been first obtained in \cite{HHN}, while the stronger statement \eqref{Bessel next} is \cite[Theorem 6]{HHN2}.
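As a sanity check of \eqref{Bessel first} in the simplest case $\alpha=0$, where the limiting law of $N^2\sigma_N\,x_{\min}$ is the exponential law of parameter $1$, one may run a small Monte Carlo experiment. The following sketch is our own illustration, with arbitrary matrix size and number of trials:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def smallest_eigenvalues(lam, N, trials=200):
    # lam holds the eigenvalues of Sigma_N, with n = len(lam) = N (alpha = 0)
    n, s = len(lam), np.sqrt(lam)[:, None]
    out = np.empty(trials)
    for t in range(trials):
        X = (rng.standard_normal((n, N))
             + 1j * rng.standard_normal((n, N))) / np.sqrt(2)
        M = (s * X) @ (s * X).conj().T / N   # Sigma^(1/2) X X^* Sigma^(1/2) / N
        out[t] = np.linalg.eigvalsh(M)[0]    # smallest random eigenvalue
    return out

N = 100
lam = np.array([1.0] * 70 + [3.0] * 30)      # nu_N = 0.7*delta_1 + 0.3*delta_3
sigmaN = 4.0 * np.mean(1.0 / lam)            # (4/N) * sum 1/lambda_j since n = N
print(np.mean(N**2 * sigmaN * smallest_eigenvalues(lam, N)))  # close to 1
\end{verbatim}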
\subsection{Application to condition numbers}
\label{section condition}
In this subsection, we study the fluctuations of the ratio
$$
\kappa_N=\frac {x_{\max}}{x_{\min}}
$$
of the largest to the smallest random eigenvalue of $\bv M_N$. Notice that
if $n \geq N$, then $\kappa_N$ is the condition number of $\bv M_N$ while if
$n \leq N$, then $\kappa_N$ is the condition number of $\widetilde{\bv M}_N$.
The condition number is a central object of study in numerical linear
algebra~\cite{von-neumann-goldstine-47,von-neumann-goldstine-51}.
Using our previous results, we can obtain an asymptotic description for
$\kappa_N$.
Let us emphasize that the leftmost edge $\frak a$ of the support of $\rho$ is positive
if and only if $\gamma \neq 1$, see \cite[Proposition 3]{HHN2}.
\begin{proposition} \label{prop:condition-number}
Assume $\gamma \neq 1$. Denote by $\frak a$ the leftmost edge and by $\frak b$
the rightmost one. Assume that $\frak a,\frak b$ are regular, $x_{\min}\to\frak a$ and $x_{\max}\to \frak b$
a.s. as $N\to\infty$ (that $\frak a $ is regular and $x_{\min}\to\frak a$ a.s. is always true when $\gamma >1$). Write $\frak a=g(\frak c)$, $\frak b=g(\frak d)$, consider the sequences $(\frak c_N)$ and $(\frak d_N)$ associated with $\frak c$ and $\frak d$ respectively (see Definition \ref{def cN}) and set
\[
\frak a_N=g_N(\frak c_N)\ ,\qquad \sigma=\left(\frac{2}{-g''(\frak c)}\right)^{1/3},\qquad\qquad
\frak b_N=g_N(\frak d_N)\ ,\qquad \delta=\left(\frac{2}{g''(\frak d)}\right)^{1/3}\ .
\]
Then,
$$
\kappa_N \xrightarrow[N\to \infty]{a.s.} \frac{\frak b}{\frak a}\qquad \textrm{and}\qquad
N^{2/3} \left(
\kappa_N - \frac{\frak b_N}{\frak a_N}
\right) \xrightarrow[N\to \infty]{\mathcal D} \frac {X} {\delta\frak a} + \frac {\frak b Y}{\sigma \frak a^2}
$$
where $\xrightarrow[]{\mathcal D}$ stands for the convergence in distribution
and where $X$ and $Y$ are two independent random variables with the
Tracy-Widom distribution.
\end{proposition}
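The form of the limit in Proposition \ref{prop:condition-number} can be understood from the algebraic identity
\[
\kappa_N-\frac{\frak b_N}{\frak a_N}
=\frac{\frak a_N(x_{\max}-\frak b_N)+\frak b_N(\frak a_N-x_{\min})}{x_{\min}\,\frak a_N}\ :
\]
since $N^{2/3}\delta(x_{\max}-\frak b_N)$ and $N^{2/3}\sigma(\frak a_N-x_{\min})$ converge in distribution to the independent Tracy-Widom variables $X$ and $Y$ by Theorems \ref{th:fluctuations-TW}--\ref{th:independence}, while $x_{\min}\frak a_N\to\frak a^2$ a.s., multiplying the right-hand side by $N^{2/3}$ yields the limit $\frac{X}{\delta\frak a}+\frac{\frak b Y}{\sigma\frak a^2}$.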
We now handle the case where $\gamma=1$.
\begin{proposition}\label{prop:condition-number-gamma-1}
Assume $n=N+\alpha$, where $\alpha\in \mathbb{Z}$ is independent of $N$, and moreover $x_{\max}\to \frak b$ a.s. for some $\frak b>0$. Then,
$$
\frac 1{N^2} \kappa_N \xrightarrow[N\to \infty]{\mathcal D} \frac {4\frak b } {X} \left(\int \lambda^{-1}\nu(\d \lambda)\right)
$$
where $\p(X\geq s)=\p_\Be^{(\alpha)}\big (y_{\min}\geq s \big )$ for every $s>0$.
\end{proposition}
\begin{remark}
Interestingly, in the square case where $\gamma=1$, the fluctuations of the
largest eigenvalue $x_{\max}$ have no influence on the
fluctuations of $\kappa_N$ as these are imposed by the limiting distribution
of $x_{\min}$ and the a.s. limit $\frak b$ of $x_{\max}$.
\end{remark}
Finally, we turn to the eigenvalues local behavior near a cusp point.
\subsection{The Pearcey kernel at a cusp point}
\label{section Pearcey}
Given any $\tau\in\R$, following \cite{TW3} we introduce the Pearcey-like integral functions
\[
\phi(x)=\frac{1}{2i\pi}\oint_{\Sigma} e^{xz-\tau z^2/2+z^4/4} \d z,\qquad \psi(y)=\frac{1}{2i\pi}\int_{-i\infty}^{i\infty} e^{-yw+\tau w^2/2-w^4/4} \d w,
\]
where the contour $\Sigma$ consists of two rays going from $\pm e^{i\pi/4}\infty$ to zero and two rays going from zero to $\pm e^{-i\pi/4}\infty$. They satisfy the respective differential equations
\[
\phi'''(x)-\tau\phi'(x)+x\phi(x)=0, \qquad \psi'''(y)-\tau\psi'(y)-y\psi(y)=0.
\]
The \textbf{Pearcey point process} $\p_\Pe^{(\tau)}$ is the determinantal point process associated with the Pearcey kernel
\eq
\label{KPe}
\K_\Pe^{(\tau)}(x,y)=\frac{\phi''(x)\psi(y)-\phi'(x)\psi'(y)+\phi(x)\psi''(y) -\tau\psi(x)\psi(y)}{x-y}.
\qe
This process was first introduced by Br\'ezin and Hikami \cite{BH1,BH2} when $\tau=0$, and subsequent generalizations have been considered by Tracy and Widom \cite{TW3}.
The configurations $(y_i)$ generated by the Pearcey point process are a.s. infinite and have neither a largest nor a smallest particle. In this respect, the quantities of interest here are the gap probabilities of the Pearcey point process, defined for every $s<t$ by
\eq
\label{def gap Pearcey}
\p_{\Pe}^{(\tau)}\Big( (y_i)\cap [s,t]=\emptyset\Big)=\det(I-\K_{\Pe}^{(\tau)})_{L^2(s,t)} .
\qe
Seen as a function of $s$, $t$ and $\tau$, the logarithm of the right-hand side of \eqref{def gap Pearcey} is known to satisfy a system of PDEs, see \cite{TW3, BC12, ACvM12}.
In \cite[Theorem 5]{HHN2}, we prove the following statement.
\begin{theorem} \label{main cor}
Let $\frak a = g(\frak c)$ be a cusp point such that $\frak c \in D$, and
assume it is regular. Let $(\frak c_N)$ be the sequence associated with
$\frak a$ as in Definition~\ref{def cN}. Assume moreover
the following decay assumption holds true: There exists $\kappa\in\R$ such that
\eq
\label{speed asump}
\sqrt N\, g_N'(\frak c_N)\xrightarrow[N\to\infty]{} \kappa \ .
\qe
We set
\eq
\label{constants}
\frak a_N=g_N(\frak c_N),\qquad \sigma_N=\left(\frac{6}{g_N^{(3)}(\frak c_N)}\right)^{1/4},\qquad \tau=-\kappa\left(\frac{6}{ g^{(3)}(\frak c)}\right)^{1/2},
\qe
so that $\frak a_N\to\frak a$ and $\sigma_N\to\left(6/g'''(\frak c)\right)^{1/4}>0$ as $N\to\infty$. Then, for every $s>0$, we have
\eq
\label{gap conv}
\lim_{N\to\infty}\p\Big( \big(N^{3/4}\sigma_N(x_i-\frak a_N)\big)\cap[-s,s]=\emptyset\Big) = \p_{\Pe}^{(\tau)}\Big( (y_i)\cap [-s,s]=\emptyset\Big),
\qe
where the $x_i$'s are the random eigenvalues of $\bv M_N$.
\end{theorem}
\noindent This result has been obtained by Mo when $\bv\Sigma_N$ has exactly two distinct eigenvalues \cite{Mo2}.
As announced in Section \ref{csq RC}, the precise decay of $g_N'(\frak c_N)\to 0$ does influence the eigenvalues local behavior near a cusp (see Proposition \ref{cusp cN} and the discussion below it). Our assumption \eqref{speed asump} covers both the case where $\sqrt N g_N'(\frak c_N)\to0$, for which the limiting kernel is the kernel $\K_\Pe^{(0)}(x,y)$ introduced by Br\'ezin and Hikami, and the regime where $\sqrt N g_N'(\frak c_N)$ has a nonzero limit.
\begin{remark}{\bf (erosion of a valley)}
In the case where the limit $\kappa$ in \eqref{speed asump} is positive, the deterministic equivalent measure $\mu_N$ will not feature a cusp but rather a valley that becomes deeper
as $N\to\infty$, see the thin curve in Figure \ref{fig:zoomcusp}. The density of $\mu_N$ remains positive near the cusp, and the condition
$$
g'_N(\frak c_N) \sim \frac{\kappa}{\sqrt{N}}
$$
should be thought of as a condition on the speed of erosion of the valley.
\end{remark}
\begin{remark}{\bf (moving cliffs)}
In the case where $\kappa<0$ in \eqref{speed asump}, $g'_N(\frak c_N)$ is
negative for every $N$ large enough. In particular, there exists a small
$N$-dependent neighborhood of $\frak c_N$ whose image by $g_N$ is outside the support of
$\mu_N$: There is a small hole in the support of $\mu_N$, but the two connected
components move towards one another (moving cliffs), see the dotted curve in
Figure \ref{fig:zoomcusp}. In this case, the condition
$$
g'_N(\frak c_N) \sim \frac{\kappa}{\sqrt{N}}
$$ can also be interpreted as prescribing the speed at which the
cliffs approach one another.
\end{remark}
\begin{figure}[h]
\centering
\includegraphics[width=0.7\linewidth]{cvgcusp.pdf}
\caption{Zoom of the density of $\mu_N$ near the cusp point
$\frak a$. The thick curve is the density of $\mu$ in the framework
of Figure~\ref{fig:cusp}. The thin curve (resp.~the dotted curve) is the
density of $\mu_N$ when $\sqrt{N} g'_N(\frak c_N) > 0$
(resp.~$\sqrt{N} g'_N(\frak c_N) < 0$).}
\label{fig:zoomcusp}
\end{figure}
\begin{remark}{\bf (slow decay)}
The slow decay setting where $$\sqrt N g_N'(\frak c_N)\to \pm\infty$$ is not covered by our results. In this case, we do not expect the Pearcey point process to arise anymore, and refer to Section \ref{OP} for further discussion.
\end{remark}
\section{Sketches of the proofs}\label{sec:proofs}
In this section, we provide an outline for the proofs of the results presented in Section \ref{section local}.
\subsection{The random eigenvalues of $\bv M_N$ form a determinantal point process }
The key input on which all our proofs are based is that, when the elements
of $\bv X_N$ are complex Gaussian (Assumption~\ref{ass:gauss}), the configuration
of the random eigenvalues $x_i$ of $\bv M_N$ forms a determinantal point
process with an explicit kernel. More precisely, Baik, Ben Arous and P\'ech\'e
provided in \cite{BBP} a formula for this kernel, which they credit to
Johansson. It is given by the following double complex integral
\eq
\label{KN}
\K_N(x,y)=\frac{ N}{(2i\pi)^2}\oint_{\Gamma}\d z\oint_{\Theta} \d w\,\frac{1}{w-z} e^{- Nx(z-\frak q) +Ny(w-\frak q)}\left(\frac{z}{w}\right)^{ N}\prod_{j=1}^n\left(\frac{w-\lambda_j^{-1}}{z-\lambda_j^{-1}}\right),
\qe
where $\frak q\in\R$ is a free parameter (see \cite[Remark 4.3]{HHN}) and we recall that the $\lambda_j$'s are the eigenvalues of $\bv\Sigma_N$.
The contours $\Gamma$ and $\Theta$ are disjoint and closed, both oriented counterclockwise, such that $\Gamma$ encloses all the $\lambda_j^{-1}$'s whereas $\Theta$ encloses the origin.
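To see why $\frak q$ is indeed a free parameter, note that changing $\frak q$ into $\frak q'$ in \eqref{KN} amounts to replacing $\K_N(x,y)$ by
\[
e^{N(\frak q'-\frak q)(x-y)}\,\K_N(x,y)=\frac{h(x)}{h(y)}\,\K_N(x,y),\qquad h(x)=e^{N(\frak q'-\frak q)x},
\]
and such a conjugation leaves all the determinants $\det[\K_N(y_i,y_j)]$, hence the point process itself, unchanged.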
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\linewidth]{paths.pdf}
\caption{The contours of integration}
\label{fig:paths}
\end{figure}
\begin{remark} The main ingredient to obtain this determinantal representation is the Harish-Chandra-Itzykson-Zuber integral formula, which allows one to write a particular integral over the unitary group in terms of determinants, see \cite[Section 2.1]{BBP}. The analogue of this integral formula does not seem to exist for correlated Wishart matrices with real or quaternionic entries, and thus the determinantal structure seems only available in the complex setting.
\end{remark}
If we consider a random configuration of the form
\[
\big(N^\beta\sigma_N(x_i-\frak a_N)\big)\ ,
\]
then a change of variables yields that it is a determinantal point process with kernel
\eq
\label{res KN}
\frac{1}{N^\beta\sigma_N}\K_N\left( \frak a_N+\frac{x}{N^\beta\sigma_N}, \frak a_N+\frac{y}{N^\beta\sigma_N}\right) ,
\qe
where $\K_N$ is as in \eqref{KN}. Hence, the study of the eigenvalues local behavior boils down to the asymptotic analysis as $N\to\infty$ of kernels of the form \eqref{res KN} with different choices for the scaling parameters $\beta,\sigma_N,\frak a_N$.
\subsection{Modes of convergence}
In order to prove the convergence \eqref{Bessel first} at the hard edge, it is enough to establish a local uniform convergence on $\R_+\times\R_+$ of the kernel \eqref{res KN} to the Bessel kernel $\K_{\Be}^{(\alpha)}(x,y)$, after choosing the scaling parameters appropriately. Similarly, the local uniform convergence on $\R\times\R$ to the Pearcey kernel $\K_{\Pe}^{(\tau)}(x,y)$ yields the convergence \eqref{gap conv} of the gap probabilities around a cusp. The convergences \eqref{TW right} and \eqref{TW left} to the Tracy-Widom law however require a stronger mode of convergence (such as trace-class norm convergence, or Hilbert-Schmidt norm plus trace convergence, of the associated operators acting on $L^2(s,\infty)$, for every $s\in\R$; we refer to \cite[Section 4.2]{HHN} for further information). This essentially amounts to obtaining a local uniform convergence on $(s,+\infty)\times (s,+\infty)$ plus tail estimates for $\K_N(x,y)$.
From now on, we shall disregard these convergence issues and provide heuristics as to why the Airy kernel, the Bessel kernel and the Pearcey kernel
should appear in different scaling limits.
\subsection{Towards the Airy kernel}
Here we provide a heuristic for the convergence to the Airy kernel. The gaps to be filled in order to make this sketch of a proof mathematically rigorous are detailed in \cite{HHN}; this heuristic may actually serve as a roadmap for the quite lengthy and technical proof provided there.
Since we are dealing with contour integrals, it is more convenient to use the following alternative representation of the Airy kernel \eqref{Kai},
\eq
\label{Airy cont}
\K_\Ai(x,y)=
\frac{1}{(2i\pi)^2}\int_{\Xi}\d z\int_{\Xi'} \d w \,\frac{1}{w-z} e^{-xz+yw+z^3/3-w^3/3},
\qe
which is based on the contour integral formula for the Airy function (see e.g.
the proof of \cite[Lemma 4.15]{HHN}). The contours $\Xi$ and $\Xi'$ are
disjoint unbounded contours: $\Xi$ goes from $e^{i\pi/3}\infty$ to
$e^{-i\pi/3}\infty$ whereas $\Xi'$ goes from $e^{-2i\pi/3}\infty$ to
$e^{2i\pi/3}\infty$, as shown on Figure~\ref{fig:paths-Airy}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.4\linewidth]{airy.pdf}
\caption{The paths of integration for the Airy kernel}
\label{fig:paths-Airy}
\end{figure}
Consider the scaling parameters associated with a soft edge provided in Theorem \ref{th:fluctuations-TW}; we thus focus on the right edge setting, but the situation for a left edge is similar. More precisely, by using the formula \eqref{KN} where we take $\frak q=\frak c_N$, we investigate
\begin{align}
\label{KN airy int}
& \frac{1}{N^{2/3}\sigma_N}\K_N\left(\frak a_N+ \frac{x}{N^{2/3}\sigma_N},\frak a_N+\frac{y}{N^{2/3}\sigma_N}\right)\\
& \quad = \quad\frac{N^{1/3}}{(2i\pi)^2\sigma_N}\oint_\Gamma\d z\oint_\Theta \d w\, \frac{1}{w-z} e^{-N^{1/3}x(z-\frak c_N)/\sigma_N+N^{1/3}y(w-\frak c_N)/\sigma_N} e^{Nf_N(z)-Nf_N(w)} \nonumber,
\end{align}
where we introduced the map
\eq
\label{fN}
f_N(z)=-\frak a_{N}(z-\frak c_N)+\log(z)-\frac{1}{N}\sum_{j=1}^n\log(1-\lambda_j z).
\qe
After performing the change of variables $z\mapsto \frak c_N+\sigma_N z/N^{1/3}$ and $w\mapsto \frak c_N+\sigma_N w/N^{1/3}$, the right-hand side of \eqref{KN airy int} becomes
\begin{multline}
\label{int change airy}
\frac{1}{(2i\pi)^2}\oint_{\phi_N(\Gamma)}\d z\oint_{\phi_N(\Theta)} \d w\, \frac{1}{w-z} e^{-xz+yw+N f_N\big(\frak c_N + \sigma_N\frac{z}{N^{1/3}}\big)-N f_N\big(\frak c_N + \sigma_N\frac{w}{N^{1/3}}\big)},
\end{multline}
where we set for convenience $\phi_N(z)= N^{1/3}(z-\frak c_N)/\sigma_N$.
Next, recalling that $g_N$ was introduced in \eqref{gN}, the crucial observation that
\eq
\label{fgN0}
f_N'(z)= g_N(z)-\frak a_N
\qe
allows one to infer the local behavior of $f_N$ around $\frak c_N$. More precisely, since by definition of the scaling parameters we have
\[
g_N(\frak c_N)=\frak a_N,\qquad g_N'(\frak c_N)=0,\qquad g_N''(\frak c_N)\xrightarrow[N\to\infty]{}g''(\frak c)>0,
\]
a Taylor expansion for $f_N$ around $\frak c_N$ yields the approximation
\eq
\label{preapprox airy}
N\Big (f_N(\frak c_N + \sigma_N\frac{z}{N^{1/3}})-f_N(\frak c_N)\Big) \; \simeq \; \frac{1}{3}z^3\ ,
\qe
where the constant $1/3$ comes from the definition of $\sigma_N$: indeed, \eqref{fgN0} yields $f_N'(\frak c_N)=0$ and $f_N''(\frak c_N)=g_N'(\frak c_N)=0$, so that the third order term of the Taylor expansion dominates and contributes $N\,\frac{g_N''(\frak c_N)}{6}\big(\sigma_N z N^{-1/3}\big)^3=\frac{g_N''(\frak c_N)\sigma_N^3}{6}\,z^3=\frac{z^3}{3}$ by \eqref{cst R}. In conclusion, after plugging \eqref{preapprox airy} into \eqref{int change airy}, we obtain the approximation
\begin{multline}
\label{KN airy int 2}
\frac{1}{N^{2/3}\sigma_N}\K_N\left(\frak a_N+ \frac{x}{N^{2/3}\sigma_N},\frak a_N+\frac{y}{N^{2/3}\sigma_N}\right)\\
\simeq \frac{1}{(2i\pi)^2}\oint_{\phi_N(\Gamma)}\d z\oint_{\phi_N(\Theta)} \d w\, \frac{1}{w-z} e^{-xz+yw+ z^3/3-w^3/3},
\end{multline}
and we can almost read the Airy kernel \eqref{Airy cont}, up to contour deformations.
To frame the previous heuristic into a rigorous mathematical setting, a few technical points should be addressed, since of course the approximation \eqref{preapprox airy} is only valid when $|z|$ is not too large, while the contours appearing in the Airy kernel are unbounded. In this respect, the standard move is to split the contours $\Gamma$ and $\Theta$ into different parts and then to deform each part in an appropriate way.
In a neighborhood of $\frak c_N$, after simple transformations, one chooses $\Gamma$ and $\Theta$ to match the contours of the Airy kernel there, and then justifies rigorously the approximation \eqref{KN airy int 2} after restricting $z,w$ to that neighborhood. This can be done by quantifying the approximation \eqref{preapprox airy} and then performing tedious but rather simple computations.
Then, outside of this neighborhood, one proves that the remaining parts of the integrals do not contribute in the large $N$ limit. In the present setting of a general matrix $\bv \Sigma_N$, this is the hard part of the proof. To do so, one establishes the existence of admissible deformations of the contours $\Gamma$ and $\Theta$ which complete the Airy contours truncated to a neighborhood of $\frak c_N$, and on which the term $\exp\{Nf_N(z)-Nf_N(w)\}$ provides exponential decay along the remaining parts. This can be done by looking for the so-called steepest descent/ascent contours (i.e. contours on which $\re f_N$ is decreasing/increasing), and this was the strategy used by Baik, Ben Arous, P\'ech\'e \cite{BBP} and El Karoui \cite{EK} when dealing with the rightmost edge. When considering an arbitrary right or left soft edge, following this strategy would require one to consider many sub-cases and to redo most of the computations in each case. In \cite{HHN}, we instead developed a unified (abstract) method providing the existence of appropriate contours by means of the maximum principle for subharmonic functions.
For the reader interested in having a look at the proofs of \cite{HHN}, let us mention that it turns out to be more convenient to work at a scale where the contours $\Gamma,\Theta$ live in a bounded domain, and this is the reason why we did not perform there the changes of variables $z\mapsto \frak c_N+\sigma_N z/N^{1/3}$ and $w\mapsto \frak c_N+\sigma_N w/N^{1/3}$ as we did in the present heuristic.\\
\subsection{Towards the Pearcey kernel}
Now we turn to the heuristics for the Pearcey kernel and refer to \cite{HHN2} for a rigorous proof. The setting is essentially the same as in the Airy case, except that now $\frak c_N$ is a simple zero of $g_N''$ instead of $g_N'$, and $g_N'(\frak c_N)\to 0$, which entails a different behavior of the map $f_N$ near $\frak c_N$.
We start with the alternative representation for the Pearcey kernel \eqref{KPe},
\eq
\label{Pearcey cont}
\K_{\Pe}^{(\tau)}(x,y)=
\frac{1}{(2i\pi)^2}\int_{\Xi}\d z\int_{-i\infty}^{\, i\infty} \d w \,\frac{1}{w-z} e^{-xz-\frac{\tau z^2}2+\frac{z^4}4+yw+\frac{\tau w^2}2-\frac{w^4}4},
\qe
where the contour $\Xi$ is disjoint from the imaginary axis and has two components: one goes from $e^{i\pi/4}\infty$ to $e^{-i\pi/4}\infty$, whereas the other goes from $e^{-3i\pi/4}\infty$ to $e^{3i\pi/4}\infty$. See \cite{TW3} for a proof (and also \cite{BH1} when $\tau=0$). Notice also the symmetry $\K_{\Pe}^{(\tau)}(x,y)=\K_{\Pe}^{(\tau)}(-x,-y)$, which follows from the change of variables $z,w\mapsto -z,-w$.
Consider the scaling parameters associated with a regular cusp point provided in Theorem~\ref{main cor}. By using the formula \eqref{KN} where we choose $\frak q=\frak c_N$, we now consider
\begin{align}
\label{KN Pearcey int}
& \frac{1}{N^{3/4}\sigma_N}\K_N\left(\frak a_N+ \frac{x}{N^{3/4}\sigma_N},\frak a_N+\frac{y}{N^{3/4}\sigma_N}\right)\\
& \quad = \quad\frac{N^{1/4}}{(2i\pi)^2\sigma_N}\oint_\Gamma\d z\oint_\Theta \d w\, \frac{1}{w-z} e^{-N^{1/4}x\frac{(z-\frak c_N)}{\sigma_N}+N^{1/4}y\frac{(w-\frak c_N)}{\sigma_N}} e^{Nf_N(z)-Nf_N(w)} \nonumber,
\end{align} where the map $f_N$ is the same as in \eqref{fN}. After the change of variables $z\mapsto \frak c_N+\sigma_N z/N^{1/4}$ and $w\mapsto \frak c_N+\sigma_N w/N^{1/4}$, the right-hand side of \eqref{KN Pearcey int} reads
\begin{multline}
\label{int change Pearcey}
\frac{1}{(2i\pi)^2}\oint_{\phi_N(\Gamma)}\d z\oint_{\phi_N(\Theta)} \d w\, \frac{1}{w-z} e^{-xz+yw+N f_N\big(\frak c_N + \sigma_N\frac{z}{N^{1/4}}\big)-N f_N\big(\frak c_N + \sigma_N\frac{w}{N^{1/4}}\big)},
\end{multline}
where we introduced for convenience $\phi_N(z)= N^{1/4}(z-\frak c_N)/\sigma_N$.
In this setting, the definition of the scaling parameters yields
\[
g_N(\frak c_N)=\frak a_N,\qquad \sqrt N g'_N(\frak c_N)\xrightarrow[N\to\infty]{}\kappa, \qquad g_N''(\frak c_N)=0,\qquad g_N'''(\frak c_N)\xrightarrow[N\to\infty]{}g'''(\frak c)>0.
\]
Recalling the identity \eqref{fgN0} and the definition \eqref{constants} of $\tau$, a Taylor expansion around $\frak c_N$ then yields the approximation
\eq
\label{preapprox Pearcey}
N\Big (f_N(\frak c_N + \sigma_N\frac{z}{N^{1/4}})-f_N(\frak c_N)\Big) \; \simeq \; -\frac{\tau}{2}z^2+\frac{1}{4}z^4\ ,
\qe
where the constant $1/4$ comes from the definition of $\sigma_N$, see \eqref{constants}. Thus, by plugging \eqref{preapprox Pearcey} into \eqref{int change Pearcey}, we obtain the approximation
\begin{multline}
\label{KN Pearcey int 2}
\frac{1}{N^{3/4}\sigma_N}\K_N\left(\frak a_N+ \frac{x}{N^{3/4}\sigma_N},\frak a_N+\frac{y}{N^{3/4}\sigma_N}\right)\\
\simeq \frac{1}{(2i\pi)^2}\oint_{\phi_N(\Gamma)}\d z\oint_{\phi_N(\Theta)} \d w\, \frac{1}{w-z} e^{-xz-\frac{\tau z^2}2+\frac{z^4}4+yw+\frac{\tau w^2}2- \frac{w^4}4},
\end{multline}
and we can almost see the Pearcey kernel \eqref{Pearcey cont}, up to contour deformations.
To make this approximation rigorous, the method is the same as for the Airy kernel. Let us mention that the abstract argument mentioned previously for the existence of appropriate contour deformations also applies in this setting.
\subsection{Towards the Bessel kernel}
Finally, we provide heuristics for the appearance of the Bessel kernel and refer to \cite{HHN2} for a rigorous proof. The main input here is that, according to Section \ref{section global}, the critical point $\frak c$ associated with the hard edge is now located at infinity.
The first step is to write the Bessel kernel \eqref{Bessel kernel} as the double contour integral,
\eq
\label{Bessel cont}
\K_{\Be}^{(\alpha)}(x,y)=
\frac{1}{(2i\pi)^2}\left(\frac{y}{x}\right)^{\alpha/2}\oint_{|z|=\, r} \frac{\d z}{z}\oint _{|w|=\, R}\frac{\d w}{w} \,\frac{1}{z-w}\left(\frac{z}{w}\right)^\alpha e^{-\frac xz+\frac z4+\frac yw-\frac w4},
\qe
where $0<r<R$, and which is provided in \cite[Lemma 6.2]{HHN}. The contours of integration are circles oriented counterclockwise. Let us stress that this formula is only available when $\alpha\in\mathbb Z$, since otherwise the term $(z/w)^\alpha$ in the integrand would not make sense on the whole of the integration contours.
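Let us also note that \eqref{Bessel cont} is straightforward to evaluate numerically: parametrizing both circles, the periodic trapezoid rule applies and converges quickly. The following sketch (again ours; the radii and the mesh are arbitrary, subject to $0<r<R$) also illustrates that the value does not depend on the choice of radii:
\begin{verbatim}
import numpy as np

def bessel_kernel(x, y, alpha=1, r=0.5, R=1.5, m=400):
    # z runs over |z| = r and w over |w| = R, both counterclockwise;
    # with z = r*exp(i t) one has dz/z = i dt, and similarly for w,
    # so the prefactor 1/(2 i pi)^2 * i^2 becomes 1/(4 pi^2).
    t = np.linspace(0.0, 2*np.pi, m, endpoint=False)
    z = (r*np.exp(1j*t))[:, None]
    w = (R*np.exp(1j*t))[None, :]
    f = (z/w)**alpha * np.exp(-x/z + z/4 + y/w - w/4) / (z - w)
    return (y/x)**(alpha/2) * f.sum() * (2*np.pi/m)**2 / (4*np.pi**2)

print(bessel_kernel(1.0, 2.0))                # some value ...
print(bessel_kernel(1.0, 2.0, r=1.0, R=2.5))  # ... unchanged for other radii
\end{verbatim}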
Setting $\sigma_N$ as in Theorem \ref{th Bessel} and using the formula \eqref{KN} where we choose $\frak q=0$, we now consider
\begin{align}
\label{KN Bessel int}
& \frac{1}{N^{2}\sigma_N}\K_N\left(\frac{x}{N^{2}\sigma_N},\frac{y}{N^{2}\sigma_N}\right)\\
& \quad = \; \frac{1}{(2i\pi)^2N\sigma_N}\oint_{\Gamma} \d z \oint_{\Theta} \d w\,\frac{1}{w-z} \left(\frac{z}{w}\right)^{N}e^{- \frac{zx}{N\sigma_N}+\frac{wy}{N\sigma_N}}\prod_{j=1}^n\frac{w-\lambda_j^{-1}}{z-\lambda_j^{-1}} \nonumber.
\end{align}
Having in mind that the critical point is located at infinity, we perform the change of variables $z\mapsto N\sigma_N/z$ and $w\mapsto N\sigma_N/w$, so that the right-hand side of \eqref{KN Bessel int} reads
\begin{multline}
\label{int change Bessel}
\frac{1}{(2i\pi)^2}\oint_{\phi_N(\Gamma)}\frac{\d z}z
\oint_{\phi_N(\Theta)} \frac{\d w}w\, \frac{1}{z-w}
\left(\frac{z}{w}\right)^\alpha
e^{-\frac xz+\frac yw-N G_N(z)+ NG_N(w)},
\end{multline}
where we introduced the maps
\[
G_N(z)=\frac{1}{N}\sum_{j=1}^n\log\left(\frac{z}{N\sigma_N} -\lambda_j\right)
\]
and $\phi_N(z)= N\sigma_N/z$. We emphasize that, during the previous step, we used that $n=N+\alpha$ and witnessed a cancellation leading to the term $(z/w)^\alpha$, which does not depend on $N$.
Now, a Taylor expansion of $G_N$ around zero yields the approximation
\eq
\label{preapprox Bessel}
N\Big (G_N(z)-G_N(0)\Big) \; \simeq \; - \frac{z}{4}\ .
\qe
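For the reader's convenience, let us indicate the one-line computation behind \eqref{preapprox Bessel} (a sketch; we do not reproduce here the exact definition of $\sigma_N$ from Theorem \ref{th Bessel}). Since $G_N'(0)=-\frac{1}{N^2\sigma_N}\sum_{j=1}^n\lambda_j^{-1}$, a first order Taylor expansion yields
\[
N\Big(G_N(z)-G_N(0)\Big) \; = \; -\Big(\frac{1}{N\sigma_N}\sum_{j=1}^n\lambda_j^{-1}\Big)\, z\;+\;O\Big(\frac{z^2}{N\sigma_N^2}\Big)
\]
(the error term being uniform as long as the $\lambda_j$'s stay bounded away from zero), and the scaling parameter $\sigma_N$ of Theorem \ref{th Bessel} is normalized precisely so that the coefficient between the parentheses converges to $1/4$.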
Thus, by plugging \eqref{preapprox Bessel} into \eqref{int change Bessel}, we obtain the approximation
\begin{multline}
\label{KN Bessel int 2}
\frac{1}{N^{2}\sigma_N}\K_N\left(\frac{x}{N^{2}\sigma_N},\frac{y}{N^{2}\sigma_N}\right)\\
\simeq \frac{1}{(2i\pi)^2}\oint_{\phi_N(\Gamma)}\frac{\d z}z\oint_{\phi_N(\Theta)} \frac{\d w}w\, \frac{1}{z-w} \left(\frac{z}{w}\right)^\alpha e^{-\frac xz+\frac yw+\frac z4-\frac w4},
\end{multline}
and we can almost see the Bessel kernel \eqref{Bessel cont}, up to contour deformations and the prefactor $(y/x)^{\alpha/2}$. Finally, in order to deal with that prefactor, one considers the operator $\E$ of multiplication by $x^{\alpha/2}$ acting on $L^2(0,s)$, and then uses the fact that replacing the Bessel kernel $\K_\Be^{(\alpha)}(x,y)$ by the kernel of the operator $\E\K_\Be^{(\alpha)}\E^{-1}$, which is \eqref{Bessel cont} without the prefactor $(y/x)^{\alpha/2}$, leaves the Fredholm determinant $\det(I-\K_\Be^{(\alpha)})_{L^2(0,s)}$ invariant.
To make this heuristic rigorous, the method is far less demanding than in the
setting of the Airy or the Pearcey kernels. Indeed, in the present setting one
can legitimately deform the contours $\Gamma$ and $\Theta$ in such a way that
$\phi_N(\Gamma)$ and $\phi_N(\Theta)$ match the integration contours for the
Bessel kernel \eqref{Bessel cont}. After making this deformation,
a simple Taylor expansion of the map $G_N$ around zero will be enough to
establish the convergence towards the Bessel kernel and therefore to
obtain~\eqref{Bessel first}.
By pushing the Taylor expansion \eqref{preapprox Bessel} one step further,
and combining it with an identity involving the resolvent of the Bessel kernel established by Tracy and Widom \cite{TW2}, one can also obtain the more accurate estimate~\eqref{Bessel next}. We refer the reader to \cite{HHN2} for further information.
\section{Open questions}
\label{OP}
The results presented here naturally lead to a number of open questions, which we list below.
\begin{enumerate}
\item {\bf At the edge of the definition domain and exotic local behaviors}. The results on the local behavior of the eigenvalues presented in this survey only concern edges or cusp points $\frak a$ which read $\frak a=g(\frak c)$ with $\frak c\in D$. If we focus on the rightmost edge for the sake of simplicity, then Proposition \ref{prop global R} states that this edge may actually belong to $g(\partial D)$ (notice that this cannot happen if the limiting spectral distribution $\nu$ of $\bv\Sigma_N$ is a finite combination of Dirac measures). In this case, the square root behavior of the density around this edge is no longer guaranteed, and the laws describing the fluctuations of the eigenvalues near such an edge seem completely unknown and a priori different from the Tracy-Widom distribution. We believe the fluctuations will actually depend on $\nu$, and hence lie outside of the random matrix universality class. Quite interestingly, the same phenomenon arises in the study of the additive deformation of a GUE random matrix \cite{capitaine-peche-2014-preprint} and of random Gelfand-Tsetlin patterns \cite{DM1,DM2}.
\item {\bf Alternative regime at a cusp point I}. In the context of Theorem \ref{main cor}, our speed assumption \eqref{speed asump} does not cover the following case
$$
\sqrt N\, g_N'(\frak c_N)\xrightarrow[N\to\infty]{} +\infty \ .
$$
This condition corresponds to the situation where the density of the deterministic equivalent $\mu_N$ is positive in a neighborhood of $\frak c_N$.
It essentially states that the bulk of $\mu_N$ degenerates into a cusp around $g(\frak c)$ quite slowly, and we do not expect to witness
Pearcey-like fluctuations around $g_N(\frak c_N)$ anymore. We believe instead that the sine kernel will arise at the scale $\sqrt{N/g_N'(\frak c_N)}$, which lies strictly between $N^{1/2}$ and $N^{3/4}$.
\item {\bf Alternative regime at a cusp point II}. Another case that is not covered by our assumption \eqref{speed asump} is when
$$
\sqrt N\, g_N'(\frak c_N)\xrightarrow[N\to\infty]{} -\infty \ .
$$
In this case, $\frak c_N$ lies outside the support of $\mu_N$ and $g'_N$ has two distinct real zeroes near $\frak c_N$, say $\frak c_{N,1}$ and $\frak c_{N,2}$.
Hence, for $N$ sufficiently large, $g_N(\frak c_{N,1})$ and $g_N(\frak c_{N,2})$ both correspond to edges of the support of $\mu_N$ which converge towards the cusp point $g(\frak c)$. The previous condition entails that this convergence happens at a rather slow rate. In this case, we do not expect to observe the Pearcey kernel around $g_N(\frak c_N)$ either, because of the absence of particles, but we believe a local analysis around the edge $g_N(\frak c_{N,1})$ or $g_N(\frak c_{N,2})$ may uncover the Airy kernel at an intermediate scale.
\item {\bf Study of the fluctuations at the hard edge in more general cases}.
The hard edge fluctuations were described here when
$n = N + \alpha$ with a fixed $\alpha\in \mathbb Z$, but the hard edge is always present as soon as $n/N\to1$. Thus it would be of interest to describe the hard edge fluctuations in more
general situations, for example when $\alpha=\alpha(n) \to +\infty$ so that $n/N \to 1$. In this case one would expect Tracy-Widom fluctuations near the leftmost edge $\frak a_N$ of $\mu_N$, the latter being positive but converging to zero as $N\to \infty$.
\item {\bf Non-Gaussian entries}.
All the fluctuation results presented here rely on the fact that the entries
of matrix $\bv X_N$ are complex Gaussian. It is however of major interest, for
applications and for the general theory as well, to study the universality of
such results for non-Gaussian complex random variables. As explained in Remark
\ref{KY univ}, the Tracy-Widom fluctuations for the extremal eigenvalues
associated with any regular soft edges are now established in this general
setting (under some moment conditions for the entries), by combining theorems
\ref{th:fluctuations-TW} and \ref{th:fluctuations-TW2} with Knowles and Yin's
recent preprint \cite{knowles-yin-2014-preprint} (see also \cite{bao-et-al}).
However, natural related questions remain open: Would it be
possible to describe the fluctuations at
\begin{itemize}
\item[(a)] the hard edge, for general complex entries?
\item[(b)] a (regular) cusp point, for general complex entries?
\end{itemize}
Another universality class of interest is the case where the matrix $\bv X_N$ has real entries. In this case, the techniques based on the determinantal structure of the
eigenvalues are no longer available. Lee and Schnelli \cite{lee-schnelli-preprint} recently succeeded in establishing GOE Tracy-Widom fluctuations of the largest eigenvalue when the entries are real Gaussian or simply real (with subexponential decay), under the assumption that the covariance matrix $\bv \Sigma_N$ is diagonal. The techniques developed by Knowles and Yin \cite{knowles-yin-2014-preprint} make it possible to relax the diagonal assumption on the covariance matrix
$\bv \Sigma_N$. A number of questions remain open: Would it be possible to describe the fluctuations at
\begin{itemize}
\item[(c)] any (regular) soft edge, when the entries are real Gaussian?
\item[(d)] the hard edge, when the entries are real (Gaussian or not)?
\item[(e)] a (regular) cusp point, when the entries are real (Gaussian or not)?
\end{itemize}
\end{enumerate}
\begin{document}
\author{Shevlyakov Artem}
\title{On disjunctions of equations over semigroups}
\maketitle
\abstract{A semigroup $S$ is called an equational domain (e.d.) if any finite union of algebraic sets over $S$ is algebraic. For a semigroup $S$ with a finite two-sided ideal we find necessary and sufficient conditions for $S$ to be an e.d.}
\section{Introduction}
Following~\cite{uniTh_I,uniTh_II}, one can define the notions of an equation and of an algebraic set for any algebraic structure $\A$ (group, Lie algebra, semigroup, etc.). This allows one to develop algebraic geometry over every algebraic structure.
Algebraic sets share common properties which hold in every algebraic structure $\A$. For example, in any algebraic structure $\A$, the intersection of an arbitrary number of algebraic sets is always algebraic.
However, the union of algebraic sets is not algebraic in general. In~\cite{uniTh_IV} the notion of an equational domain (e.d.) was defined. An algebraic structure $\A$ is an e.d. if any finite union of algebraic sets over $\A$ is always algebraic. Moreover, in~\cite{uniTh_IV} necessary and sufficient conditions for a group (Lie algebra, associative ring) to be an e.d. were proved. For instance, equational domains in the class of commutative associative rings are exactly the rings with no zero-divisors.
In~\cite{uniTh_IV} the equational domains in the class of groups were also described. By this result, the groups of the following classes are e.d.:
\begin{enumerate}
\item free non-abelian groups (proved by G.~Gurevich, one can see the proof in~\cite{makanin});
\item simple non-abelian groups (it follows from~\cite{rhodes}).
\end{enumerate}
In~\cite{shevl_ED_I} we found necessary and sufficient conditions for a finite ideal-simple semigroup to be an e.d.
The current paper continues the study of~\cite{shevl_ED_I}: we investigate the properties of e.d. in the class of semigroups with nontrivial ideals. Sections~\ref{sec:basics} and~\ref{sec:al_geom} contain the definitions of semigroup theory and algebraic geometry.
In Section~\ref{sec:non_simple_semigroups} we prove the following: if a semigroup $S$ is an e.d. and has a completely simple kernel $K$ then $K$ is also an e.d. (Theorem~\ref{th:main_new}).
The main result of Section~\ref{sec:action_on_ideal} is Theorem~\ref{th:alpha_sim_beta}. By this theorem, any infinite semigroup with a finite ideal is not an equational domain (Corollary~\ref{cor:about_infinite_semigroups}).
Finally, in Section~\ref{sec:criterion} we prove a criterion for a semigroup $S$ with a finite kernel $K$ to be an e.d. Precisely, in Theorem~\ref{th:criterion} we prove that the necessary conditions of Theorems~\ref{th:main_new},~\ref{th:alpha_sim_beta} are also sufficient for such a semigroup $S$.
\section{Notions of semigroup theory}
\label{sec:basics}
Let us first recall a classic theorem of semigroup theory.
\begin{theorem}
\label{th:sushkevic_rees}
For any completely simple semigroup $S$ there exists a group $G$ and sets $I,\Lambda$ such that $S$ is isomorphic to the set of triples $(\lambda,g,i)$, $g\in G$, $\lambda\in\Lambda$, $i\in I$ with multiplication defined by
\[
(\lambda,g,i)(\mu,h,j)=(\lambda,gp_{i\mu}h,j),
\]
where $p_{i\mu}\in G$ is an element of a matrix $\P$ such that
\begin{enumerate}
\item $\P$ consists of $|I|$ rows and $|\Lambda|$ columns;
\item the elements of the first row and the first column equal $1\in G$ (i.e. $\P$ is {\it normalized}).
\end{enumerate}
\end{theorem}
Following Theorem~\ref{th:sushkevic_rees}, we denote any completely simple semigroup $S$ by $S=(G,\P,\Lambda,I)$. Notice that the cardinality of the set $\Lambda$ (respectively, $I$) equals the number of minimal right (respectively, left) ideals of the semigroup $S$.
\begin{corollary}
\label{cor:when_is_group}
A completely simple semigroup $S=(G,\P,\Lambda,I)$ is a group iff $|\Lambda|=|I|=1$.
\end{corollary}
The index $\lambda\in\Lambda$ (respectively, $i\in I$) of an element $(\lambda,g,i)\in S$ is called {\it the first} (respectively, {\it the second}) index.
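For concreteness, the multiplication of Theorem~\ref{th:sushkevic_rees} is easy to implement and test mechanically. The following short script (a sketch only; the choices of $G=\mathbb{Z}/2$, written additively, and of the normalized $2\times 2$ matrix $\P$ are ad hoc) verifies associativity by brute force:
\begin{verbatim}
from itertools import product

G = [0, 1]                    # Z/2 written additively: 0 plays the role of 1
Lam, I = [0, 1], [0, 1]       # the index sets Lambda and I
P = [[0, 0],                  # P[i][lam]; the first row and the first
     [0, 1]]                  # column are trivial, so P is normalized

def mul(a, b):
    # (lam, g, i) * (mu, h, j) = (lam, g + P[i][mu] + h, j)
    (lam, g, i), (mu, h, j) = a, b
    return (lam, (g + P[i][mu] + h) % 2, j)

S = [(lam, g, i) for lam in Lam for g in G for i in I]
assert all(mul(mul(a, b), c) == mul(a, mul(b, c))
           for a, b, c in product(S, repeat=3))     # associativity
print(len(S))                 # |S| = |G||Lambda||I| = 8
\end{verbatim}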
\medskip
The minimal ideal (if it exists) of a semigroup $S$ is called the \textit{kernel} and denoted by $Ker(S)$. Clearly, any finite semigroup has a kernel. Obviously, if $S=Ker(S)$, the semigroup $S$ is simple. If $Ker(S)$ is a group, then $S$ is said to be a \textit{homogroup}.
Below we consider only semigroups with a completely simple kernel $K=(G,\P,\Lambda,I)$. For example, any finite semigroup has a finite kernel of the form $K=(G,\P,\Lambda,I)$.
\section{Notions of algebraic geometry}
\label{sec:al_geom}
All definitions below are deduced from the general notions of~\cite{uniTh_I,uniTh_II}, where the definitions of algebraic geometry were formulated for an arbitrary algebraic structure in a language with no predicates.
Semigroups as algebraic structures are often considered in the language $\LL_0=\{\cdot\}$. However, for a given semigroup $S$ one can add to the language $\LL_0$ the set of constants $\{s|s\in S\}$. We denote the extended language by $\LL_S$, and below we consider all semigroups in this language.
Let $X$ be a finite set of variables $x_1,x_2,\ldots,x_n$. \textit{An $\LL_S$-term} in variables $X$ is a finite product of variables and constants $s\in S$. For example, the following expressions $xsy^2x$, $xs_1ys_2x^2$, $x^2yxz$ ($s,s_1,s_2\in S$) are $\LL_S$-terms.
{\it An equation} over $\LL_S$ is an equality of two $\LL_S$-terms $t(X)=s(X)$. {\it A system of equations} over $\LL_S$ ({\it a system} for shortness) is an arbitrary set of equations over $\LL_S$.
A point $P=(p_1,p_2,\ldots,p_n)\in S^n$ is a \textit{solution} of a system $\Ss$ in variables $x_1,x_2,\ldots,x_n$, if the substitution $x_i=p_i$ reduces any equation of $\Ss$ to a true equality in the semigroup $S$. The set of all solutions of a system $\Ss$ in a semigroup $S$ is denoted by $\V_S(\Ss)$. A set $Y\subseteq S^n$ is called {\it algebraic} over the language $\LL_S$ if there exists a system over $\LL_S$ in variables $x_1,x_2,\ldots,x_n$ with the solution set $Y$.
Following~\cite{uniTh_IV}, let us give the main definition of our paper.
A semigroup $S$ is an {\it equational domain} ({\it e.d.} for short) in the language $\LL_S$ if for all algebraic sets $Y_1,Y_2,\ldots,Y_n$ the union $Y=Y_1\cup Y_2\cup\ldots\cup Y_n$ is algebraic.
The next theorem contains necessary and sufficient conditions for a semigroup to be an e.d.
\begin{theorem}\textup{\cite{uniTh_IV}}
\label{th:about_M}
A semigroup $S$ in the language $\LL_S$ is an e.d. iff the set
\[
\M_{sem}=\{(x_1,x_2,x_3,x_4)|x_1=x_2\mbox{ or }x_3=x_4\}\subseteq S^4
\]
is algebraic, i.e. there exists a system $\Ss$ in variables $x_1,x_2,x_3,x_4$ with the solution set $\M_{sem}$.
\end{theorem}
Below we will study equations over groups, and therefore we also give some definitions of algebraic geometry over groups. Any group $G$ below will be considered in the language $\LL_G=\{\cdot,^{-1},1\}\cup\{g|g\in G\}$. \textit{An $\LL_G$-term} in variables $X=\{x_1,x_2,\ldots,x_n\}$ is a finite product of variables in integer powers and constants $g\in G$. In other words, an $\LL_G$-term is an element of the free product $F(X)\ast G$, where $F(X)$ is the free group generated by the set $X$.
The definitions of equations, algebraic sets and equational domains over groups are given in the same way as it is over semigroups.
For groups of the language $\LL_G$ we have the following result.
\begin{theorem}\textup{\cite{uniTh_IV}}
A group $G$ of the language $\LL_G$ is an e.d. iff the set
\label{th:criterion_for_groups}
\begin{equation*}
\M_{gr}=\{(x_1,x_2)|x_1=1\mbox{ or }x_2=1\}\subseteq G^2
\end{equation*}
is algebraic, i.e. there exists a system $\Ss$ in variables $x_1,x_2$ with the solution set $\M_{gr}$.
\end{theorem}
One can reformulate Theorem~\ref{th:criterion_for_groups} in a simpler form using the following definition. An element $x\neq 1$ of a group $G$ is a \textit{zero-divisor} if there exists $1\neq y\in G$ such that for any $g\in G$ it holds that $[x,y^g]=1$ (here $y^g=gyg^{-1}$, $[a,b]=a^{-1}b^{-1}ab$).
\begin{theorem}\textup{\cite{uniTh_IV}}
\label{th:zero_divisors}
A group $G$ in the language $\LL_G$ is an e.d. iff it does not contain zero-divisors.
\end{theorem}
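The zero-divisor condition is easy to test by brute force for small groups. The following sketch (not taken from~\cite{uniTh_IV}; permutations of $\{0,\ldots,4\}$ are encoded as tuples) confirms that the alternating group $A_5$ contains no zero-divisors, in accordance with the fact that simple non-abelian groups are e.d.:
\begin{verbatim}
from itertools import permutations

def parity(p):
    return sum(p[i] > p[j] for i in range(5) for j in range(i + 1, 5)) % 2

A5 = [p for p in permutations(range(5)) if parity(p) == 0]
e = tuple(range(5))
pmul = lambda p, q: tuple(p[q[i]] for i in range(5))  # (p*q)(i) = p(q(i))
pinv = lambda p: tuple(sorted(range(5), key=lambda i: p[i]))

def is_zero_divisor(x):
    # x != 1 is a zero-divisor iff there is y != 1 such that x commutes
    # with every conjugate g*y*g^{-1} of y
    return any(all(pmul(x, c) == pmul(c, x)
                   for g in A5 for c in [pmul(pmul(g, y), pinv(g))])
               for y in A5 if y != e)

assert not any(is_zero_divisor(x) for x in A5 if x != e)
\end{verbatim}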
\bigskip
Let us recall the results of~\cite{shevl_ED_I}, where we studied equational domains in the class of finite simple semigroups.
The matrix $\P$ of a semigroup $S=(G,\P,\Lambda,I)$ is {\it non-singular} if it does not contain two equal rows or columns. The non-singularity of $\P$ is equivalent to the reductivity of the semigroup $S$.
\begin{lemma}\textup{(Lemma~3.4. of~\cite{shevl_ED_I})}
\label{l:exists_2_non_dist_elems}
Suppose the matrix $\P$ of a finite simple semigroup $S=(G,\P,\Lambda,I)$ has equal rows (columns) with indexes $i,j$ (respectively, $\lambda,\mu$). Then for the elements $s_1=(1,1,i)$, $s_2=(1,1,j)$ (respectively, $s_1=(\lambda,1,1)$, $s_2=(\mu,1,1)$) and for an arbitrary $\LL_S$-term $t(x)$ one of the following conditions holds:
\begin{enumerate}
\item $t(s_1)=t(s_2)$;
\item $t(s_1)=(\nu,g,i)$, $t(s_2)=(\nu,g,j)$ for some $g\in G$, $\nu\in\Lambda$ if $t(x)$ ends with a variable $x$ (respectively, $t(s_1)=(\lambda,g,k)$, $t(s_2)=(\mu,g,k)$ for some $g\in G$, $k\in I$ if $t(x)$ begins with $x$).
\end{enumerate}
\end{lemma}
\begin{theorem}\textup{(Theorem~3.1. of~\cite{shevl_ED_I})}
\label{th:main}
A finite completely simple semigroup $S=(G,\P,\Lambda,I)$ is an e.d. in the language $\LL_S$ iff the following two conditions hold:
\begin{enumerate}
\item $\P$ is nonsingular;
\item $G$ is an e.d. in the group language $\LL_G$.
\end{enumerate}
\end{theorem}
Remark that the ``if'' statement of Theorem~\ref{th:main} does not hold for infinite completely simple semigroups.
\begin{example}\textup{(Example 4.11. of~\cite{shevl_ED_I})}
\label{ex:domain_240}
Define a finite simple semigroup $S_{240}=(A_5,\P,\{1,2\},\{1,2\})$, where $A_5$ is the alternating group of degree $5$,
\[\P=\begin{pmatrix}1&1\\1&g\end{pmatrix},\]
and $g\neq 1$. By Theorem~\ref{th:main}, $S_{240}$ is an e.d. and $|S_{240}|=|A_5|\cdot 2\cdot 2=240$.
\end{example}
\begin{corollary}\textup{(Corollary~5.3. of~\cite{shevl_ED_I})}
\label{cor:about_homogroups}
If a homogroup $S$ is an e.d. then $S$ is a group, and $S=Ker(S)$.
\end{corollary}
\begin{corollary}\textup{(Corollary~5.3. of~\cite{shevl_ED_I})}
\label{cor:zero}
Any nontrivial semigroup $S$ with a zero is not an e.d. in the language $\LL_S$.
\end{corollary}
\section{Kernels of equational domains}
\label{sec:non_simple_semigroups}
It is easy to see that the set
\begin{equation}
\label{eq:Gamma}
\Gamma=\{(1,g,1)|g\in G\}\subseteq K
\end{equation}
is isomorphic to $G$. Since $\P$ is normalized, $(1,1,1)$ is the identity of $\Gamma$.
Let
\[
L_i=\{(\lambda,g,i)|g\in G, \lambda\in\Lambda\}\subseteq K,
\]
\[
R_\lambda=\{(\lambda,g,i)|g\in G, i\in I\}\subseteq K.
\]
Obviously, $L_1\cap R_1=\Gamma$. By the properties of the kernel, any $L_i$ ($R_\lambda$) is a left (respectively, right) ideal of the semigroup $S$.
\begin{lemma}
\label{l:properties_of_multiplication}
Let $K=(G,\P,\Lambda,I)$ be the kernel of a semigroup $S$. Then, for any $\alpha\in S$ there exist elements $g_\alpha\in G$, $\lambda_\alpha\in\Lambda$, $i_\alpha\in I$ such that
\begin{enumerate}
\item $\alpha(1,1,1)=(\lambda_\alpha,g_\alpha,1)$,
\item $\alpha(1,g,i)=(\lambda_\alpha,g_\alpha g,i)$,
\item $(1,1,1)\alpha=(1,g_\alpha,i_\alpha)$,
\item $(\lambda,g,1)\alpha=(\lambda,gg_\alpha,i_\al)$,
\end{enumerate}
\end{lemma}
\begin{proof}
Let us prove all statements of the lemma.
\begin{enumerate}
\item Since the set $L_1$ is a left ideal, the element $\alpha(1,1,1)$ equals $(\lambda_\alpha,g_\alpha,1)$ for some $\lambda_\al,g_\al$.
\item
\[
\al(1,g,i)=\alpha(1,1,1)(1,g,i)=
(\lambda_\alpha,g_\alpha,1)(1,g,i)=(\lambda_\alpha,g_\alpha g,i).
\]
\item Since $R_1$ is a right ideal, $(1,1,1)\alpha=(1,h_\alpha,i_\alpha)$ for some $h_\alpha,i_\alpha$. Let us prove that $h_\al=g_\al$:
\[
(1,1,1)(\al(1,1,1))=(1,1,1)(\lambda_\al,g_\al,1)=(1,g_\al,1).
\]
On the other hand, computing
\[
((1,1,1)\al)(1,1,1)=(1,h_\alpha,i_\alpha)(1,1,1)=(1,h_\al,1),
\]
we obtain $h_\al=g_\al$.
\item
\[
(\lambda,g,1)\al=(\lambda,g,1)(1,1,1)\al=(\lambda,g,1)(1,g_\alpha,i_\alpha)=(\lambda,gg_\al,i_\al).
\]
\end{enumerate}
\end{proof}
According to Lemma~\ref{l:properties_of_multiplication}, for any $\al\in S$ we have:
\begin{equation}
\label{eq:relation_for_Lambda1}
\alpha x=(\lambda_\alpha,g_\alpha,1)x\mbox{ for each }x\in L_1
\end{equation}
\begin{equation}
\label{eq:relation_for_I1}
x\alpha=x(1,g_\alpha,i_\alpha)\mbox{ for each }x\in R_1
\end{equation}
\begin{equation}
\label{eq:relation_for_Gamma}
\alpha x=(\lambda_\alpha,g_\alpha,1)x,\;x\alpha=x(1,g_\alpha,i_\alpha)\mbox{ for each }x\in \Gamma
\end{equation}
\begin{lemma}\textup{(Lemma~3.6. of~\cite{shevl_ED_I})}
\label{l:about_equiv_over_Gamma}
Let $S=(G,\P,\Lambda,I)$ be a completely simple semigroup, and $x,y\in\Gamma$. Then
\begin{enumerate}
\item $x(\lambda,c,i)y=x(1,c,1)y$;
\item if an equation
\begin{equation*}
(\lambda,c,i)t(x,y)=(\lambda^\pr,c^\pr,i^\pr)t^\pr(x,y) \; (\mbox{respectively, } t(x,y)(\lambda,c,i)=t^\pr(x,y)(\lambda^\pr,c^\pr,i^\pr))
\end{equation*}
is consistent over $S$, then it is equivalent to
\[
(1,c,1)t(x,y)=(1,c^\pr,1)t^\pr(x,y)\; (\mbox{respectively, } t(x,y)(1,c,1)=t^\pr(x,y)(1,c^\pr,1))
\]
over the group $\Gamma$.
\end{enumerate}
\end{lemma}
\begin{lemma}
\label{l:S-eq_dom->G-eq_dom_new}
Suppose a semigroup $S$ with the kernel $K=(G,\P,\Lambda,I)$ is an equational domain in the language $\LL_S$. Then the group $G$ is an e.d. in the language $\LL_G$.
\end{lemma}
\begin{proof}
As $S$ is an e.d., the set of pairs $\M=\{(x,y)|x=(1,1,1)\mbox{ or }y=(1,1,1)\}\subseteq S^2$ is algebraic over $S$, i.e. there exists a system $\Ss(x,y)$ over $\LL_S$ such that $\V_S(\Ss)=\M$.
Below we rewrite $\Ss(x,y)$ into a system with constants from $\Gamma$.
Let us show that $\Ss$ does not contain any equation of the form $t(x,y)=\al$, where $\al\in S\setminus K$. Indeed, the value of a term $t(x,y)$ at $((1,1,1),(1,1,1))\in\M$ belongs to the kernel $K$, and the equality $t((1,1,1),(1,1,1))=\al$ is impossible.
Thus, all parts of equations of $\Ss$ contain occurrences of variables. Applying Lemma~\ref{l:about_equiv_over_Gamma}, we obtain that $\Ss$ is equivalent over $\Gamma$ to a system $\tilde{\Ss}$ whose constants belong to $K$.
If an equation of the form
\begin{equation}
(\lambda,g,i)t^\pr(x,y)=xs^\pr(x,y)
\label{eq:cdssadfsa}
\end{equation}
belongs to $\tilde{\Ss}$, it is not satisfied by the point $((\mu,h,j),(1,1,1))\in\M$, where $\mu\neq\lambda$. Thus, $\tilde{\Ss}$ does not contain equations of the form~(\ref{eq:cdssadfsa}). This allows us to apply the formulas from Lemma~\ref{l:about_equiv_over_Gamma} to $\tilde{\Ss}$ and obtain a system $\Ss^\pr$ whose constants belong to the group $\Gamma$. Moreover, $\Ss^\pr$ is equivalent to $\Ss$ over the group $\Gamma$.
Finally, we have $\V_\Gamma(\Ss^\pr)=\{(x,y)|x=(1,1,1)\mbox{ or }y=(1,1,1)\}\subseteq\Gamma^2$, and, by Theorem~\ref{th:criterion_for_groups}, the group $\Gamma$ is an equational domain in the language $\LL_\Gamma$. The isomorphism between the groups $\Gamma,G$ proves the lemma.
\end{proof}
\begin{lemma}
\label{l:singular->not_ED_new}
If a semigroup $S$ with the kernel $K=(G,\P,\Lambda,I)$ is an e.d. in the language $\LL_S$, then the matrix $\P$ is nonsingular.
\end{lemma}
\begin{proof}
Assume that $S$ is an e.d. with a singular matrix $\P$, and the $i$-th, $j$-th rows of $\P$ are equal (similarly, one can consider a matrix $\P$ with two equal columns).
Since the semigroup $S$ is an e.d., there exists a system of equations $\Ss(x,y,z)$ with the solution set
\[
\M=\{(x,y,z)|x=(1,1,i)\mbox{ or }y=(1,1,i)\mbox{ or }z=(1,1,i)\}.
\]
Since the point $Q=((1,1,j),(1,1,j),(1,1,j))$ does not belong to $\M$, there exists an equation $t(x,y,z)=s(x,y,z)\in\Ss(x,y,z)$ which is not satisfied by $Q$.
By the formula~(\ref{eq:relation_for_I1}), $t(x,y,z)=s(x,y,z)$ is equivalent over the subsemigroup $R_1$ to one of the following equations:
\begin{enumerate}
\item $t^\pr(x,y,z)=s^\pr(x,y,z)$;
\item $\alpha t^\pr(x,y,z)=\beta s^\pr(x,y,z)$;
\item $\al t^\pr(x,y,z)=s^\pr(x,y,z)$;
\item $t^\pr(x,y,z)=\beta s^\pr(x,y,z)$;
\item $\al t^\pr(x,y,z)=\beta$;
\item $t^\pr(x,y,z)=\beta$,
\end{enumerate}
where all constants of the terms $t^\pr(x,y,z),s^\pr(x,y,z)$ belong to the kernel $K$, and $\al,\beta\in S\setminus K$.
Consider only the second type of the equation $t(x,y,z)=s(x,y,z)$ (similarly, one can consider the other types).
Without loss of generality we may assume that neither $t^\pr(x,y,z)$ nor $s^\pr(x,y,z)$ ends with $z$.
Consider the following terms in one variable: $t^{\pr\pr}(z)=t^\pr((1,1,j),(1,1,j),z)$, $s^{\pr\pr}(z)=s^\pr((1,1,j),(1,1,j),z)$. Each constant of the terms $t^{\pr\pr}(z)$, $s^{\pr\pr}(z)$ belongs to the kernel $K$, and the terms end with constants. By Lemma~\ref{l:exists_2_non_dist_elems} we have the equalities
\[
t^{\pr\pr}((1,1,i))=t^{\pr\pr}((1,1,j)),\; s^{\pr\pr}((1,1,i))=s^{\pr\pr}((1,1,j)).
\]
Since $((1,1,j),(1,1,j),(1,1,i))\in\M$, we have
\[
\al t^\pr((1,1,j),(1,1,j),(1,1,i))=\beta s^\pr((1,1,j),(1,1,j),(1,1,i)).
\]
Using the equalities above, we obtain
\[
\al t^\pr((1,1,j),(1,1,j),(1,1,j))=\beta s^\pr((1,1,j),(1,1,j),(1,1,j)),
\]
which contradicts the choice of the equation $t(x,y,z)=s(x,y,z)$.
\end{proof}
\begin{theorem}
\label{th:main_new}
Suppose a semigroup $S$ has the finite kernel $K=(G,\P,\Lambda,I)$, and $S$ is an e.d. in the language $\LL_S$. Then $K$ is an e.d. in the language $\LL_K$.
\end{theorem}
\begin{proof}
By Lemma~\ref{l:S-eq_dom->G-eq_dom_new}, the group $G$ is an e.d. in the language $\LL_G$. Lemma~\ref{l:singular->not_ED_new} gives us the non-singularity of the matrix $\P$. Finally, Theorem~\ref{th:main} concludes the proof.
\end{proof}
A semigroup $S$ is called a {\it proper e.d.} if $S$ is an e.d. in the language $\LL_S$ and $S$ is not a group.
\begin{corollary}
\label{cor:from_main_new}
If $S$ is a proper e.d. then $|S|\geq 240$.
\end{corollary}
\begin{proof}
Let $K=(G,\P,\Lambda,I)$ be the kernel of the semigroup $S$. By Theorem~\ref{th:main_new}, we have that
\begin{enumerate}
\item the matrix $\P$ is nonsingular; this means that either $\P$ has at least two rows and two columns, or $|\Lambda|=|I|=1$;
\item the group $G$ is an e.d. in the language $\LL_G$.
\end{enumerate}
If $|\Lambda|=|I|=1$, the kernel is a group, and therefore $S$ is a homogroup. By Corollary~\ref{cor:about_homogroups}, we obtain $S=K$, so $S$ is a group and hence not a proper e.d.
Thus, the matrix $\P$ contains at least two rows and columns. Therefore, the group $G$ is not trivial
(the triviality of $G$ makes $\P$ singular).
Since the alternating group $A_5$ is the nontrivial group of minimal order which is an e.d., we obtain the following estimate:
\[|S|\geq|K|=|G||\Lambda||I|\geq|A_5||\Lambda||I|\geq 60\cdot 2\cdot 2=240.\]
Recall that this estimate cannot be improved (see Example~\ref{ex:domain_240}).
\end{proof}
\section{Inner translations of ideals}
\label{sec:action_on_ideal}
Suppose a semigroup $S$ has an ideal $I$ and $\al\in S$. A map $l_\al(x)\colon I\to I$ ($r_\al(x)\colon I\to I$) is called a \textit{left (respectively, right) inner translation of the ideal $I$} if $l_\al(x)=\alpha x$ (respectively, $r_\al(x)=x\alpha$) for $x\in I$. We write $\al\sim_I \beta$ for elements $\al,\beta\in S$ such that
\[
\al x=\beta x, \; \mbox{ and } x\al=x\beta
\]
for any $x\in I$. If $\al\sim_I \beta$, the elements $\al,\beta$ are called $I$-{\it equivalent}. The equivalence relation $\sim_I$ is \textit{trivial} if every equivalence class consists of a single element, i.e. a trivial equivalence relation $\sim_I$ coincides with the equality relation on the semigroup $S$.
\begin{remark}
Notice that the triviality of the relation $\sim_I$ is close to the definition of a weakly reductive semigroup. Namely, a semigroup $S$ is weakly reductive iff the relation $\sim_S$ is trivial.
\end{remark}
\begin{lemma}
\label{l:zamena}
Suppose a semigroup $S$ has an ideal $I$ and $\al\sim_I\beta$. Then for any term $t(x,y)$ containing occurrences of the variable $y$ we have
\begin{equation}
t(\al,r)=t(\beta,r)
\label{eq:t(al,r)=t(beta,r)}
\end{equation}
for all $r\in I$.
\end{lemma}
\begin{proof}
Let $t(x,y)$ be a term of the language $\LL_S$. The {\it length} $|t(x,y)|$ of a term $t(x,y)$ is the length of the word $t(x,y)$ in the alphabet $X\cup\{s|s\in S\}$. For example, $|xs_1y^2|=4$, $|x|=|s_2|=1$.
\begin{enumerate}
\item Let $t(x,y)=v(x)y^n$. We prove the statement of the lemma by induction on the length of $v(x)$.
If $|v(x)|=1$, then $v(x)$ is either a constant $\c$ or $v(x)=x$. If $v(x)=\c$ the equality~(\ref{eq:t(al,r)=t(beta,r)}) obviously holds. If $v(x)=x$, by the condition of the lemma, we have $\al r=\beta r$ and obtain~(\ref{eq:t(al,r)=t(beta,r)}).
Assume that~(\ref{eq:t(al,r)=t(beta,r)}) holds for any term of length less than $n$. Let $|v(x)|=n$. We have either $v(x)=v^\pr(x)\c$ or $v(x)=v^\pr(x)x$, where $|v^\pr(x)|=n-1$.
If $v(x)=v^\pr(x)\c$, then we use $\c r^n\in I$ and apply the induction hypothesis:
\[
t(\al,r)=v^\pr(\al)(\c r^n)=v^\pr(\beta)(\c r^n)=t(\beta,r).
\]
For $v(x)=v^\pr(x)x$ the proof is similar.
\item If $t(x,y)=y^n v(x)$, the proof is similar to the reasoning of the previous case.
\item Consider the most general form of the term $t(x,y)$:
\[
t(x,y)=w_1(x)y^{n_1}w_2(x)y^{n_2}\ldots w_m(x)y^{n_m}w_{m+1}(x),
\]
where the terms $w_1(x),w_{m+1}(x)$ may be empty.
Let us prove the statement of the lemma by induction on $m$. If $m=1$, the term $t(x,y)$ reduces to the terms from the previous cases.
By the induction hypothesis, $t^\pr(\al,r)=t^\pr(\beta,r)=r^\pr\in I$, where
\[
t^\pr(x,y)=w_1(x)y^{n_1}w_2(x)y^{n_2}\ldots w_{m-1}(x)y^{n_{m-1}}w_m(x).
\]
Let us consider the term $s(x,y)=y^{n_m} w_{m+1}(x)$. As we proved above,
\[
s(\al,r)=s(\beta,r).
\]
Thus,
\[
t(\al,r)=r^{\pr}s(\al,r)=r^{\pr}s(\beta,r)=t(\beta,r),
\]
which proves~(\ref{eq:t(al,r)=t(beta,r)}).
\end{enumerate}
\end{proof}
Notice that the statement of Lemma~\ref{l:zamena} fails for terms $t(x,y)$ which do not depend on the variable $y$.
\begin{theorem}
\label{th:alpha_sim_beta}
If a semigroup $S$ is an e.d. in the language $\LL_S$, the equivalence relation $\sim_I$ is trivial for any ideal $I\subseteq S$.
\end{theorem}
\begin{proof}
Since the theorem obviously holds for the trivial semigroup, we assume below that $|S|>1$.
Let $X$ denote the set of four variables $\{x_1,x_2,x_3,x_4\}$.
If $I=\{r\}$, the element $r$ is a zero of $S$, and, by Corollary~\ref{cor:zero}, $S$ is not an e.d. Thus we may assume $|I|\geq 2$; let $r_1,r_2$ be two distinct elements of the ideal $I$.
Assume the contrary: there exist distinct elements $\alpha,\beta\in S$ with $\al\sim_I\beta$.
Since $S$ is an e.d., the set $\M_{sem}=\{(x_1,x_2,x_3,x_4)|x_1=x_2\mbox{ or }x_3=x_4\}$ is the solution set of some system $\Ss(X)$; since $(\al,\beta,r_1,r_2)\notin \M_{sem}$, some equation $t(X)=s(X)\in\Ss$ is not satisfied by the point $(\al,\beta,r_1,r_2)$.
Let $\Var(t)$ denote the set of variables occurring in a term $t$.
For the equation $t(X)=s(X)$ we have one of the following conditions:
\begin{enumerate}
\item a variable $x_i\in X$ does not occur in the equation;
\item all intersections $\Var(t)\cap\{x_1,x_2\}$, $\Var(s)\cap\{x_1,x_2\}$, $\Var(t)\cap\{x_3,x_4\}$, $\Var(s)\cap\{x_3,x_4\}$ are nonempty;
\item $\Var(t)\cap\{x_1,x_2\}\neq\emptyset$, $\Var(s)\cap\{x_1,x_2\}\neq\emptyset$, $\Var(t)\cap\{x_3,x_4\}=\emptyset$, $\Var(s)\cap\{x_3,x_4\}\neq\emptyset$;
\item $\Var(t)\cap\{x_3,x_4\}\neq\emptyset$, $\Var(s)\cap\{x_3,x_4\}\neq\emptyset$, $\Var(t)\cap\{x_1,x_2\}\neq\emptyset$, $\Var(s)\cap\{x_1,x_2\}=\emptyset$;
\item one part of the equation contains only the variables $\{x_1,x_2\}$, and the another part contains only $\{x_3,x_4\}$;
\item one of the parts of the equation does not contain any variable.
\end{enumerate}
Let us consider all types of $t(X)=s(X)$.
\begin{enumerate}
\item Without loss of generality one can assume that $t(X)=s(X)$ does not contain occurrences of $x_4$, i.e. it has the form $t(x_1,x_2,x_3)=s(x_1,x_2,x_3)$. Since $(\al,\beta,r_1,r_1)\in\M_{sem}$, we have $t(\al,\beta,r_1)=s(\al,\beta,r_1)$. However, the point $(\al,\beta,r_1,r_2)$ does not satisfy the equation $t(X)=s(X)$, and we obtain the contradiction $t(\al,\beta,r_1)\neq s(\al,\beta,r_1)$.
\item By Lemma~\ref{l:zamena} we have:
\[
t(\al,\al,r_1,r_2)=t(\al,\beta,r_1,r_2),\; s(\al,\al,r_1,r_2)=s(\al,\beta,r_1,r_2).
\]
Since $(\al,\al,r_1,r_2)\in\M_{sem}$, we have $t(\al,\al,r_1,r_2)=s(\al,\al,r_1,r_2)$. Therefore,
\[
t(\al,\beta,r_1,r_2)=s(\al,\beta,r_1,r_2),
\]
which contradicts the choice of the equation $t(X)=s(X)$.
\item Let $t(X)=s(X)$ be $t(x_1,x_2)=s(x_1,x_2,x_3,x_4)$.
By Lemma~\ref{l:zamena}, we have:
\[
s(\al,\al,r_1,r_1)=s(\al,\beta,r_1,r_1).
\]
Since $(\al,\al,r_1,r_1),(\al,\beta,r_1,r_1)\in\M_{sem}$, we obtain
\[
t(\al,\al)=s(\al,\al,r_1,r_1),\; t(\al,\beta)=s(\al,\beta,r_1,r_1),
\]
and $t(\al,\al)=t(\al,\beta)$.
Since $(\al,\al,r_1,r_2)\in\M_{sem}$, Lemma~\ref{l:zamena} gives us the equalities
\[
t(\al,\al)=s(\al,\al,r_1,r_2)=s(\al,\beta,r_1,r_2).
\]
Therefore,
\[
t(\al,\beta)=s(\al,\beta,r_1,r_2),
\]
which contradicts the choice of the equation $t(X)=s(X)$.
\item Suppose the equation $t(X)=s(X)$ is $t(x_1,x_2,x_3,x_4)=s(x_3,x_4)$. For the point $(\al,\al,r_1,r_2)\in\M_{sem}$ we have
\[t(\al,\al,r_1,r_2)=s(r_1,r_2).\]
By Lemma~\ref{l:zamena}, we obtain
\[t(\al,\al,r_1,r_2)=t(\al,\beta,r_1,r_2),\]
therefore,
\[t(\al,\beta,r_1,r_2)=s(r_1,r_2),\]
which contradicts the choice of the equation $t(X)=s(X)$.
\item Let the equation $t(X)=s(X)$ be $t(x_1,x_2)=s(x_3,x_4)$. For the points $(\al,\al,r_1,r_1),(\al,\beta,r_1,r_1),(\al,\al,r_1,r_2)\in\M_{sem}$ we have the equalities
\[
t(\al,\al)=s(r_1,r_1),\; t(\al,\beta)=s(r_1,r_1),\;t(\al,\al)=s(r_1,r_2),
\]
therefore, $t(\al,\beta)=s(r_1,r_2)$, which contradicts the choice of the equation $t(X)=s(X)$.
\item Assume that $t(X)=s(X)$ is $t(x_1,x_2,x_3,x_4)=\c$, where $\c\in S$.
According to Lemma~\ref{l:zamena} the equality $t(\al,\al,r_1,r_2)=\c$ implies $t(\al,\beta,r_1,r_2)=\c$. The last equality contradicts the choice of the equation $t(X)=s(X)$.
\end{enumerate}
\end{proof}
\begin{corollary}
\label{cor:S>l^2l}
Let $I$ be a finite ideal of a semigroup $S$, $|I|=l$. If $|S|>l^{2l}$ the semigroup $S$ is not an e.d. in the language $\LL_S$.
\end{corollary}
\begin{proof}
Let us show that the condition $|S|>l^{2l}$ implies the existence of two distinct elements $\al,\beta$ with $\al\sim_I\beta$.
Indeed, any left inner translation $l_\al$ of the ideal $I$ is a function $l_\al\colon I\to I$, and the number of different mappings from $I$ to $I$ equals $l^l$. Similarly, there are at most $l^l$ different right inner translations of $I$. Hence, the number of $\sim_I$-classes is at most $l^l\cdot l^l=l^{2l}$.
Thus, the inequality $|S|>l^{2l}$ implies the existence of two distinct elements $\al,\beta$ with $\al\sim_I\beta$. By Theorem~\ref{th:alpha_sim_beta}, $S$ is not an e.d. in the language $\LL_S$.
\end{proof}
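Continuing the $\mathbb{Z}/2$ Rees example from Section~\ref{sec:basics} (again, a sketch only), one can confirm by brute force that distinct elements of that semigroup induce distinct pairs of inner translations of the ideal $I=S$, so the relation $\sim_S$ is trivial there:
\begin{verbatim}
def translations(a):
    left  = tuple(mul(a, x) for x in S)    # l_a(x) = a x
    right = tuple(mul(x, a) for x in S)    # r_a(x) = x a
    return (left, right)

assert len({translations(a) for a in S}) == len(S)   # ~_S is trivial
\end{verbatim}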
\begin{corollary}
\label{cor:about_infinite_semigroups}
Any infinite semigroup $S$ with a finite ideal $I$ is not an e.d. in the language $\LL_S$.
\end{corollary}
\bigskip
Let us improve the estimate of Corollary~\ref{cor:S>l^2l} for a semigroup $S$ with a finite kernel $K$.
\begin{lemma}
\label{l:about_action}
Suppose a semigroup $S$ has the finite kernel $K=(G,\P,\Lambda,I)$. Then, for any $\alpha\in S$ there exist an element $g_\alpha\in G$ and mappings $\Lambda_\alpha\colon\Lambda\to\Lambda$, $I_\al\colon I\to I$ such that
\begin{enumerate}
\item $\alpha(\lambda,1,1)=(\Lambda_\al(\lambda),g_\al p_{I_\al(1)\lambda},1)$,
\item $(1,1,i)\alpha=(1,p_{i\Lambda_\al(1)}g_\al,I_\al(i))$,
\item $\al(\lambda,g,i)=(\Lambda_\al(\lambda),g_\al p_{I_\al(1)\lambda}g,i)$,
\item $(\lambda,g,i)\al=(\lambda,gp_{i\Lambda_\al(1)}g_\al,I_\al(i))$.
\end{enumerate}
\end{lemma}
\begin{proof}
Clearly, $I_\al(1)=i_\al$, $\Lambda_\al(1)=\lambda_\al$, where the indexes $i_\al,\lambda_\al$ are defined in Lemma~\ref{l:properties_of_multiplication}.
Let us prove the statements of the lemma.
\begin{enumerate}
\item Since $L_1$ is a left ideal, $\al(\lambda,1,1)\in L_1$. Then $\al(\lambda,1,1)=(\Lambda_\al(\lambda),G_\al(\lambda),1)$, where $G_\al\colon\Lambda\to G$ is a map depending on $\lambda$.
We have
\[
(1,1,1)(\al(\lambda,1,1))=(1,1,1)(\Lambda_\al(\lambda),G_\al(\lambda),1)=(1,G_\al(\lambda),1).
\]
Using Lemma~\ref{l:properties_of_multiplication},
\[
((1,1,1)\al)(\lambda,1,1)=(1,g_\al,i_\al)(\lambda,1,1)=(1,g_\al p_{i_\al\lambda},1)=(1,g_\al p_{I_\al(1)\lambda},1),
\]
we obtain $G_\al(\lambda)=g_\al p_{I_\al(1)\lambda}$.
\item Since $R_1$ is a right ideal, $(1,1,i)\al\in R_1$. Then $(1,1,i)\al=(1,G_\al^\pr(i),I_\al(i))$, where $G_\al^\pr\colon I\to G$ is a map depending on $i$.
We have
\[
((1,1,i)\al)(1,1,1)=(1,G_\al^\pr(i),I_\al(i))(1,1,1)=(1,G_\al^\pr(i),1).
\]
On the other hand,
\[
(1,1,i)(\al(1,1,1))=(1,1,i)(\lambda_\al,g_\al,1)=(1,p_{i\lambda_\al}g_\al,1)=(1,p_{i\Lambda_\al(1)}g_\al,1),
\]
thus $G_\al^\pr(i)=p_{i\Lambda_\al(1)}g_\al$.
\item
\[
\al(\lambda,g,i)=\al(\lambda,1,1)(1,g,i)=(\Lambda_\al(\lambda),g_\al p_{I_\al(1)\lambda},1)(1,g,i)=(\Lambda_\al(\lambda),g_\al p_{I_\al(1)\lambda}g,i),
\]
\item
\[
(\lambda,g,i)\al=(\lambda,g,1)(1,1,i)\al=(\lambda,g,1)(1,p_{i\Lambda_\al(1)}g_\al,I_\al(i))=(\lambda,gp_{i\Lambda_\al(1)}g_\al,I_\al(i)).
\]
\end{enumerate}
\end{proof}
\begin{theorem}
Suppose a semigroup $S$ has the finite kernel $K=(G,\P,\Lambda,I)$. If $|S|>|G||\Lambda|^{|\Lambda|}|I|^{|I|}$, then the semigroup $S$ is not an e.d. in the language $\LL_S$.
\end{theorem}
\begin{proof}
According to Lemma~\ref{l:about_action}, the pair of inner translations of the kernel $K$ induced by an element $\al\in S$ is determined by the element $g_\al\in G$ and the mappings $\Lambda_\alpha\colon\Lambda\to\Lambda$, $I_\al\colon I\to I$. Therefore, there exist at most $|G||\Lambda|^{|\Lambda|}|I|^{|I|}$ different pairs of inner translations of the kernel.
Thus, the inequality $|S|>|G||\Lambda|^{|\Lambda|}|I|^{|I|}$ implies the existence of two distinct elements $\al,\beta\in S$ inducing the same inner translations of the kernel, i.e. $\al\sim_K\beta$. By Theorem~\ref{th:alpha_sim_beta}, we obtain the statement of the theorem.
\end{proof}
\begin{example}
Let $S_{240}$ be the finite simple semigroup defined in Example~\ref{ex:domain_240}. Then any equational domain with kernel isomorphic to $S_{240}$ contains at most $60\cdot 2^2\cdot 2^2=960$ elements.
\end{example}
\section{Semigroups with finite ideals}
\label{sec:criterion}
Let $K=(G,\P,\Lambda,I)$ be the finite kernel of a semigroup $S$ and $M\subseteq S^n$. By $\T(M,\Gamma)$ we denote the set of all terms of the language $\LL_S$ in variables $x_1,x_2,\ldots,x_n$ such that
\begin{enumerate}
\item all constants of a term $t(X)\in\T(M,\Gamma)$ belong to the kernel $K$;
\item the value $t(P)$ of $t(X)\in\T(M,\Gamma)$ belongs to the subgroup $\Gamma$ defined by formula~(\ref{eq:Gamma}) for all $P\in M$.
\end{enumerate}
For example,
\[
t(x)=(1,g,2)x(3,h,1)\in \T(S,\Gamma),
\]
\[
s(x,y)=(1,g,1)x^2(3,h,4)y(2,f,1)\in\T(S^2,\Gamma).
\]
\begin{lemma}
\label{l:exists_dist_term_new}
Suppose a semigroup $S$ has the finite kernel $K=(G,\P,\Lambda,I)$, where the matrix $\P$ is nonsingular and the equivalence relation $\sim_K$ is trivial. Then for any pair of distinct elements $\al,\beta\in S$ there exists a term $t(x)\in\T(S,\Gamma)$ with $t(\al)\neq t(\beta)$.
\end{lemma}
\begin{proof}
Since $\sim_K$ is trivial, there exists an element $(\lambda,g,i)\in K$ with $\al(\lambda,g,i)\neq \beta(\lambda,g,i)$ (the case $(\lambda,g,i)\al\neq(\lambda,g,i)\beta$ is treated symmetrically, using columns instead of rows of $\P$). According to Lemma~\ref{l:about_action}, we have the equalities below, in which, to lighten notation, we write $g_\al$ for the product $g_\al p_{I_\al(1)\lambda}$ of Lemma~\ref{l:about_action} (the index $\lambda$ is fixed throughout the proof), and similarly for $g_\beta$:
\[
\al(\lambda,g,i)=(\Lambda_\al(\lambda),g_\al g,i),
\]
\[
\beta(\lambda,g,i)=(\Lambda_\beta(\lambda),g_\beta g,i).
\]
There are exactly two possibilities.
\begin{enumerate}
\item Let $g_\al\neq g_\beta$. Consider the term $t(x)=(1,1,1)x(\lambda,1,1)\in\T(S,\Gamma)$. We have
\[
t(\al)=(1,1,1)\al(\lambda,1,1)=(1,1,1)(\Lambda_{\al}(\lambda),g_\al,1)=(1,g_\al,1),
\]
\[
t(\beta)=(1,1,1)\beta(\lambda,1,1)=(1,1,1)(\Lambda_{\beta}(\lambda),g_\beta,1)=(1,g_\beta,1),
\]
thus $t(\al)\neq t(\beta)$.
\item Suppose $g_\al=g_\beta=g$ and $\Lambda_\al(\lambda)\neq\Lambda_\beta(\lambda)$. Since the matrix $\P$ is nonsingular, there exists an index $i$ with $p_{i\Lambda_\al(\lambda)}\neq p_{i\Lambda_\beta(\lambda)}$. Consider the term $t(x)=(1,1,i)x(\lambda,1,1)\in\T(S,\Gamma)$. We have
\[
t(\al)=(1,1,i)\al(\lambda,1,1)=(1,1,i)(\Lambda_{\al}(\lambda),g,1)=(1,p_{i\Lambda_\al(\lambda)}g,1),
\]
\[
t(\beta)=(1,1,i)\beta(\lambda,1,1)=(1,1,i)(\Lambda_{\beta}(\lambda),g,1)=(1,p_{i\Lambda_\beta(\lambda)}g,1),
\]
thus $t(\al)\neq t(\beta)$.
\end{enumerate}
\end{proof}
Let $P=(p_1,p_2,\ldots,p_n)\in S^n$. By $\T_P(M,\Gamma)$ (where $P\in M\subseteq S^n$) we denote the set of all terms $t(X)\in\T(S^n,\Gamma)$ such that $t(P)\neq(1,1,1)$, and $t(Q)=(1,1,1)$ for any $Q\in M\setminus\{P\}$.
\begin{lemma}
\label{l:sufficient_conditions_new}
Suppose a finite semigroup $S$ has the kernel $K=(G,\P,\Lambda,I)$, the equivalence relation $\sim_K$ is trivial and the kernel $K$ is an e.d. in the language $\LL_K$. Then for any natural $n$ and an arbitrary point $P=(p_1,p_2,\ldots,p_n)\in S^n$ the set $\T_P(S^n,\Gamma)$
\begin{enumerate}
\item is nonempty;
\item for any term $t(X)\in\T_P(S^n,\Gamma)$ and any $g\in G$ it holds $(1,g,1)t(X)(1,g^{-1},1)\in \T_P(S^n,\Gamma)$.
\end{enumerate}
\end{lemma}
\begin{proof}
The second property of the set $\T_P(S^n,\Gamma)$ follows by a direct computation. Indeed, if $t(X)\in\T_P(S^n,\Gamma)$ with $t(P)=(1,h,1)$, $h\neq 1$, then for any $Q\in S^n\setminus\{P\}$
\[
(1,g,1)t(Q)(1,g^{-1},1)=(1,g,1)(1,1,1)(1,g^{-1},1)=(1,g1g^{-1},1)=(1,1,1),
\]
\[
(1,g,1)t(P)(1,g^{-1},1)=(1,g,1)(1,h,1)(1,g^{-1},1)=(1,ghg^{-1},1)\neq(1,1,1),\mbox{ since }h\neq 1.
\]
Let us prove now $\T_P(S^n,\Gamma)\neq\emptyset$.
Below we shall use the notation:
\[t^{-1}(X)=t^{|G|-1}(X).\]
Obviously, for any $t(X)\in\T(S^n,\Gamma)$ it holds $t^{-1}(X)\in\T(S^n,\Gamma)$, and
\[
t(X)t^{-1}(X)=t^{-1}(X)t(X)=t^{|G|}(X)=(1,1,1)\mbox{ for all }X\in S^n.
\]
We prove $\T_P(M,\Gamma)\neq\emptyset$ by induction on the cardinality of the set $M\subseteq S^n$. Let $M=\{P,Q\}\subseteq S^n$.
Without loss of generality one can assume that the points $P,Q$ have distinct first coordinates $p_1\neq q_1$. By Lemma~\ref{l:exists_dist_term_new}, there exists a term $t(x)\in\T(S,\Gamma)$ with $t(p_1)\neq t(q_1)$. Let $s(X)=t(x_1)t^{-1}(q_1)\in\T(S^n,\Gamma)$; then $s(P)=t(p_1)t^{-1}(q_1)\neq (1,1,1)$ and $s(Q)=t(q_1)t^{-1}(q_1)=(1,1,1)$. Thus, $s(X)\in \T_P(M,\Gamma)$.
Suppose that for any set $M$ with $|M|\leq m$ the statement of the lemma is proved. Let us prove the lemma for a set $M$ with $m+1$ elements.
Let $M=\{P,Q_1,Q_2,\ldots,Q_m\}$. By the assumption of induction, there exist terms
\[t(X)\in\T_P(\{P,Q_2,Q_3,\ldots,Q_m\},\Gamma),s(X)\in\T_P(\{P,Q_1,Q_3,\ldots,Q_m\},\Gamma),\]
with values
\begin{tabular}{ccccccc}
&$P$&$Q_1$&$Q_2$&$Q_3$&$\ldots$&$Q_m$\\
$t(X)$&$(1,g_1,1)$&$(1,h_1,1)$&$(1,1,1)$&$(1,1,1)$&$\ldots$&$(1,1,1)$\\
$s(X)$&$(1,g_2,1)$&$(1,1,1)$&$(1,h_2,1)$&$(1,1,1)$&$\ldots$&$(1,1,1)$
\end{tabular}
One can choose the elements $g_1,g_2\in G$ to be non-commuting. Indeed, the second property of the set $\T_P(M,\Gamma)$ allows us to replace $g_2$ by any element of the conjugacy class $C=\{gg_2g^{-1}|g\in G\}$. If $g_1$ commuted with all elements of $C$, then $g_1$ would be a zero-divisor in the group $G$ and, by Theorem~\ref{th:zero_divisors}, $G$ would not be an e.d., contradicting Theorem~\ref{th:main} applied to the kernel $K$.
The values of the term
\[
p(X)=t^{-1}(X)s^{-1}(X)t(X)s(X)\in\T(S^n,\Gamma),
\]
are
\begin{tabular}{ccccccc}
&$P$&$Q_1$&$Q_2$&$Q_3$&$\ldots$&$Q_m$\\
$p(X)$&$(1,[g_1,g_2],1)$&$(1,1,1)$&$(1,1,1)$&$(1,1,1)$&$\ldots$&$(1,1,1)$
\end{tabular}
where $[g_1,g_2]=g_1^{-1}g_2^{-1}g_1g_2\neq 1$ is the commutator of $g_1,g_2$ in the group $G$.
Thus, $p(X)\in\T_P(M,\Gamma)$, which proves the lemma.
\end{proof}
\begin{theorem}
\label{th:criterion}
A semigroup $S$ with the finite kernel $K=(G,\P,\Lambda,I)$ is an e.d. in the language $\LL_S$ iff the following conditions hold:
\begin{enumerate}
\item the kernel $K$ is an e.d. in the language $\LL_K$;
\item the equivalence relation $\sim_K$ is trivial.
\end{enumerate}
\end{theorem}
\begin{proof}
The ``only if'' statement follows from Theorems~\ref{th:main_new},~\ref{th:alpha_sim_beta}.
Let us prove the ``if'' statement of the theorem.
First, observe that $S$ is finite: if $S$ were infinite, then, since the kernel $K$ is finite, the equivalence relation $\sim_K$ would be nontrivial (see the proof of Corollary~\ref{cor:S>l^2l}), contradicting our assumptions. Thus, the semigroup $S$ is finite.
Consider the set $\M_{sem}=\{(x_1,x_2,x_3,x_4)|x_1=x_2\mbox{ or }x_3=x_4\}\subseteq S^4$. By Lemma~\ref{l:sufficient_conditions_new}, for any point $P\notin \M_{sem}$ there exists a term $t_P(x_1,x_2,x_3,x_4)\in\T_P(S^4,\Gamma)$. Obviously, the solution set of the system $\Ss=\{t_P(x_1,x_2,x_3,x_4)=(1,1,1)|P\notin \M_{sem}\}$ equals $\M_{sem}$. Thus, the set $\M_{sem}$ is algebraic, and by Theorem~\ref{th:about_M} the semigroup $S$ is an e.d.
\end{proof}
\begin{corollary}
Let a semigroup $S$ with a finite kernel $K$ be an e.d. in the language $\LL_S$. Then any nonempty set $M\subseteq S^n$ is defined by a system of the form $\Ss=\{t_i(X)=(1,1,1)|i\in \mathcal{I}\}$, where $t_i(X)\in\T(S^n,\Gamma)$.
\end{corollary}
\begin{proof}
By Corollary~\ref{cor:about_infinite_semigroups}, the semigroup $S$ is finite.
Let $S^n\setminus M=\{P_i|i\in\mathcal{I}\}$. Following Lemma~\ref{l:sufficient_conditions_new}, there exist terms $t_i(X)\in\T_{P_i}(S^n,\Gamma)$ such that the solution set of the equation $t_i(X)=(1,1,1)$ equals $S^n\setminus\{P_i\}$. Thus, the solution set of the system $\Ss=\{t_i(X)=(1,1,1)|i\in \mathcal{I}\}$ coincides with $M$.
\end{proof}
TITLE: What is the next step beyond quantum computation?
QUESTION [10 upvotes]: Assuming we develop quantum computers one day, what would theoretically be the next step? Would it be string-theory-based computers? How would these computers differ performance-wise (i.e., what could they possibly do that quantum machines cannot)?
REPLY [3 votes]: You might be interested in this paper, NP-complete problems and Physical Reality, by Scott Aaronson. It doesn't discuss the next step "after quantum computers" per se, but compares the computational power of various physical theories. It surveys newtonian physics, nonrelativistic quantum mechanics, nonlinear corrections to QM, hidden variable theories, special relativity, quantum gravity, general relativity, and the many-world interpretation.
For example, section 5 (on page 7) examines what happens if quantum mechanics were not strictly linear (answer: if we could perform computations without errors, we could solve NP-complete problems in polynomial time; whether this can be done in a fault-tolerant manner is unknown).
The idea is to implement something like Grover's search algorithm. Suppose we are given a black-box function $f : \{0,1\}^n \to \{0,1\}$ and we want to find an input $x$ such that $f(x) = 1$. Using $n$ qubits we can form a superposition over all $2^n$ input states and evaluate $f$ on it, so we have the state $\sum_x |x\rangle |f(x)\rangle$; the problem now is that the "answer states" we want, where the last qubit $|f(x)\rangle$ has value $1$, might be in superposition with about $2^n$ "non-answer states". Grover's algorithm amplifies the difference between these states in $O(2^{n/2})$ steps, and this is the best we can do because time evolution in QM preserves the angle between states. In non-linear QM, this restriction is removed and we can potentially amplify the difference much faster.
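For intuition, here is a tiny classical simulation of ordinary (linear) Grover amplification; this is illustrative only, not part of Aaronson's paper:

```python
import numpy as np

n = 3                                  # qubits; search space of size N = 2**n
N = 2 ** n
marked = 5                             # the unique x with f(x) = 1

psi = np.ones(N) / np.sqrt(N)          # uniform superposition over inputs
for _ in range(int(round(np.pi/4 * np.sqrt(N)))):   # ~O(sqrt(N)) iterations
    psi[marked] *= -1                  # oracle: |x> -> (-1)^f(x) |x>
    psi = 2 * psi.mean() - psi         # diffusion: inversion about the mean

print(abs(psi[marked])**2)             # success probability, close to 1
```

The point of the hypothetical nonlinear speedup is that a nonlinear gate could pull apart two nearby states much faster than the angle-preserving rotations simulated above.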
Section 8 discusses the computational power of closed timelike curves, or time travel, which after all satisfy all the laws of general relativity. Specifically, some people have tried to resolve the grandfather paradox by saying that only consistent histories are allowed; then the idea is to arrange matters so that finding a consistent history, which the universe does for us "for free", also happens to solve a very hard problem, for instance $\text{3SAT}$.
The part that addresses your question most directly is the section on quantum gravity, but unfortunately, it is also the section that contains the fewest concrete results.
TITLE: Is there a null-set whose translations generate the set of all null-sets?
QUESTION [3 upvotes]: Under the Lebesgue measure, is there a null-set $N$ whose translations generate the set of all null-sets when closed under countable unions and countable intersections?
I know that such an $N$ cannot be countable, since the translations of a countable set are countable, and the set of all countable sets produces nothing new under countable unions and countable intersections.
REPLY [4 votes]: No, there isn't. This follows from pure cardinality considerations: since every subset of the Cantor set is a nullset, and the Cantor set has cardinality $\frak c$, there are at least $2^{\frak c}$ nullsets (and of course at most $2^{\frak c}$ nullsets). But given a nullset $N$ there are only $\frak c$ translations of $N$, and closing under countable unions and intersections won't raise this (for details see the proof that there are $\frak c$ Borel sets).
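To spell out the closure count (a sketch of the standard argument alluded to above): starting from the family of the $\frak c$ translates of $N$, build an increasing chain $\Sigma_\beta$, $\beta<\omega_1$, by closing under countable unions and countable intersections at each stage. By induction $|\Sigma_\beta|\le \frak c^{\aleph_0}=\frak c$ for every $\beta$; every countable subfamily of $\bigcup_\beta \Sigma_\beta$ already appears in some $\Sigma_\beta$ since $\omega_1$ has uncountable cofinality, so the full closure has size at most $\aleph_1\cdot\frak c=\frak c<2^{\frak c}$.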
In the comments, bof has asked whether such a magical nullset exists if we replace countable intersection with taking subsets, and I believe the answer is still no, although there are still some issues with the proof. I'll condition on the result that for "most" $a\in[0,1]$ (it suffices that the set of such $a$ has positive measure, but I think there are only countably many exceptions), $aC+b$ has measure zero in the Cantor measure on $C$ (where $C$ is the $[0,1]$-Cantor set).
The trick is that these scaled Cantor sets are "unreachable" from each other; no amount of countable unions will make one large in the measure on the other. Given $N$, the set of $a\in[0,1]$ for which there is a $b$ with $N+b$ having measure $>0$ in the Cantor measure on $aC$ is at most a nullset, because otherwise $N$ would have positive measure (I'm not sure about the justification here), and so there is an $a\in[0,1]$ such that $N+b$ has measure zero in the Cantor measure on $aC$ for every $b$. Then any countable union of such sets, and any subset of such a union, also has measure zero in $aC$, and so $aC$ is a nullset which is missed by $N$.
\begin{document}
\maketitle
\begin{abstract}
We give a proof of the Andersen--Haboush identity
(cf. \cite{Ann}, \cite{Hab}) that implies Kempf's vanishing theorem. Our argument is based on the structure of derived categories of coherent sheaves on flag varieties over $\mathbb Z$.
\end{abstract}
\vspace{0.3cm}
\section{\bf Introduction}
\vspace*{0.3cm}
Let $\bf G$ be a split semisimple simply connected algebraic group over a perfect field $\sk$ of characteristic $p$. The weight $-\rho = - \sum \omega _i$, where $ \omega _i$ are the fundamental weights of $\bf G$, is known to play a fundamental r\^ole in representation theory of $\bf G$. For $q=p^n, n\geq 1$, the Steinberg weight $(q-1)\rho$ is equally important in representation theory of semisimple groups in defining characteristic. In particular, there is a remarkable property that the corresponding line bundle $\Ll _{(q-1)\rho}$ on the flag variety ${\bf G}/{\bf B}$ enjoys: its pushforward under the $n$--th iteration of Frobenius morphism is a trivial vector bundle whose space of global sections is canonically identified with the Steinberg representation ${\sf St}_q$:
\begin{equation}\label{eq:Steinberg}
{\sf F}^n_{\ast}\Ll _{(q-1)\rho} = {\sf St}_q\otimes \Oo _{{\bf G}/{\bf B}}.
\end{equation}
\vspace*{0.2cm}
This was proven independently, and at around the same time, by Andersen in \cite{Ann} and by Haboush in \cite{Hab}. Returning to the weight $-\rho$, the isomorphism of vector bundles (\ref{eq:Steinberg}) is equivalent to saying that the line bundle $\Ll _{-\rho}$ is an ``eigenvector'' with respect to Frobenius morphism, i.e. ${\sf F}^n_{\ast}\Ll _{-\rho} = {\sf St}_q\otimes \Ll _{-\rho}$. This fact has many important consequences for representation theory of algebraic groups in characteristic $p$: in particular, the Kempf vanishing theorem \cite{Kem} easily follows from it (see \cite{Ann} and \cite{Hab}). The proofs of {\it loc.cit.} were essentially representation--theoretic. The goal of this note is to
prove isomorphism (\ref{eq:Steinberg}) using the structure of the derived category of coherent sheaves on the flag variety ${\bf G}/{\bf B}$. In a nutshell, the idea is as follows.
Given a smooth algebraic variety $X$ and a semiorthogonal decomposition $\langle \sf D_0,\sf D_1\rangle$ of the derived category $\Dd ^b(X)$ (see Section \ref{sec:Prelim} for the details), any object of $\Dd ^b(X)$ -- in particular, any vector bundle $\Ff$ on $X$ -- can be decomposed with respect to $\sf D_0$ and $\sf D_1$. Thus, if $\Ff$ is right orthogonal to $\sf D_1$, i.e. $\Hom _{\Dd ^b(X)}^{\cdot}(\sf D _1,\Ff)=0$, it automatically belongs to $\sf D_0$. It turns out that for a semiorthogonal decomposition of the derived category $\Dd ^b({\bf G}/{\bf B})$ into two pieces, one piece being the admissible subcategory $\langle \Ll _{-\rho}\rangle$ generated by the single line bundle $\Ll _{-\rho}$ and the other its left orthogonal $^{\perp}\langle \Ll _{-\rho}\rangle$, the bundle ${\sf F}^n_{\ast}\Ll _{-\rho}$ is right orthogonal to $^{\perp}\langle \Ll _{-\rho}\rangle$. Therefore, it should belong to the subcategory $\langle \Ll _{-\rho}\rangle$. Being generated by a single exceptional bundle, the latter subcategory is equivalent to the derived category of vector spaces over $\sk$; thus, one has ${\sf F}^n_{\ast}\Ll _{-\rho}=\Ll _{-\rho}\otimes {\sf V}$ for some graded vector space $\sf V$. Since the left hand side of this isomorphism is a vector bundle, i.e. a pure object of $\Dd ^b({\bf G}/{\bf B})$, the graded vector space $\sf V$ only has a non--trivial part in degree zero, which is a vector space of dimension $q^{\rm dim({\bf G}/{\bf B})}$. Tensoring both sides with $\Ll _{\rho}$ and taking cohomology, one obtains an isomorphism ${\sf V} = {\rm H}^0({\bf G}/{\bf B},\Ll _{(q-1)\rho})={\sf St}_q$, and hence isomorphism (\ref{eq:Steinberg}).
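For the reader's convenience, let us recall the standard fact behind the first step of this argument (with the conventions of Section \ref{sec:Prelim}). Given a semiorthogonal decomposition $\Dd ^b(X)=\langle {\sf D}_0,{\sf D}_1\rangle$, any object $\Ff$ fits into a distinguished triangle
\[
\Ff _1\rightarrow \Ff \rightarrow \Ff _0\rightarrow \Ff _1[1], \qquad \Ff _i\in {\sf D}_i.
\]
If, moreover, $\Hom _{\Dd ^b(X)}^{\cdot}({\sf D}_1,\Ff)=0$, then the map $\Ff _1\rightarrow \Ff$ vanishes, hence $\Ff _0\simeq \Ff \oplus \Ff _1[1]$; since $\Hom ^{\cdot}({\sf D}_1,{\sf D}_0)=0$, the summand $\Ff _1[1]$ of $\Ff _0$ must be zero, and thus $\Ff \simeq \Ff _0\in {\sf D}_0$.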
Unfolding this argument takes the rest of the note.
The key step consists of proving a special property of the semiorthogonal decomposition described above that allows one to check easily the orthogonality properties of the bundle ${\sf F}^n_{\ast}\Ll _{-\rho}$.
This is done in Section \ref{sec:main}. Theorem \ref{th:Haboushth}, which is equivalent to isomorphism (\ref{eq:Steinberg}), immediately follows from it.
The present note was initially motivated by the author's computations of Frobenius pushforwards of homogeneous vector bundles on flag varieties. The derived localization theorem of \cite{BMR} implies, in particular, that for a regular weight $\chi$ (that is, for a weight having trivial stabilizer with respect to the dot--action of the (affine) Weyl group)
the bundle ${\sf F}_{\ast}\Ll _{\chi}$ is a generator in the derived category $\Dd ^b({\bf G}/{\bf B})$; in other words, there are sufficiently many indecomposable summands of
${\sf F}_{\ast}\Ll _{\chi}$ to generate the whole derived category $\Dd ^b({\bf G}/{\bf B})$. Knowing the indecomposable summands of these bundles (e.g., for $p$--restricted weights) may clarify, in particular, cohomology vanishing patterns of line bundles on ${\bf G}/{\bf B}$. By contrast, the weight $-\rho$ being the most singular one, the thick subcategory generated by the bundle ${\sf F}_{\ast}\Ll _{-\rho}$ ``collapses'' to the subcategory generated by the single line bundle $\Ll _{-\rho}$, which is encoded in isomorphism (\ref{eq:Steinberg}).
\subsection*{\bf Acknowledgements}
We are indebted to Roman Bezrukavnikov, Michel Brion, Jim Humphreys, Nicolas Perrin, and Alexander Polishchuk for their advice and valuable suggestions, and to Michel Van den Bergh for his interest in this work. The author gratefully acknowledges support from the strategic research fund of the Heinrich-Heine-Universit\"at D\"usseldorf (grant SFF F-2015/946-8).
We would also like to thank the ICTP, Trieste for providing excellent working facilities and the Deutsche Bahn for a truly creative atmosphere on their lonely night IC trains in the Rheinland-Pfalz region.
\subsection*{Notation}
Given a split semisimple simply connected algebraic group $\bf G$ over a perfect field $\sk$, let $\bf T$ denote a maximal torus of $\bf G$, and let ${\bf B}\supset {\bf T}$ be a Borel subgroup containing $\bf T$. The flag variety of Borel subgroups in $\bf G$ is denoted ${\bf G/B}$. Denote ${\rm X}({\bf T})$ the weight lattice, and let $\rm R$ and $\rm R ^{\vee}$ denote the root and coroot lattices, respectively. The Weyl group ${\mathcal W}={\rm N}({\bf T})/{\bf T}$ acts on ${\rm X}({\bf T})$ via the dot--action: if $w\in {\mathcal W}$, and
$\lambda \in {\rm X}({\bf T})$, then $w\cdot \lambda = w(\lambda + \rho) - \rho$, where $\rho$ is the sum of the fundamental weights. Let $\rm S$ be the set of simple roots relative to the choice of a Borel subgroup that contains $\bf T$. A parabolic subgroup of $\bf G$ is usually denoted by $\bf P$; in particular, for a simple root $\alpha \in \rm S$, denote ${\bf P}_{\alpha}$ the minimal parabolic subgroup of $\bf G$ associated to $\alpha$.
Given a weight $\lambda \in {\rm X}({\bf T})$, denote $\Ll _{\lambda}$ the corresponding line bundle on ${\bf G}/{\bf B}$. Given a morphism $f:X\rightarrow Y$ between two schemes, we write $f_{\ast},f^{\ast}$ for the corresponding derived functors of push--forwards and pull--backs.
\vspace*{0.3cm}
\section{\bf Some preliminaries}\label{sec:Prelim}
\vspace*{0.3cm}
\subsection{\bf Flag varieties of Chevalley groups over ${\mathbb Z}$}
Let ${\mathbb G}\rightarrow \mathbb Z$ be a semisimple Chevalley group scheme (a smooth affine group scheme over ${\rm Spec}(\mathbb Z)$ whose geometric fibres are connected semisimple algebraic groups), and ${\mathbb G}/{\mathbb B}\rightarrow \mathbb Z$ be the corresponding Chevalley flag scheme (resp.,
the corresponding parabolic subgroup scheme ${\mathbb G}/{\mathbb P}\rightarrow \mathbb Z$ for a standard parabolic subgroup scheme ${\mathbb P}\subset \mathbb G$ over ${\mathbb Z}$). Then ${\mathbb G/\mathbb P}\rightarrow {\rm Spec}({\mathbb Z})$ is flat and any line bundle $\Ll$ on ${\bf G/P}$ also comes from a line bundle $\mathbb L$ on ${\mathbb G/\mathbb P}$. Let $\sk$ be a field of arbitrary characteristic, and ${\bf G/B}\rightarrow {\rm Spec}(\sk)$ be the flag variety obtained by base change along ${\rm Spec}(\sk)\rightarrow {\rm Spec}(\mathbb Z)$.
\vspace*{0.1cm}
\subsection{\bf Cohomology of line bundles on flag varieties}\label{subsec:cohlinbunflags}
We recall first the classical Bott's theorem (see \cite{Dem}). Let ${\mathbb G}\rightarrow \mathbb Z$ be a semisimple Chevalley group scheme as above. Assume given a weight $\chi \in X({\mathbb T})$, and let $\Ll _{\chi}$ be the corresponding line bundle on ${\mathbb G}/{\mathbb B}$. The weight $\chi$ is called {\it singular}, if it lies on a wall of some Weyl chamber defined by $\langle -, \alpha ^{\vee}\rangle =0$ for some coroot $\alpha ^{\vee}\in {\rm R}^{\vee}$. Weights, which are not singular, are called {\it regular}. A weight $\chi$ such that $\langle \chi ,\alpha ^{\vee}\rangle \geq 0$ for all simple coroots $\alpha ^{\vee}$ is called {\it dominant}. Let $\sk$ be a field of characteristic zero, and ${\bf G}/{\bf B}\rightarrow {\rm Spec}(\sk)$ the corresponding flag variety over $\sk$. The weight $\chi \in X({\bf T})$ defines a line bundle $\Ll _{\chi}$ on
${\bf G}/{\bf B}$.
\vspace*{0.2cm}
\begin{theorem}\cite[Theorem 2]{Dem}\label{th:Bott-Demazure_th}
\vspace*{0.2cm}
\begin{itemize}
\vspace*{0.2cm}
\item[(a)] If $\chi +\rho$ is singular, then ${\rm H}^i({\bf G}/{\bf B},\Ll _{\chi})= 0$ for all $i$.
\vspace*{0.2cm}
\item[(b)] If $\chi + \rho$ is regular and dominant, then ${\rm H}^i({\bf G}/{\bf B},\Ll _{\chi}) = 0$ for $i>0$.
\vspace*{0.2cm}
\item[(c)] If $\chi + \rho$ is regular, then ${\rm H}^i({\bf G}/{\bf B},\Ll _{\chi})\neq 0$ in a single degree $i=l(w)$, where $l(w)$ is the length of the unique element $w$ of the Weyl group that takes $\chi$ into the dominant chamber, i.e. $w\cdot \chi \in X_{+}({\bf T})$. The cohomology group ${\rm H}^{l(w)}({\bf G}/{\bf B},\Ll _{\chi})$ is the irreducible $\bf G$--module of highest weight $w\cdot \chi$.
\end{itemize}
\end{theorem}
\vspace*{0.2cm}
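\begin{remark}
{\rm For the reader's convenience, here is the simplest illustration of Theorem \ref{th:Bott-Demazure_th} (this standard example is ours and is not taken from \cite{Dem}). Let ${\bf G}={\rm SL}_2$, so that ${\bf G}/{\bf B}=\Pp ^1$ and $\Ll _{m\omega}=\Oo _{\Pp ^1}(m)$ for the fundamental weight $\omega$ and $m\in \mathbb Z$. Here $\rho =\omega$, and $\chi +\rho$ corresponds to the integer $m+1$. The singular case $m=-1$ gives the acyclic line bundle $\Oo _{\Pp ^1}(-1)$; for $m\geq 0$ one has ${\rm H}^0(\Pp ^1,\Oo (m))$ of dimension $m+1$ and no higher cohomology; finally, for $m\leq -2$ the cohomology is concentrated in degree $1=l(s_{\alpha})$, and ${\rm H}^1(\Pp ^1,\Oo (m))$ is the irreducible module of highest weight $s_{\alpha}\cdot m\omega =(-m-2)\omega$, of dimension $-m-1$.}
\end{remark}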
\begin{remark}\label{rem:Demazure_bits_over_Z}
{\rm Some bits of Theorem \ref{th:Bott-Demazure_th} are still true over $\mathbb Z$: if a weight
$\chi$ is such that $\langle \chi + \rho, \alpha ^{\vee}\rangle =0$ for some simple root $\alpha$, then the corresponding line bundle is acyclic. Indeed, the Lemma from
\cite[Section 2]{Dem} holds over fields of arbitrary characteristic. Besides this, however, very little of Theorem \ref{th:Bott-Demazure_th} holds over $\mathbb Z$ (see \cite[Part II, Chapter 5]{Jan}).}
\end{remark}
\vspace*{0.1cm}
From now on, unless specified otherwise, the base field $\sk$ is assumed to be a perfect field of characteristic $p>0$.
\subsection{\bf Kempf's vanishing theorem}\label{subsec:Kempf_vanishing}
Kempf's vanishing theorem, originally proven by Kempf in \cite{Kem}, and subsequently by Andersen \cite{Ann} and Haboush \cite{Hab} with shorter representation--theoretic proofs (see also \cite[Part II, Chapter 4]{Jan}), states that given a dominant weight $\chi \in X(\bf T)$, the cohomology groups ${\rm H}^i({\bf G}/{\bf B},\Ll _{\chi})$ vanish in positive degrees, i.e. ${\rm H}^i({\bf G}/{\bf B},\Ll _{\chi})=0$ for $i>0$. This theorem is ubiquitous in representation theory of algebraic groups in characteristic $p$.
For convenience of the reader, we briefly recall how it can be obtained from the main isomorphism ${\sf F}^n_{\ast}\Ll _{(q-1)\rho} = {\sf St}_q\otimes \Oo _{{\bf G}/{\bf B}}$ (recall that $q=p^n$ for $n\in \mathbb N$). From $\langle \chi ,\alpha ^{\vee}\rangle \geq 0$ one obtains $\langle \chi + \rho ,\alpha ^{\vee}\rangle > 0$ for all simple coroots $\alpha ^{\vee}$. By \cite[Part II, Proposition 4.4]{Jan}, the line bundle $\Ll _{\chi +\rho}$ is ample on ${\bf G}/{\bf B}$. Consider the weight $q(\chi + \rho) - \rho = q\chi + (q-1)\rho$. Since $\Ll _{\chi +\rho}$ is ample, one can choose $n\in \mathbb N$ large enough so that the line bundle $\Ll _{q(\chi + \rho)}$ is very ample. Since $\Ll _{q\chi}={\sf F}^{n\ast}\Ll _{\chi}$, the projection formula gives ${\sf F}^n_{\ast}\Ll _{q\chi +(q-1)\rho}=\Ll _{\chi}\otimes {\sf F}^n_{\ast}\Ll _{(q-1)\rho}$, and it follows that
\vspace*{0.2cm}
\begin{equation}
{\rm H}^i({\bf G}/{\bf B},\Ll _{q\chi + (q-1)\rho})={\rm H}^i({\bf G}/{\bf B},\Ll _{\chi}\otimes {\sf F}^n_{\ast}\Ll _{(q-1)\rho})={\rm H}^i({\bf G}/{\bf B},\Ll _{\chi})\otimes {\sf St}_q.
\end{equation}
\vspace*{0.3cm}
Now the left hand side group vanishes for $i>0$ by Serre's vanishing, the line bundle $\Ll _{q(\chi + \rho)}$ being very ample. Hence, ${\rm H}^i({\bf G}/{\bf B},\Ll _{\chi})=0$ for $i>0$ as well.
\subsection{Derived categories of coherent sheaves}\label{subsec:dercatcohsheaves}
\vspace*{0.1cm}
The content of this section can be found, e.g., in \cite[Section 1.2, 1.4]{Huyb}.
Let $\sk$ be a field. Assume given a $\sk$--linear triangulated category ${\sf D}$, equipped with a shift functor $[1]\colon {\sf D}\rightarrow {\sf D}$. For two
objects $A, B \in {\sf D}$ let $\Hom ^{\bullet}_{\sf D}(A,B)$ be
the graded $\sk$-vector space $\oplus _{i\in \mathbb Z}\Hom _{\sf
D}(A,B[i])$.
Let ${\sf A}\subset {\sf D}$ be a full triangulated subcategory,
that is a full subcategory of ${\sf D}$ which is closed under shifts and forming distinguished triangles.
\begin{definition}\label{def:orthogonalcat}
The right orthogonal ${\sf A}^{\perp}\subset \sf D$ is defined to be
the full subcategory
\vspace*{0.2cm}
\begin{equation}
{\sf A}^{\perp} = \{B \in {\sf D}\colon \Hom _{\sf D}(A,B) = 0 \mbox{ for all } A \in {\sf A} \}
\end{equation}
\vspace*{0.2cm}
\noindent The left orthogonal $^{\perp}{\sf A}$ is defined similarly.
\end{definition}
\begin{definition}\label{def:admissible}
A full triangulated subcategory ${\sf A}$ of ${\sf D}$ is called
{\it right admissible} if the inclusion functor ${\sf A}\hookrightarrow {\sf
D}$ has a right adjoint. Similarly, ${\sf A}$ is called {\it left
admissible} if the inclusion functor has a left adjoint. Finally,
${\sf A}$ is {\it admissible} if it is both right and
left admissible.
\end{definition}
If a full triangulated category ${\sf A}\subset {\sf D}$ is right admissible then every object $X\in {\sf D}$ fits into a distinguished triangle
\vspace*{0.2cm}
\begin{equation}
\dots \longrightarrow Y\longrightarrow X\longrightarrow Z\longrightarrow Y[1]\longrightarrow \dots
\end{equation}
\vspace*{0.2cm}
\noindent with $Y\in {\sf A}$ and $Z\in {\sf A}^{\perp}$. One then
says that there is a semiorthogonal decomposition of ${\sf D}$ into
the subcategories $({\sf A}^{\perp}, \ {\sf A})$. More generally,
assume given a sequence of full triangulated subcategories ${\sf
A}_1,\dots,{\sf A}_n \subset {\sf D}$. Denote $\langle {\sf
A}_1,\dots,{\sf A}_n\rangle$ the triangulated subcategory of ${\sf
D}$ generated by ${\sf A}_1,\dots,{\sf A}_n$.
\begin{definition}\label{def:semdecomposition}
A sequence $({\sf A}_1,\dots,{\sf A}_n)$ of admissible subcategories of
${\sf D}$ is called {\it semiorthogonal} if ${\sf
A}_i\subset {\sf A}_j^{\perp}$ for $1\leq i < j\leq n$,
and ${\sf A}_i\subset {^{\perp}{\sf A}_j}$ for $1\leq j < i\leq n$.
The sequence $({\sf A}_1,\dots,{\sf A}_n)$ is called a {\it semiorthogonal
decomposition} of ${\sf D}$ if $\langle {\sf A}_1, \dots, {\sf A}_n
\rangle^{\perp} = 0$, that is ${\sf D} = \langle {\sf A}_1,\dots,{\sf A}_n\rangle$.
\end{definition}
\vspace{0.2cm}
\begin{lemma}\label{lem:admiss_orthogonal}
For a semi--orthogonal decomposition ${\sf D}= \langle {\sf A} ,{\sf B}\rangle$, the subcategory ${\sf A}$ is left admissible and the subcategory ${\sf B}$ is right admissible. Conversely, if ${\sf A}\subset \sf D$ is left (resp. right) admissible, then there is a semi--orthogonal decomposition ${\sf D}=\langle {\sf A}, ^{\perp}{\sf A}\rangle$ (resp. ${\sf D}=\langle {\sf A}^{\perp}, {\sf A}\rangle$).
\end{lemma}
\begin{definition}\label{def:exceptcollection}
An object $E \in \sf D$ of a $\sk$--linear triangulated category $\sf D$ is said to be exceptional if there is an isomorphism of graded $\sk$-algebras
\vspace{0.2cm}
\begin{equation}
\Hom _{\sf D}^{\bullet}(E,E) = \sk.
\end{equation}
\vspace{0.2cm}
A collection of exceptional objects $(E_0,\dots,E_n)$ in $\sf D$ is called
exceptional if for $1 \leq i < j \leq n$ one has
\vspace{0.2cm}
\begin{equation}
\Hom _{\sf D}^{\bullet}(E_j,E_i) = 0.
\end{equation}
\vspace{0.2cm}
\end{definition}
Denote by $\langle E_0,\dots,E_n \rangle \subset {\sf D}$ the full
triangulated subcategory generated by the exceptional objects $E_0,\dots,E_n$. One
proves \cite[Lemma 1.58]{Huyb} that such a category is admissible. \\
Given a smooth algebraic variety $X$ over a field $\sk$, denote $\Dd ^b(X)$ the bounded derived category of coherent sheaves, and let $\Dd ({\rm QCoh}(X))$ denote
the unbounded derived category of quasi--coherent sheaves. These are $\sk$--linear triangulated categories. Let $\Ee$ be a vector bundle of rank $r$ on $X$, and consider the associated projective bundle $\pi : \Pp (\Ee)\rightarrow X$. Denote $\Oo _{\pi}(-1)$ the line bundle on $\Pp (\Ee)$ of relative degree $-1$, such that $\pi _{\ast}\Oo _{\pi}(1)=\Ee ^{\ast}$.
One has \cite[Corollary 8.36]{Huyb}:
\vspace{0.2cm}
\begin{theorem}\label{th:Orvlovth}
The category $\Dd ^b(\Pp (\Ee))$ has a semiorthogonal decomposition:
\vspace{0.1cm}
\begin{equation}
\Dd ^b(\Pp (\Ee)) = \langle \pi ^{\ast}\Dd ^b(X)\otimes \Oo _{\pi}(-r+1),\dots , \pi ^{\ast}\Dd ^b(X)\otimes \Oo _{\pi}(-1),\pi ^{\ast}\Dd ^b(X)\rangle .
\end{equation}
\vspace{0.1cm}
\end{theorem}
\vspace{0.2cm}
We also need some basic facts about generators in triangulated categories (see \cite{Neem}).
\begin{definition}
Let $\sf D$ be a $\sk$--linear triangulated category. An object $C$ of $\sf D$ is called compact if for any coproduct of objects one has
$\Hom _{\sf D}(C,\coprod _{\lambda \in \Lambda} X_{\lambda}) = \coprod _{\lambda \in \Lambda}\Hom _{\sf D}(C,X_{\lambda})$.
\end{definition}
\begin{definition}
A $\sk$--linear triangulated category $\sf D$ is called compactly generated if $\sf D$ contains small coproducts, and there exists a small set $\sf T$ of compact objects of $\sf D$, such that $\Hom _{\sf D}({\sf T},X) = 0$ implies $X = 0$. In other words, if $X$ is an object of $\sf D$, and for every $T\in \sf T$ one has $\Hom _{\sf D}(T,X) = 0$, then $X$ must be the zero object.
\end{definition}
\begin{definition}
Let $\sf D$ be a compactly generated triangulated category. A set $\sf T$
of compact objects of $\sf D$ is called a generating set if $\Hom _{\sf D}({\sf T},X)=0$ implies $X=0$ and $\sf T$ is closed under the shift functor, i.e. ${\sf T} = {\sf T}[1]$.
\end{definition}
\begin{definition}
Let $X$ be a quasi-compact, separated scheme. An object $C\in \Dd ({\rm QCoh}(X))$ is called perfect if, locally on $X$, it is isomorphic to a bounded complex of locally free sheaves of finite type.
\end{definition}
\begin{proposition}{\cite[Example 1.10]{Neem}}\label{prop:ample_line_bundle_gen_set}
Let $X$ be a quasi--compact, separated scheme, and $\Ll$ be an ample line bundle on $X$. Then the set $\langle \Ll ^{\otimes m}[n]\rangle, m,n\in \mathbb Z$ is a generating set for $\Dd ({\rm QCoh}(X))$.
\end{proposition}
Finally, recall that given two smooth varieties $X$ and $Y$ over $\sk$, an object
$\mathcal P \in \Dd ^b(X\times Y)$ gives rise to an integral transform $\Phi _{\mathcal P}(-): = {\pi _Y}_{\ast}(\pi _{X}^{\ast}(-)\otimes \mathcal P)$ between
$\Dd ^b(X)$ and $\Dd ^b(Y)$, where $\pi _{X}, \pi _{Y}$ are the projections of $X\times Y$ onto corresponding factors.
\begin{proposition}{\cite[Proposition 5.1]{Huyb}}\label{prop:F-M_composition}
Let $X, Y,$ and $Z$ be smooth projective varieties over a field $\sk$. Consider objects $\mathcal P\in \Dd ^b(X\times Y)$ and $\mathcal Q\in \Dd ^b(Y\times Z)$. Define the object $\mathcal R\in \Dd ^b(X\times Z)$ by the formula
${\pi _{XZ}}_{\ast}(\pi _{XY}^{\ast}\mathcal P\otimes \pi _{YZ}^{\ast}\mathcal Q)$, where $\pi _{XZ}, \pi _{XY}$, and $\pi _{YZ}$ are the projections from
$X\times Y\times Z$ to $X\times Z$ (resp., to $X\times Y$, resp., to $Y\times Z$).
Then the composition $\Phi _{\mathcal Q}\circ \Phi _{\mathcal P}: \Dd ^b(X)\rightarrow \Dd ^b(Z)$ is isomorphic to the integral transform $\Phi _{\mathcal R}$.
\end{proposition}
\vspace{0.3cm}
\section{\bf Semiorthogonal decompositions for flag varieties}\label{sec:main}
\vspace*{0.3cm}
In order to prove Lemma \ref{lem:mainlemma} below, the key statement of this section, we need an auxiliary proposition which is a derived category counterpart of the main theorem of \cite{CPS}.
\begin{proposition}\label{prop:derverDemazure_char_for}
Let $\pi : {\bf G}/{\bf B}\rightarrow {\rm Spec}(\sk)$ be the structure morphism, and for a simple root $\alpha _i$ denote $\pi _{\alpha _i}: {\bf G}/{\bf B}\rightarrow {\bf G}/{\bf P}_{\alpha _i}$ the projection, a $\Pp ^1$--bundle over ${\bf G}/{\bf P}_{\alpha _i}$. Let $w_0$ be the longest element of $\mathcal W$, and let $s_{\alpha _1}s_{\alpha _2}\cdots s_{\alpha _N}$ be a reduced expression of $w_0$. Then there is an isomorphism of functors:
\vspace*{0.2cm}
\begin{equation}\label{eq:WCF_dercat}
\pi ^{\ast}\pi _{\ast} = \pi _{\alpha _N}^{\ast}{\pi _{\alpha _N}}_{\ast}\pi _{\alpha _{N-1}}^{\ast}{\pi _{\alpha _{N-1}}}_{\ast}\dots \pi _{\alpha _1}^{\ast}{\pi _{\alpha _1}}_{\ast} .
\end{equation}
\vspace*{0.2cm}
\end{proposition}
\begin{proof}
Denote $\mathcal Z$ the fibered product ${\bf G}/{\bf B}\times _{{\bf G}/{\bf P}_{\alpha _1}}{\bf G}/{\bf B}\times \dots \times _{{\bf G}/{\bf P}_{\alpha _N}}{\bf G}/{\bf B}$, and let $p:\mathcal Z\rightarrow {\bf G}/{\bf B}\times {\bf G}/{\bf B}$ denote the projection onto the two extreme factors. Then, by Proposition \ref{prop:F-M_composition}, the functor in the right hand side of (\ref{eq:WCF_dercat}) is given by an integral transform whose kernel is isomorphic to the direct image $p_{\ast}\Oo _{\mathcal Z}\in \Dd ^b({\bf G}/{\bf B}\times {\bf G}/{\bf B})$. Observe that $\mathcal Z$ is isomorphic to ${\bf G}\times _{\bf B}Z_{w_0}$, where $Z_{w_0}: = {\bf P}_{\alpha _1}\times \dots \times {\bf P}_{\alpha _N}/{\bf B}^N$ is the Demazure variety corresponding to the reduced expression of $w_0$ as above. Indeed, by definition of these varieties \cite[Definition 2.2.1]{BK} and [Diagram ($\mathcal D$), p.66] of {\it loc.cit.}, one has an isomorphism
$({\bf G}\times _{\bf B}Z_{w_0s_{\alpha _N}})\times _{{\bf G}/{\bf P}_{\alpha _N}}{\bf G}/{\bf B}= {\bf G}\times _{\bf B}(Z_{w_0s_{\alpha _N}}\times _{{\bf G}/{\bf P}_{\alpha _N}}{\bf G}/{\bf B})={\bf G}\times _{\bf B}Z_{w_0}$.
The projection $p$ maps ${\mathcal Z}={\bf G}\times _{\bf B}Z_{w_0}$ onto ${\bf G}/{\bf B}\times {\bf G}/{\bf B}$. Consider the base change of $p$ along the quotient morphism $q: {\bf G}\times {\bf G}/{\bf B}\rightarrow {\bf G}/{\bf B}\times {\bf G}/{\bf B}$: one obtains the projection $p':{\bf G}\times Z_{w_0}\rightarrow {\bf G}\times {\bf G}/{\bf B}$ that factors as ${\rm id}\times d$, where $d: Z_{w_0}\rightarrow {\bf G}/{\bf B}$ is the projection. By Proposition \ref{prop:Demazure_vanishing} below, $p'_{\ast}\Oo _{{\bf G}\times Z_{w_0}}=\Oo _{{\bf G}\times {\bf G}/{\bf B}}$. Since the quotient morphism ${\bf G}\rightarrow {\bf G}/{\bf B}$ is flat, by flat base change one obtains $q^{\ast}p_{\ast}\Oo _{\mathcal Z}=p'_{\ast}\Oo _{{\bf G}\times Z_{w_0}}=\Oo _{{\bf G}\times {\bf G}/{\bf B}}$. Applying $q_{\ast}$ to $q^{\ast}p_{\ast}\Oo _{\mathcal Z}$ and using the projection formula, one arrives at an isomorphism $q_{\ast}(q^{\ast}p_{\ast}\Oo _{\mathcal Z})=p_{\ast}\Oo _{\mathcal Z}\otimes q_{\ast}\Oo _{{\bf G}\times {\bf G}/{\bf B}}=q_{\ast}\Oo _{{\bf G}\times {\bf G}/{\bf B}}$. It follows that $p_{\ast}\Oo _{\mathcal Z}$ is an invertible sheaf on ${\bf G}/{\bf B}\times {\bf G}/{\bf B}$ isomorphic to $\Oo _{{\bf G}/{\bf B}}\boxtimes \Ll$ for some line bundle $\Ll$ on ${\bf G}/{\bf B}$. Applying the integral transform $\Phi _{w_0}=\Phi _{w_0^{-1}}=\Phi _{p_{\ast}\Oo _{\mathcal Z}}$ to $\Oo _{{\bf G}/{\bf B}}$, one obtains $\Phi _{w_0^{-1}}(\Oo _{{\bf G}/{\bf B}})=\pi _{\alpha _1}^{\ast}{\pi _{\alpha _1}}_{\ast}\pi _{\alpha _{2}}^{\ast}{\pi _{\alpha _{2}}}_{\ast}\dots \pi _{\alpha _N}^{\ast}{\pi _{\alpha _N}}_{\ast}(\Oo _{{\bf G}/{\bf B}}) =
\Oo _{{\bf G}/{\bf B}}=\Phi _{\Oo _{{\bf G}/{\bf B}}\boxtimes \Ll}(\Oo _{{\bf G}/{\bf B}})=\pi _{\ast}\Oo _{\bf G/B}\otimes
\Ll ={\mathbb H}^{\cdot}({\bf G/B},\Oo _{\bf G/B})\otimes \Ll=\Ll$, where the last isomorphism follows from Corollary \ref{cor:admisscor} below. Therefore, $p_{\ast}\Oo _{\mathcal Z} = \Oo _{{\bf G}/{\bf B}\times {\bf G}/{\bf B}}$. Finally, by flat base change for the morphism $\pi : {\bf G}/{\bf B}\rightarrow {\rm Spec}(\sk)$ along itself, the integral transform $\Phi _{\Oo _{{\bf G}/{\bf B}\times {\bf G}/{\bf B}}}$ is isomorphic to $\pi ^{\ast}\pi _{\ast}$.
\end{proof}
\begin{proposition}\label{prop:Demazure_vanishing}
Let $d: Z_{w_0}\rightarrow {\bf G}/{\bf B}$ be the projection map as above. Then $d_{\ast}\Oo _{Z_{w_0}}=\Oo _{{\bf G}/{\bf B}}$.
\end{proposition}
\begin{proof}
Given an element $w\in \mathcal W$, denote the corresponding Schubert variety by $X_w$, and let $d_w: Z_w= {\bf P}_{\alpha _1}\times \dots \times {\bf P}_{\alpha _n}/{\bf B}^n\rightarrow X_w$ denote the Demazure desingularization. Then $d=d_{w_0}: Z_{w_0}\rightarrow {\bf G}/{\bf B}$ is a birational morphism onto ${\bf G}/{\bf B}$, since $X_{w_0}={\bf G}/{\bf B}$. The flag variety being smooth, hence normal, by Zariski's main theorem one has ${\rm R}^0d_{\ast}\Oo _{Z_{w_0}}=\Oo _{{\bf G}/{\bf B}}$. To prove the vanishing of higher direct images ${\rm R}^id_{\ast}\Oo _{Z_{w_0}}$ for $i>0$, one can argue as in \cite[Theorem 3.3.4, (b)]{BK}. More specifically,
one argues by induction on the length $l(w)$ of an element $w\in \mathcal W$ to prove that ${\rm R}^i{d_w}_{\ast}\Oo _{Z_w}=0$ for $i\geq 1$; if $l(w)=1$ then
$d_w$ is an isomorphism. Given a reduced expression $s_{\alpha _1}\dots s_{\alpha _n}$ of $w$, set $v=s_{\alpha _2}\dots s_{\alpha _n}$, and
consider the factorization of $d_w$ as $d _{\alpha _1v}: Z_w={\bf P}_{\alpha _1}\times _{\bf B}Z_{v}\rightarrow {\bf P}_{\alpha _1}\times _{\bf B}X_{v}$ followed by the product morphism $f: {\bf P}_{\alpha _1}\times _{\bf B}X_{v}\rightarrow X_{w}$. Then, by induction one obtains ${\rm R}^i{d_v}_{\ast}\Oo _{Z_v}=0$ for $i\geq 1$ which implies ${{\rm R}^id _{\alpha _1v}}_{\ast}\Oo _{Z_w}=0$ for $i\geq 1$ as well. Finally, \cite[Proposition 3.2.1, (b)]{BK} implies that for the product morphism $f: {\bf P}\times _{\bf B}X_v\rightarrow {\bf P}X_v$, where $\bf P$ is the minimal parabolic subgroup corresponding to a simple reflection,
the higher direct images ${\rm R}^if_{\ast}\Oo _{{\bf P}\times _{\bf B}X_v}$ are trivial for $i\geq 1$. Hence, for the composed morphism $d_w$ the higher direct images ${\rm R}^i{d _{w}}_{\ast}\Oo _{Z_w}=0$ are trivial for $i\geq 1$.
\end{proof}
\begin{corollary}\label{cor:admisscor}
One has ${\rm H}^i({\bf G/B},\Oo _{\bf G/B})=0$ for $i>0$. Given a line bundle $\Ll$ on ${\bf G}/{\bf B}$, the triangulated subcategory $\langle \Ll\rangle$ of $\Dd ^b(\bf G/B)$ generated by $\Ll$ is admissible.
\end{corollary}
\begin{proof}
By Proposition \ref{prop:Demazure_vanishing}, one has $d_{\ast}\Oo _{Z_{w_0}}=\Oo _{\bf G/B}$. On the other hand, by its construction, the variety $Z_{w_0}$ is an iterated $\Pp ^1$--bundle over a point; hence, ${\rm H}^i(Z_{w_0},\Oo _{Z_{w_0}})=0$ for $i>0$. Therefore, ${\rm H}^i({\bf G/B},\Oo _{\bf G/B})={\rm H}^i({\bf G/B},d_{\ast}\Oo _{Z_{w_0}})={\rm H}^i(Z_{w_0},\Oo _{Z_{w_0}})=0$ for $i>0$.
It follows from Section \ref{subsec:dercatcohsheaves} that the category $\langle \Ll\rangle$ is admissible once the bundle $\Ll$ is exceptional, i.e. $\Hom _{\bf G/B}^{\bullet}(\Ll ,\Ll)=\sk$. The latter condition is equivalent to ${\rm H}^i({\bf G/B},\Oo _{\bf G/B})=0$ for $i>0$.
\end{proof}
\begin{lemma}\label{lem:mainlemma}
Consider the semiorthogonal decomposition of $\Dd ^b(\bf G/B) = \langle \langle \Oo _{{\bf G}/{\bf B}}\rangle ^{\perp},\langle \Oo _{{\bf G}/{\bf B}}\rangle \rangle$. Then the subcategory $\langle \Oo _{{\bf G}/{\bf B}}\rangle^{\perp}\subset \Dd ^b(\bf G/B)$ is generated, as an admissible triangulated subcategory of $\Dd ^b({\bf G}/{\bf B})$, by acyclic line bundles $\Ll _{\chi}$ with the following property: there exists a simple coroot $\alpha ^{\vee}\in {\rm R}^{\vee}$, such that $\langle \chi + \rho, \alpha ^{\vee}\rangle =0$.
\end{lemma}
\begin{remark}
{\rm The generating set of the subcategory $\langle \Oo _{{\bf G}/{\bf B}}\rangle ^{\perp}$ in Lemma \ref{lem:mainlemma} is not at all minimal.
}
\end{remark}
\begin{proof}
By Corollary \ref{cor:admisscor}, the category $\langle \Oo _{{\bf G}/{\bf B}}\rangle$ is admissible, hence its right orthogonal is an admissible subcategory of $\Dd ^b(\bf G/B)$.\\
Given a simple root $\alpha$, consider the corresponding minimal parabolic subgroup ${\bf P}_{\alpha}$, and let $\pi _{\alpha}:{\bf G}/{\bf B}\rightarrow {\bf G}/{\bf P}_{\alpha}$ denote the projection.
Observe first that $\langle \Oo _{{\bf G}/{\bf B}}\rangle ^{\perp}$ contains the subcategory generated by $\pi _{\alpha}^{\ast}{\pi _{\alpha}}_{\ast}\Ff \otimes \Ll _{-\rho}$, where $\Ff \in \Dd ^b({\bf G}/{\bf B})$. Indeed,
\begin{equation}
\Hom _{{\bf G}/{\bf B}}^{\bullet}(\Oo _{{\bf G}/{\bf B}},\pi _{\alpha}^{\ast}{\pi _{\alpha}}_{\ast}\Ff \otimes \Ll _{-\rho}) = {\mathbb H}^{\ast}({\bf G}/{\bf P}_{\alpha},{\pi _{\alpha}}_{\ast}\Ff\otimes {\pi _{\alpha}}_{\ast}\Ll _{-\rho}) =0,
\end{equation}
\vspace{0.2cm}
as ${\pi _{\alpha}}_{\ast}\Ll _{-\rho}=0$. Let ${\mathcal C}\subset \Dd ^b({\bf G}/{\bf B})$ be the full triangulated category generated by $\pi _{\alpha}^{\ast}{\pi _{\alpha}}_{\ast}\Ff \otimes \Ll _{-\rho}$, where $\Ff \in \Dd ^b({\bf G}/{\bf B})$ and $\alpha$ runs over the set of all the simple roots.
Observe next that ${\mathcal C}$ coincides with the triangulated subcategory generated by line bundles satisfying the condition of Lemma \ref{lem:mainlemma}: on the one hand, given a simple root $\alpha$, any line bundle $\Ll _{\chi}$ on ${\bf G}/{\bf P}_{\alpha}$ satisfies $\langle \chi,\alpha ^{\vee}\rangle =0$, and the projection functor ${\pi _{\alpha}}_{\ast}:\Dd ^b({\bf G}/{\bf B})\rightarrow
\Dd ^b({\bf G}/{\bf P}_{\alpha})$ is surjective. Hence, ${\mathcal C}\supset \langle \Ll _{\chi}\rangle$ with $\langle \chi + \rho, \alpha ^{\vee}\rangle =0$ for a simple coroot $\alpha ^{\vee}$. On the other hand, upon choosing an ample line bundle $\Ll$ on ${\bf G}/{\bf P}_{\alpha}$, the category $\Dd ^b({\bf G}/{\bf P}_{\alpha})$ is generated by the set $\langle \Ll ^{\otimes m}[n]\rangle, m,n\in \mathbb Z$ by virtue of Proposition \ref{prop:ample_line_bundle_gen_set}, and one obtains the converse inclusion ${\mathcal C}\subset \langle \Ll _{\chi}\rangle$ with
$\chi$ as in the statement of the lemma.\\
Consider its left orthogonal $^{\perp}{\mathcal C}\subset \Dd ^b({\bf G}/{\bf B})$.
By Lemma \ref{lem:admiss_orthogonal}, it is an admissible subcategory of $\Dd ^b({\bf G}/{\bf B})$. The same lemma implies that the statement of Lemma \ref{lem:mainlemma} is equivalent to saying that the category $^{\perp}{\mathcal C}$ is equivalent to $\langle \Oo _{{\bf G}/{\bf B}}\rangle$. To this end, observe that any object $\mathcal G$ of $^{\perp}{\mathcal C}$
belongs to $\pi _{\alpha}^{\ast}\Dd ^b({\bf G}/{\bf P}_{\alpha})$ for each simple root $\alpha$; in other words, $^{\perp}{\mathcal C}\subset \bigcap _{\alpha}\pi _{\alpha}^{\ast}\Dd ^b({\bf G}/{\bf P}_{\alpha})$. Indeed, by Serre duality
\vspace{0.2cm}
\begin{equation}
\Hom _{{\bf G}/{\bf B}}^{\bullet}(\mathcal G,\pi _{\alpha}^{\ast}{\pi _{\alpha}}_{\ast}\Ff \otimes \Ll _{-\rho}) = \Hom _{{\bf G}/{\bf B}}^{\bullet}({\pi _{\alpha}}_{\ast}\Ff,{\pi _{\alpha}}_{\ast} (\mathcal G\otimes \Ll _{-\rho})[{\rm dim}({\bf G}/{\bf B})])^{\ast}=0;
\end{equation}
\vspace{0.2cm}
since $\Ff$ is arbitrary and the functor ${\pi _{\alpha}}_{\ast}$ is surjective, it follows that ${\pi _{\alpha}}_{\ast} (\mathcal G\otimes \Ll _{-\rho})=0$. By Theorem \ref{th:Orvlovth}, this implies $\mathcal G\in \pi _{\alpha }^{\ast}\Dd ^b({\bf G}/{\bf P}_{\alpha})$, the line bundle $\Ll _{-\rho}$ having degree $-1$ along $\pi _{\alpha}$.
Let $\sf M$ be an object of $^{\perp}{\mathcal C}$. By the above, $\sf M = \pi ^{\ast}_{\alpha}{\pi _{\alpha}}_{\ast}\sf M$ for any simple root $\alpha$. By Proposition
\ref{prop:derverDemazure_char_for}, one has an isomorphism
$\pi ^{\ast}\pi _{\ast}{\sf M} = \pi _{\alpha _N}^{\ast}{\pi _{\alpha _N}}_{\ast}\pi _{\alpha _{N-1}}^{\ast}{\pi _{\alpha _{N-1}}}_{\ast}\dots \pi _{\alpha _1}^{\ast}{\pi _{\alpha _1}}_{\ast}{\sf M}={\sf M}$, hence ${\sf M}\in \langle \Oo _{{\bf G}/{\bf B}}\rangle$.
\end{proof}
\begin{corollary}\label{cor:maincorollary}
Consider the semiorthogonal decomposition of $\Dd ^b(\bf G/B) = \langle \langle \Ll _{-\rho}\rangle ,^{\perp} \langle \Ll _{-\rho}\rangle \rangle$. Then the category $^{\perp} \langle \Ll _{-\rho}\rangle$ is generated, as an admissible triangulated subcategory of $\Dd ^b({\bf G}/{\bf B})$, by the set of line bundles $\Ll _{\chi}$,
where $\langle \chi,\alpha ^{\vee}\rangle =0$ for some simple coroot $\alpha ^{\vee}\in {\rm R}^{\vee}$.
\end{corollary}
\begin{proof}
Let $\Ee \in {}^{\perp}\langle \Ll _{-\rho}\rangle$; then by Serre duality
\begin{eqnarray}
& \Hom _{{\bf G}/{\bf B}}^{\bullet}(\Ee,\Ll _{-\rho})= \Hom _{{\bf G}/{\bf B}}^{\bullet}(\Ll _{-\rho},\Ee \otimes \Ll _{-2\rho}[{\rm dim}({\bf G}/{\bf B})])^{\ast} = \\
& \Hom _{{\bf G}/{\bf B}}^{\bullet}(\Oo _{{\bf G}/{\bf B}},\Ee \otimes \Ll _{-\rho}[{\rm dim}({\bf G}/{\bf B})])^{\ast}=0.\nonumber
\end{eqnarray}
\vspace{0.2cm}
Therefore, up to a twist by the line bundle $\Ll _{\rho}$, the category $^{\perp} \langle \Ll _{-\rho}\rangle$ is equivalent to the subcategory $\langle \Oo _{{\bf G}/{\bf B}}\rangle ^{\perp}$. Lemma \ref{lem:mainlemma} implies the statement.
\end{proof}
\vspace*{0.3cm}
\section{\bf The Steinberg line bundle}
\vspace*{0.3cm}
Consider the admissible subcategory $\langle \Ll _{-\rho}\rangle$ of $\Dd ^b(\bf G/B)$.
It follows from the above that the isomorphism ${\sf F}^n_{\ast}\Ll _{-\rho} = {\sf St}_q\otimes \Ll _{-\rho}$ is equivalent to the following statement:
\begin{theorem}\label{th:Haboushth}
One has ${\sf F}^n_{\ast}\Ll _{-\rho}\in \langle \Ll _{-\rho}\rangle$.
\end{theorem}
\begin{proof}
By Corollary \ref{cor:maincorollary}, the fact that the bundle ${\sf F}^n_{\ast}\Ll _{-\rho}$ belongs to the subcategory $\langle \Ll _{-\rho}\rangle\subset \Dd ^b(\bf G/B)$ is equivalent to saying that ${\sf F}^n_{\ast}\Ll _{-\rho}$ is right orthogonal to the subcategory
$\langle \Ll _{\chi}\rangle$ generated by all $\Ll _{\chi}$, where $\langle \chi,\alpha ^{\vee}\rangle =0$ for some simple coroot $\alpha ^{\vee}\in {\rm R}^{\vee}$. In other words, one has to ensure that
\begin{equation}\label{eq:Steinbergisom}
\Hom ^{\bullet}_{\bf G/B}( \Ll _{\chi},{\sf F}^n_{\ast}\Ll _{-\rho})={\mathbb H}^{\ast}({\bf G/B},\Ll _{-p^n\chi - \rho})=0.
\end{equation}
\vspace{0.2cm}
By Remark \ref{rem:Demazure_bits_over_Z}, the line bundle $\Ll _{\mu}$ is acyclic if $\langle \mu +\rho,\alpha ^{\vee}\rangle = 0$ for some simple coroot $\alpha ^{\vee}$. Taking $\mu = -p^n\chi - \rho$, one obtains $\langle \mu +\rho,\alpha ^{\vee}\rangle =
\langle -p^n\chi - \rho +\rho,\alpha ^{\vee}\rangle = -p^n\langle \chi,\alpha ^{\vee}\rangle =0$. Hence, the bundle $\Ll _{-p^n\chi - \rho}$ is acyclic, and (\ref{eq:Steinbergisom}) holds.
\end{proof}
\vspace{0.3cm} | {"config": "arxiv", "file": "1611.10320.tex"} |
\section{Conclusion}
We have shown that alternating training~\cite{Aoudia2018EndtoEndLO} of \gls{ML}-based communication systems can be performed with noisy feedback, up to a certain \gls{MSE}, without performance loss.
We then proposed a feedback system able to learn the transmission of real numbers without channel model or preexisting feedback link.
This feedback system can be used in lieu of the perfect feedback link to perform alternating training.
Finally, evaluations show that the feedback system leads to performance identical to that achieved with a perfect feedback link, provided the training \gls{SNR} is sufficiently high, a requirement that is realistic in practice. Moreover, our communication system outperforms both QPSK and a highly-optimized higher-order modulation scheme on an \gls{RBF} channel.
TITLE: Convolution of half-circle with inverse
QUESTION [3 upvotes]: I am trying to compute the function:
$$f(\lambda)\equiv\int_{-1}^{1}\frac{\sqrt{1-x^2}}{\lambda-x}dx.$$
It arises as the convolution of the semi-circle density with the inverse function. When $\lambda\in(-1,1)$ it can only be defined as a Cauchy Principal Value.
I have a hunch that I need to go into the complex plane to solve this, but am not sure how to proceed. Any pointers would be much appreciated.
REPLY [3 votes]: I will demonstrate how to compute the Cauchy principal value of this integral using complex contour integration and Cauchy's theorem for all real values of $\lambda$:
Consider the following contour integral:
$$\oint_C dz \frac{\sqrt{z^2-1}}{\lambda-z} $$
where $C$ is the following contour for $|\lambda| \lt 1$:
We now evaluate the contour integral. While the following looks tedious, it holds the key to determining why the final solution will have different behavior depending on whether $\lambda$ is greater than or less than $1$. For the time being, we will assume that $|\lambda| \lt 1$. Also, we will assume that the outer circle has radius $R$ and that the small circular arcs have radius $\epsilon$.
$$\int_{AB} dz \frac{\sqrt{z^2-1}}{\lambda-z} = \int_{-R}^{-1-\epsilon} dx \frac{\sqrt{x^2-1}}{\lambda-x}$$
$$\int_{BC} dz \frac{\sqrt{z^2-1}}{\lambda-z} = i \epsilon \int_{\pi}^0 d\phi \, e^{i \phi} \, \frac{\sqrt{(-1+\epsilon e^{i \phi})^2-1}}{\lambda+1-\epsilon e^{i \phi}} $$
$$\int_{CD} dz \frac{\sqrt{z^2-1}}{\lambda-z} = \int_{-1+\epsilon}^{\lambda-\epsilon} dx \frac{i \sqrt{1-x^2}}{\lambda-x}$$
$$\int_{DE} dz \frac{\sqrt{z^2-1}}{\lambda-z} = i \epsilon \int_{\pi}^0 d\phi \, e^{i \phi} \, \frac{i \sqrt{1-(\lambda+\epsilon e^{i \phi})^2}}{-\epsilon e^{i \phi}} $$
$$\int_{EF} dz \frac{\sqrt{z^2-1}}{\lambda-z} = \int_{\lambda+\epsilon}^{1-\epsilon} dx \frac{i \sqrt{1-x^2}}{\lambda-x}$$
$$\int_{FG} dz \frac{\sqrt{z^2-1}}{\lambda-z} = i \epsilon \int_{\pi}^{-\pi} d\phi \, e^{i \phi} \, \frac{\sqrt{(1+\epsilon e^{i \phi})^2-1}}{\lambda-1-\epsilon e^{i \phi}} $$
$$\int_{GH} dz \frac{\sqrt{z^2-1}}{\lambda-z} = \int_{1-\epsilon}^{\lambda+\epsilon} dx \frac{-i \sqrt{1-x^2}}{\lambda-x}$$
$$\int_{HI} dz \frac{\sqrt{z^2-1}}{\lambda-z} = i \epsilon \int_{2 \pi}^{\pi} d\phi \, e^{i \phi} \, \frac{-i \sqrt{1-(\lambda+\epsilon e^{i \phi})^2}}{-\epsilon e^{i \phi}} $$
$$\int_{IJ} dz \frac{\sqrt{z^2-1}}{\lambda-z} = \int_{\lambda-\epsilon}^{-1+\epsilon} dx \frac{-i \sqrt{1-x^2}}{\lambda-x}$$
$$\int_{JK} dz \frac{\sqrt{z^2-1}}{\lambda-z} = i \epsilon \int_{2 \pi}^{\pi} d\phi \, e^{i \phi} \, \frac{\sqrt{(-1+\epsilon e^{i \phi})^2-1}}{\lambda+1-\epsilon e^{i \phi}} $$
$$\int_{KL} dz \frac{\sqrt{z^2-1}}{\lambda-z} = \int_{-1-\epsilon}^{-R} dx \frac{\sqrt{x^2-1}}{\lambda-x}$$
$$\int_{LA} dz \frac{\sqrt{z^2-1}}{\lambda-z} = i R \int_{-\pi}^{\pi} d\theta \, e^{i \theta} \frac{\sqrt{R^2 e^{i 2 \theta}-1}}{\lambda - R e^{i \theta}} $$
Note that, on the branch above the real axis, $-1=e^{i \pi}$ and on the branch below the real axis, $-1=e^{-i \pi}$. Thus, the sign of $i$ in front of the square root when $|x| \lt 1$ is positive above the real axis and negative below the real axis.
Now let's examine what happens when we combine the pieces above to form the contour integral. When we combine the integrals over $AB$ and $KL$, the respective integration directions are reversed, but the integrands are the same (as $|x| \gt 1$ here). Thus, these two integrals cancel.
However, when $|x| \lt 1$, the opposing signs of the integrands results in the addition rather than the cancellation of the integrals. Thus, in the limit as $\epsilon \to 0$, we have
$$\left (\int_{CD} + \int_{EF} + \int_{GH} + \int_{IJ}\right) dz \frac{\sqrt{z^2-1}}{\lambda-z} = i 2 PV \int_{-1}^1 dx \frac{\sqrt{1-x^2}}{\lambda-x} $$
Note that we used the definition of the Cauchy principal value as the limit of the sum of the integrals over regions avoiding the pole at $x=\lambda$.
As $\epsilon \to 0$, the integrals over $BC$, $FG$, and $JK$ all vanish. Thus, we are left with the integrals over $DE$ and $HI$. In this case, the direction of the paths of integration are the same, but the integrands are of opposite sign. Thus, the sum of the integrals over $DE$ and $HI$ cancel.
After all that, we are left with, as the contour integral,
$$i 2 PV \int_{-1}^1 dx \frac{\sqrt{1-x^2}}{\lambda-x} + i R \int_{-\pi}^{\pi} d\theta \, e^{i \theta} \frac{\sqrt{R^2 e^{i 2 \theta}-1}}{\lambda - R e^{i \theta}}$$
Now we consider the contour integral as $R \to \infty$. In this case, we expand the integrand for large $R$:
$$\begin{align}i R e^{i \theta} \frac{\sqrt{R^2 e^{i 2 \theta}-1}}{\lambda - R e^{i \theta}} &= -i R e^{i \theta} \left [1 - \frac1{2 R^2 e^{i 2 \theta}} + \cdots \right ] \left [1+ \frac{\lambda}{R e^{i \theta}} + \frac{\lambda^2}{R^2 e^{i 2 \theta}} + \cdots \right ]\\ &= -i R e^{i \theta} - i \lambda - i \left (\lambda^2-\frac12 \right ) \frac1{R e^{i \theta}} + \cdots \end{align}$$
As we integrate these terms over a whole period $[-\pi,\pi]$, we find that all terms disappear except the $-i \lambda$ term. (This is what people refer to as the "residue at infinity.")
By Cauchy's theorem, we may set the contour integral to zero because there are no poles in the interior of the contour $C$. Thus,
$$i 2 PV \int_{-1}^1 dx \frac{\sqrt{1-x^2}}{\lambda-x} - i 2 \pi \lambda = 0$$
or, when $|\lambda| \lt 1$,
$$PV \int_{-1}^1 dx \frac{\sqrt{1-x^2}}{\lambda-x} = \pi \lambda$$
The enumeration of the other cases $\lambda \gt 1$ and $\lambda \lt -1$ should be easy to visualize now. For example, when $\lambda \gt 1$, we lose the bumps $DE$ and $HI$ (which contributed nothing to the contour integral previously), but now we have a pole within $C$ at $z=\lambda$. Thus, we may use the residue theorem (or simply extend the branch cut beyond $x=1$ and detour around the pole - same thing); we will find that
$$i 2 PV \int_{-1}^1 dx \frac{\sqrt{1-x^2}}{\lambda-x} - i 2 \pi \lambda = -i 2 \pi \sqrt{\lambda^2-1}$$
or, for $\lambda \gt 1$,
$$PV \int_{-1}^1 dx \frac{\sqrt{1-x^2}}{\lambda-x} = \pi \left ( \lambda - \sqrt{\lambda^2-1} \right )$$
For $\lambda \lt -1$, we may simply mirror the configuration for $\lambda \gt 1$, i.e., reverse direction and use the residue theorem, or introduce detours to the left of $x=-1$ in the figure. At this point, the reader can show that, for $\lambda \lt -1$,
$$i 2 PV \int_{-1}^1 dx \frac{\sqrt{1-x^2}}{\lambda-x} - i 2 \pi \lambda - i 2 \pi \sqrt{\lambda^2-1} = 0$$
Thus, to summarize,
$$PV \int_{-1}^1 dx \frac{\sqrt{1-x^2}}{\lambda-x} = \begin{cases} \pi \left ( \lambda + \sqrt{\lambda^2-1} \right ) & \lambda \lt -1 \\ \pi \lambda & -1 \lt \lambda \lt 1 \\ \pi \left ( \lambda - \sqrt{\lambda^2-1} \right ) & \lambda \gt 1 \end{cases} $$ | {"set_name": "stack_exchange", "score": 3, "question_id": 1546820} |
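For a quick sanity check of this closed form, here is a short numerical verification (a sketch in Python; it relies on `scipy.integrate.quad`, whose `weight='cauchy'` mode computes the principal value of an integrand divided by $x-c$, hence the sign flip relative to our $\lambda-x$ denominator):

    import numpy as np
    from scipy.integrate import quad

    def pv_integral(lam):
        # PV of sqrt(1-x^2)/(lam - x) over [-1, 1]
        f = lambda x: np.sqrt(1.0 - x * x)
        if abs(lam) < 1:
            # quad computes PV of f(x)/(x - wvar); our denominator is lam - x
            val, _ = quad(f, -1, 1, weight='cauchy', wvar=lam)
            return -val
        val, _ = quad(lambda x: f(x) / (lam - x), -1, 1)
        return val

    def closed_form(lam):
        if lam < -1:
            return np.pi * (lam + np.sqrt(lam * lam - 1))
        if lam > 1:
            return np.pi * (lam - np.sqrt(lam * lam - 1))
        return np.pi * lam

    for lam in (-1.5, -0.3, 0.7, 2.0):
        print(lam, pv_integral(lam), closed_form(lam))

The two columns agree to quadrature accuracy in all three regimes.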
\section{Prerequisites}
\label{sec:prerequisites}
We first prove that (\ref{eq:mse_trace})
is strictly convex over all choices of power allocation, demonstrating
the uniqueness of the optimal power allocation.
We then introduce cyclically-symmetric functions, and prove that,
if such functions are strictly convex on a convex domain, then the unique
vector that attains their minimum has equal values.
The section concludes with the introduction of the asymptotic equivalence of
Toeplitz and circulant matrices using material from \cite{gray_toeplitz} and
\cite{pearl73stationarydft}.
\subsection{Power Allocation Vector that Minimizes MSE is Unique}
\begin{lemma}
\label{lemma:convex}
If $\mathbf{A}$ is a real symmetric positive-definite $n\times n$ matrix,
then the function $f(\mathbf{x})=\trace[(\mathbf{A}+\diag(\mathbf{x}))^{-1}]$
is strictly convex within the polytope $\sum_{i=0}^{n-1}x_i=C$, $x_i>0$.
\end{lemma}
\begin{IEEEproof}
Since the domain of $f(\mathbf{x})$ is
convex \cite[Ch.~2.1.2 and 2.1.4]{boyd04convexopt},
$f(\mathbf{x})$ is strictly convex in $\mathbf{x}$ if and only if
$g(t)=f(\mathbf{x}+t\mathbf{v})$ is strictly convex in $t$ for any
$t\in\mathbb{R}$ and $\mathbf{v}\in\mathbb{R}^n$ such that
$\mathbf{x}+t\mathbf{v}$ is in the domain of $f(\mathbf{x})$
(i.e. $\mathbf{x}+t\mathbf{v}$ is a real vector with positive entries that
sum to $C$)
\cite[Ch.~3.1.1]{boyd04convexopt}.
This follows directly from the definition of convexity
and is known
as \emph{the method of restriction to a line}.
Define $\mathbf{B}\equiv\mathbf{A}+\diag(\mathbf{x})+t\diag(\mathbf{v})$
and note that it is symmetric positive-definite (as is $\mathbf{B}^{-1}$)
since $\mathbf{A}$ is
symmetric positive-definite and $\diag(\mathbf{x})+t\diag(\mathbf{v})$
is a diagonal matrix with positive entries on the diagonal.
Consider
$h(t)=\mathbf{u}^T\mathbf{B}^{-1}\mathbf{u}$,
where $\mathbf{u}$ is an arbitrary non-zero vector.
Its first two derivatives with respect to $t$ are
\cite[Ch.~D.2.1]{dattorro05convexopt}:
\begin{eqnarray}
h'(t)&=&-\mathbf{u}^T\mathbf{B}^{-1}\diag(\mathbf{v})\mathbf{B}^{-1}\mathbf{u}\\
\label{eq:d2h}h''(t)&=&2\mathbf{w}^T\mathbf{B}^{-1}\mathbf{w}
\end{eqnarray}
where $\mathbf{w}=\diag(\mathbf{v})\mathbf{B}^{-1}\mathbf{u}$.
We can substitute $\mathbf{w}$ into \eqref{eq:d2h} since $\mathbf{B}$ is
symmetric.
Also, since $\mathbf{B}^{-1}$ is positive-definite, $h''(t)> 0$, implying
that $h(t)$ is strictly convex in $t$.
Now
\begin{eqnarray}
g(t)&=&\trace[(\mathbf{A}+\diag(\mathbf{x})+t\diag(\mathbf{v}))^{-1}]\\
\label{eq:gtsum}&=&\sum_{i=0}^{n-1}\mathbf{e}_i^T(\mathbf{A}+\diag(\mathbf{x})+t\diag(\mathbf{v}))^{-1}\mathbf{e}_i
\end{eqnarray}
where $\mathbf{e}_i$ is a vector containing one in the $i^{\text{th}}$
location and zeros everywhere else.
Since each summand of (\ref{eq:gtsum}) can be written as $h(t)$ (with
$\mathbf{e}_i$ in place of $\mathbf{u}$) and since the sum preserves
convexity, $g(t)$ is strictly convex in $t$; strictness holds because
$\mathbf{v}\neq\mathbf{0}$, so that
$\mathbf{w}=\diag(\mathbf{v})\mathbf{B}^{-1}\mathbf{e}_i\neq\mathbf{0}$
for at least one $i$.
Therefore, $f(\mathbf{x})$ is strictly convex in $\mathbf{x}$.
\end{IEEEproof}
Since $\mathbf{R}_n$ is symmetric positive-definite,
so is its inverse.
Also $\mathbf{D}_n^{-1}=\frac{\diag(\mathbf{p}^{(n)})}{\sigma^2}$.
Thus, by Lemma \ref{lemma:convex}, $\mathcal{E}(\mathbf{p}^{(n)})$ is strictly
convex and $\mathbf{p}^{(n)}_{\text{opt}}$
that minimizes MSE in our problem is unique.
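As a quick numerical illustration of Lemma~\ref{lemma:convex} (our sanity check, not used in the sequel; \texttt{numpy} is assumed), one can draw two random power allocations in the polytope and verify the strict midpoint inequality:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, C = 6, 1.0
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)         # symmetric positive-definite A

def f(x):                           # trace[(A + diag(x))^{-1}]
    return np.trace(np.linalg.inv(A + np.diag(x)))

x = C * rng.dirichlet(np.ones(n))   # positive entries summing to C
y = C * rng.dirichlet(np.ones(n))
assert f(0.5 * (x + y)) < 0.5 * (f(x) + f(y))   # strict convexity
\end{verbatim}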
\subsection{Cyclically-symmetric Functions}
We next introduce a class of symmetric functions and
prove a useful property about them.
\begin{definition}[Cyclically-symmetric function] $f(x_0,x_1,\ldots,x_{n-1})$
is \emph{cyclically-symmetric} if
\begin{eqnarray*}
f(x_0,x_1,\ldots,x_{n-1})=f(x_1,\ldots,x_{n-1},x_0)
\end{eqnarray*}
\end{definition}
\begin{lemma}
\label{lemma:circ}
Suppose $f(x_0,x_1,\ldots,x_{n-1})$ is strictly convex and cyclically-symmetric
on a convex domain $\mathcal{S}$.
If vector
$\mathbf{x}^*=\argmin_{\mathbf{x}\in\mathcal{S}}f(x_0,x_1,\ldots,x_{n-1})$, then $x^*_0=x^*_1=\ldots=x^*_{n-1}$.
\end{lemma}
\begin{IEEEproof}
Since $f(x_0,x_1,\ldots,x_{n-1})$ is strictly convex,
$\mathbf{x}^*$ is unique.
Since $f(x_0,x_1,\ldots,x_{n-1})$ is cyclically-symmetric,
for each $i=1,\ldots,n-1$ the cyclic shift
$(x^*_i,\ldots,x^*_{n-1},x^*_0,\ldots,x^*_{i-1})$ of $\mathbf{x}^*$
also minimizes $f$, and by uniqueness it coincides with $\mathbf{x}^*$.
Thus, $x^*_0=x^*_1=\ldots=x^*_{n-1}$.
\end{IEEEproof}
\subsection{Asymptotically Equivalent Matrices}
Results on the asymptotic equivalence of matrix sequences in
\cite[Ch.~2]{gray_toeplitz} enable the discussion of
the Toeplitz and circulant matrices at the end of this section.
First, let $\mathbf{A}$ be a real-valued $n\times n$ matrix.
Then we define the matrix norms as follows:
\begin{definition}[Strong norm]
$\|\mathbf{A}\|=\max_{\mathbf{z}:\mathbf{z}^T\mathbf{z}=1}\left[\mathbf{z}^T\mathbf{A}^T\mathbf{A}\mathbf{z}\right]^{1/2}$.
\end{definition}
\begin{definition}[Weak norm]
$|\mathbf{A}|=\sqrt{\frac{1}{n}\trace\left[\mathbf{A}^H\mathbf{A}\right]}$.
\end{definition}
\noindent If $\mathbf{A}$ is symmetric positive-definite with eigenvalues
$\{\lambda_i\}_{i=0}^{n-1}$,
$|\mathbf{A}|=\sqrt{\frac{1}{n}\sum_{i=0}^{n-1}\lambda^2_i}$.
Also, $\|\mathbf{A}\|=\lambda_{\max}$ and
$\|\mathbf{A}^{-1}\|=1/\lambda_{\min}$,
where $\lambda_{\max}$ and $\lambda_{\min}$ are the maximum and minimum
eigenvalues of $\mathbf{A}$, respectively.
\begin{lemma}[Lemma 2.3 in \cite{gray_toeplitz}]\label{lemma:mtxprodnorm}For $n\times n$ matrices $\mathbf{A}$ and $\mathbf{B}$,
$|\mathbf{AB}|\leq\|\mathbf{A}\|\cdot|\mathbf{B}|$.
\end{lemma}
\noindent Now define the asymptotic equivalence of matrix sequences as
\cite[Ch.~2.3]{gray_toeplitz}:
\begin{definition}[Asymptotically Equivalent Sequences of Matrices] The
sequences of
$n\times n$ matrices $\{\mathbf{A}_n\}$ and $\{\mathbf{B}_n\}$ are said to be
\emph{asymptotically equivalent} if the following hold:
\begin{eqnarray}
\label{eq:boundnorm}&\|\mathbf{A}_n\|,\|\mathbf{B}_n\|\leq M<\infty,n=1,2,\ldots&\\
&\lim_{n\rightarrow\infty}|\mathbf{A}_n-\mathbf{B}_n|=0&
\end{eqnarray}
\end{definition}
\noindent We abbreviate the asymptotic equivalence of the sequences
$\{\mathbf{A}_n\}$ and $\{\mathbf{B}_n\}$ by $\mathbf{A}_n\sim \mathbf{B}_n$.
Properties of asymptotic equivalence are stated and proved in
\cite[Theorem 2.1]{gray_toeplitz}.
A property particularly useful in the proof of Theorem \ref{th:main} is
re-stated here as a lemma:
\begin{lemma}
\label{lemma:aeqinv}
If $\mathbf{A}_n\sim \mathbf{B}_n$ and
$\|\mathbf{A}_n^{-1}\|,\|\mathbf{B}_n^{-1}\|\leq K<\infty,n=1,2,\ldots$, then
$\mathbf{A}_n^{-1}\sim \mathbf{B}_n^{-1}$.
\end{lemma}
\begin{IEEEproof}
$|\mathbf{A}_n^{-1}-\mathbf{B}_n^{-1}|=|\mathbf{B}_n^{-1}\mathbf{B}_n\mathbf{A}_n^{-1}-\mathbf{B}_n^{-1}\mathbf{A}_n\mathbf{A}_n^{-1}|\leq\|\mathbf{B}_n^{-1}\|\cdot\|\mathbf{A}_n^{-1}\|\cdot|\mathbf{A}_n-\mathbf{B}_n|\xrightarrow[n\rightarrow\infty]{} 0$,
where the inequality is due to Lemma \ref{lemma:mtxprodnorm}.
\end{IEEEproof}
\noindent Another important consequence of asymptotic equivalence follows from
\cite[Corollary 2.1]{gray_toeplitz}:
\begin{lemma}
\label{lemma:aeqtrace}
If $\mathbf{A}_n\sim \mathbf{B}_n$, then
$\lim_{n\rightarrow\infty}\frac{1}{n}\trace[\mathbf{A}_n]=\lim_{n\rightarrow\infty}\frac{1}{n}\trace[\mathbf{B}_n]$
when either limit exists.
\end{lemma}
\begin{IEEEproof}[Proof sketch] By the Cauchy-Schwarz inequality,
$\left|\frac{\trace[\mathbf{A}_n-\mathbf{B}_n]}{n}\right|\leq|\mathbf{A}_n-\mathbf{B}_n|\xrightarrow[n\rightarrow\infty]{} 0$.
\end{IEEEproof}
\subsection{Sequences of Toeplitz and Circulant Matrices}
\label{sec:circ}
An $n\times n$ \emph{Toeplitz matrix} $\mathbf{T}_n$, illustrated in
\figurename~\subref*{fig:toeplitz},
is defined by a sequence $\{t_k^{(n)}\}$ where
$\left(\mathbf{T}_n\right)_{i,j}=t^{(n)}_{i-j}$.
The covariance matrix $\mathbf{R}_n$ in Section \ref{sec:problem} is Toeplitz
and symmetric.
An $n\times n$ \emph{circulant matrix} $\mathbf{C}_n$, illustrated in
\figurename~\subref*{fig:circ},
is defined by a sequence $\{c_k^{(n)}\}$ where
$\left(\mathbf{C}_n\right)_{i,j}=c^{(n)}_{(j-i)\bmod{n}}$.
Since the sequence $\{R_x(kT_s)\}_{k=0}^{\infty}$ that defines $\mathbf{R}_n$ is
square summable, we can define an asymptotically equivalent sequence of
circulant matrices $\mathbf{C}_n\sim \mathbf{R}_n$ using
\cite[Eq.~(7)]{pearl73stationarydft}:
\begin{eqnarray}
\label{eq:c_k}c^{(n)}_k=R_x(kT_s)+\frac{k}{n}\left(R_x((n-k)T_s)-R_x(kT_s)\right)
\end{eqnarray}
The resulting circulant matrix $\mathbf{C}_n$ is symmetric since, by
\eqref{eq:c_k}, $c^{(n)}_k=c^{(n)}_{n-k}$.
\begin{figure}[h]
\begin{center}
\subfloat[Toeplitz matrix]{\label{fig:toeplitz}\scalebox{0.75}{$\mathbf{T}_n=\left[\begin{array}{cccc}t_0^{(n)}&t_{-1}^{(n)}&\cdots&t_{-(n-1)}^{(n)}\\t_{1}^{(n)}&t_0^{(n)}&\cdots&t_{-(n-2)}^{(n)}\\\vdots&&\ddots&\vdots\\t_{n-1}^{(n)}&t_{n-2}^{(n)}&\cdots&t_0^{(n)}\end{array}\right]$}}
\hfil
\subfloat[Circulant matrix]{\label{fig:circ}\scalebox{0.75}{$\mathbf{C}_n=\left[\begin{array}{cccc}c_0^{(n)}&c_1^{(n)}&\cdots&c_{n-1}^{(n)}\\c_{n-1}^{(n)}&c_0^{(n)}&\cdots&c_{n-2}^{(n)}\\\vdots&&\ddots&\vdots\\c_1^{(n)}&c_{2}^{(n)}&\cdots&c_0^{(n)}\end{array}\right]$}}
\end{center}
\caption{Illustration of Toeplitz and circulant matrices.}
\end{figure}
By \cite[Eq.~(5)]{pearl73stationarydft},
$\mathbf{C}_n\triangleq\mathbf{F}_n^{-1}\mathbf{\Delta}_n\mathbf{F}_n$,
where
$\mathbf{\Delta}_n=\diag \left(\left\{\nu^{(n)}_i\right\}_{i=0}^{n-1}\right)$ contains
the diagonal entries
$\nu^{(n)}_i=\left(\mathbf{F}_n\mathbf{R}_n\mathbf{F}_n^{-1}\right)_{i,i}$
of the covariance matrix of the discrete Fourier
transform (DFT) $\mathbf{F}_n\mathbf{x}^{(n)}$ of $\mathbf{x}^{(n)}$ and
$(\mathbf{F}_n)_{j,k}=\frac{1}{\sqrt{n}}e^{2\pi i jk/n}$ is the DFT rotation
matrix.
Since $\mathbf{R}_n$ is positive-definite, by the properties of the similarity
transformation, $\mathbf{F}_n\mathbf{R}_n\mathbf{F}_n^{-1}$ is
positive-definite and has positive diagonal entries.
Thus, $\mathbf{\Delta}_n$ is positive-definite and so is $\mathbf{C}_n$. | {"config": "arxiv", "file": "1212.2316/prerequisites.tex"} |
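To make the construction concrete, the following sketch (an illustration with an exemplary autocovariance $R_x(kT_s)=0.9^k$; \texttt{numpy} and \texttt{scipy} are assumed) builds $\mathbf{C}_n$ from (\ref{eq:c_k}), confirms its symmetry and its diagonalization by the DFT, and evaluates the weak norm $|\mathbf{R}_n-\mathbf{C}_n|$, which becomes small as $n$ grows:
\begin{verbatim}
import numpy as np
from scipy.linalg import toeplitz, circulant

n = 256
k = np.arange(n)
R = 0.9 ** k                    # samples R_x(k T_s), k = 0,...,n-1
Rnk = np.empty(n)
Rnk[0] = R[0]                   # the k = 0 term is killed by the factor k/n
Rnk[1:] = R[n-1:0:-1]           # R_x((n-k) T_s) for k = 1,...,n-1
c = R + (k / n) * (Rnk - R)     # the c_k of the circulant approximation

Rn = toeplitz(R)                # Toeplitz covariance matrix R_n
Cn = circulant(c)               # circulant C_n; symmetric since c_k = c_{n-k}
assert np.allclose(Cn, Cn.T)
# eigenvalues of the symmetric circulant equal the DFT of its first row
assert np.allclose(np.sort(np.linalg.eigvalsh(Cn)),
                   np.sort(np.fft.fft(c).real))
print(np.sqrt(np.mean((Rn - Cn) ** 2)))   # weak norm |R_n - C_n|
\end{verbatim}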
\section{Branch and Bound Based Algorithm} \label{section:relsel}
In this section, we develop a Branch-and-Bound based Algorithm (BBA) for the optimal solution of Problem~(\ref{eq:relSel}). Branch-and-Bound (BB) is a tree-based search algorithm, and BB-based techniques are commonly used for the solution of non-convex MINLPs \cite{PietroBelotti2012}. They systematically explore the solution space by dividing it into smaller sub-spaces, i.e., branching. The entire solution space is represented by the root of the BB-tree, and each resulting sub-space after partitioning is represented by a BB-tree node. Before creating new branches, the current BB-tree node is checked against lower and upper bounds on the optimal solution and is discarded if a better solution than the incumbent solution cannot be obtained, i.e., pruning. We adapt BB for our problem by developing problem-specific lower and upper bound generation methods and integrating it with POWMU.
BBA starts with the MINLP formulation given in Problem~(\ref{eq:relSel}). The relaxation of the problem is obtained and solved. If the $b_i^j$s in the solution of the relaxed problem are fractional, BBA continues with branching. The branching source is the one carrying the largest fractional variable, i.e., $i' = \argmax_{i\in \{1,\ldots,N\}}\max_{j\in \{0,\ldots,K\}} b_i^j$. Then, branching is performed by assigning one of the relays $R_j, j=0,\ldots,K$ to the $i^{'th}$ source at each new branch. As a result, $K+1$ new nodes are created, where each new node corresponds to the $j^{th}$, $j=0,\ldots,K$, relay selected for the $i^{'th}$ source. This selection is forced by setting $b_{i'}^j$ to 1 if the $j^{th}$ relay is selected, and 0 otherwise. It is possible to continue branching on other variables; however, this is not necessary as, after the relays are selected, the problem can be solved by POWMU.
During the BB-search, the solution of the relaxed problem is used in the branching decision and the objective value of the relaxed problem is considered as a lower bound on the objective of the original problem. We derive the relaxed problem and the lower bound of Problem~(\ref{eq:relSel}) in Section~\ref{section:rp}. Moreover, an incumbent solution is stored. To update the incumbent solution, in each node to be branched, an upper bound is calculated. The upper bound can be any feasible solution in the sub-space. If the upper bound of a node is lower than the incumbent, then it is set as the incumbent. We present the upper bound generation for Problem~(\ref{eq:relSel}) in Section \ref{section:ub}.
A BB-tree node is pruned if further search of the corresponding sub-space is not necessary. We prune a node if a feasible solution cannot be found at that node; the lower bound of the node is greater than the incumbent solution, as the node cannot provide a better solution than the current one; or all $b_i^j$s of the solution are integers, as we call POWMU to obtain the minimum schedule length for the determined relay selection, and this is the optimum solution in the corresponding sub-space. The algorithm continues until all BB-nodes are pruned.
\subsection{Lower Bound Generation} \label{section:rp}
A convex relaxation of the sub-problem is solved to optimality to find a lower bound on the objective. The relaxation of Problem~(\ref{eq:relSel}) is obtained as follows.
\begin{itemize}
\item The integrality constraint Eq.~(\ref{eq:relSel:integrality}) of Problem~(\ref{eq:relSel}) is relaxed as $0 \leq b_i^j \leq 1$.
\item In Problem~(\ref{eq:relSel}), $P_{S_i}^{R_j} \tau_{S_i}^{R_j}$ and $P_{R_j}^{AP} \tau_{R_j}^{AP}$ are replaced by new variables $A_{S_i}^{R_j}$ and $A_{R_j}^{AP}$, respectively, so that constraints (\ref{eq:relSel:energy1})-(\ref{eq:relSel:demand2}) represent convex sets.
\item After the replacement, Eqs.~(\ref{eq:relSel:maxpow1}) and (\ref{eq:relSel:maxpow2}) include a product of two variables (e.g., $b_i^j \tau_{S_i}^{R_j}$ and $b_i^j \tau_{R_j}^{AP}$), which causes the non-convexity. Instead of these products, we define their convex/concave envelopes, lower and upper bounding constraints, which is a common relaxation technique proposed in \cite{PietroBelotti2012}.
\end{itemize}
Now, the relaxed problem can be solved by a convex optimization tool to find the lower bound, e.g., CVX \cite{cvx}.
\subsection{Upper Bound Generation}\label{section:ub}
For each node, any feasible solution to the problem in the corresponding sub-space can be an upper bound. The initial upper bound, also the initial incumbent solution, is obtained by assigning all sources directly to AP, which is the simplest feasible solution. In other BB-tree nodes, we obtain the upper bound as follows. The relaxed problem is already solved for lower bound generation and the resulting $b_i^j$s are known. For each source $S_i$, the relay with the maximum $b_i^j$ is selected, i.e., $R_ j =\argmax_ {j\in\{0,1,\ldots,K\}} b_i^j, \; \; i =1, \ldots, N$. Then, POWMU acquires a feasible solution resulting in the minimum schedule length for the determined relay selection.
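To make the interplay of the search, the bounds, and the pruning rules explicit, the following sketch outlines BBA in Python. It is an illustration rather than the implementation used in our evaluations: \texttt{solve\_relaxation(fixed)} is a hypothetical interface assumed to return a pair (lower bound, fractional $b_i^j$ values) for the node whose forced relay choices are collected in \texttt{fixed}, or \texttt{None} if the node is infeasible, and \texttt{powmu(assign)} is assumed to return the minimum schedule length computed by POWMU for a complete relay assignment (index $0$ standing for the direct link to the AP).
\begin{verbatim}
def round_by_argmax(b, fixed, N, K):
    # upper-bound rounding: keep forced choices and otherwise
    # pick, per source, the relay with the maximal b_i^j
    return [fixed.get(i, max(range(K + 1), key=lambda j: b[i][j]))
            for i in range(N)]

def bba(N, K, solve_relaxation, powmu, tol=1e-9):
    best = [0] * N                    # initial incumbent: all sources -> AP
    best_val = powmu(best)
    stack = [dict()]                  # a node = the set of forced b_i^j = 1
    while stack:
        fixed = stack.pop()
        node = solve_relaxation(fixed)
        if node is None:              # prune: infeasible sub-space
            continue
        lb, b = node
        if lb >= best_val:            # prune: cannot beat the incumbent
            continue
        assign = round_by_argmax(b, fixed, N, K)
        val = powmu(assign)           # upper bound at this node
        if val < best_val:
            best_val, best = val, assign
        frac = [i for i in range(N)
                if i not in fixed and max(b[i]) < 1 - tol]
        if not frac:                  # integral solution: sub-space done
            continue
        i_star = max(frac, key=lambda i: max(b[i]))  # branching source
        for j in range(K + 1):        # one child per candidate relay
            stack.append({**fixed, i_star: j})
    return best_val, best
\end{verbatim}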
\section{Branch-and-Bound Based Heuristics} \label{section:sBB2}
BBA searches for all possible relay selection options, $K^N$ nodes, in the worst case, which results in an exponential run-time. We now propose the BB-based heuristic algorithms for better time complexity.
\subsubsection{One Branch Heuristic (OBH)} OBH explores only one branch of the Branch-and-Bound tree. It starts by obtaining the relaxation of Problem~(\ref{eq:relSel}), as described in Section~\ref{section:rp}. The relaxed problem is solved to obtain initial fractional values of $b_i^j$s. Among the fractional $b_i^j$s, the maximum one is selected and it is forced to be 1 in the next iteration. OBH repeatedly solves the relaxed problem containing $b_i^j$ values set to 1 in the previous iterations, finds the maximum of the fractional $b_i^j$ in the optimal solution of the resulting problem and forces it to be 1. This continues until relays have been selected for all the source nodes. Finally, POWMU provides the solution for the resulting relay selection.
In OBH, $N-1$ relaxed problems are solved and POWMU is called only once. The complexity of solving the relaxed problem depends on the solver and cannot be stated in closed form; it is highly dependent on the number of variables, so it rapidly increases with an increasing number of sources or relays. Let us denote this complexity by $\mathcal{O}(C)$. The complexity of POWMU is given in Section~\ref{section:optAlg}. Thus, the overall complexity is $\mathcal{O}((N+1)C + N\log_2(\frac{ub-lb}{\epsilon}))$.
\subsubsection{Relaxed Problem Based Heuristic (RPH)}: RPH considers only the relaxed problem and does not perform branching. It obtains and solves the relaxation of Problem~(\ref{eq:relSel}) as in Section~\ref{section:rp}. This solution gives fractional $b_i^j$ values. Then, the relay $R_j$ with $\max_{j\in \{0,\ldots,K\}} b_i^j$ is assigned to each source $S_i$, for $i =1, \ldots, N$. After relay selection, the total EH and IT times is obtained by POWMU. Note that this is the same approach that we use to determine upper bounds of each node in BBA.
RPH solves only one relaxed problem and calls POWMU once. The overall complexity is $\mathcal{O}(C + N\log_2(\frac{ub-lb}{\epsilon}))$.
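In code, RPH reduces to a few lines under the same hypothetical interface as the BBA sketch of Section~\ref{section:relsel}:
\begin{verbatim}
def rph(N, K, solve_relaxation, powmu):
    lb, b = solve_relaxation(dict())    # one relaxed problem at the root
    assign = [max(range(K + 1), key=lambda j: b[i][j])
              for i in range(N)]        # per-source argmax rounding
    return powmu(assign), assign        # one POWMU call
\end{verbatim}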
\begin{document}
\title {Rational taxation in an open access fishery model}
\author{Dmitry B. Rokhlin}
\author{Anatoly Usov}
\address{Institute of Mathematics, Mechanics and Computer Sciences,
Southern Federal University,
Mil'chakova str., 8a, 344090, Rostov-on-Don, Russia}
\email[Dmitry B. Rokhlin]{rokhlin@math.rsu.ru}
\email[Anatoly Usov]{usov@math.rsu.ru}
\thanks{The research is supported by Southern Federal University, project 213.01-07-2014/07.}
\begin{abstract}
We consider a model of fishery management, where $n$ agents exploit a single population with a strictly concave, continuously differentiable growth function of Verhulst type. If the agents' actions are coordinated and directed towards the maximization of
the discounted cooperative revenue, then the biomass stabilizes at the level defined by the well known ``golden rule''. We show that for independent myopic harvesting agents such optimal (or $\varepsilon$-optimal) cooperative behavior can be stimulated by a proportional tax, depending on the resource stock and equal to the marginal value function of the cooperative problem. To implement this taxation scheme we prove that the mentioned value function is strictly concave and continuously differentiable, although the instantaneous individual revenues may be neither concave nor differentiable.
\end{abstract}
\subjclass[2010]{91B76, 49J15, 91B64}
\keywords{Optimal harvesting, marginal value function, stimulating prices, myopic agents, optimal control}
\maketitle
\section{Introduction}
\label{sec:1}
An unregulated open access to marine resources, where many individual users are involved in the fishery, may easily lead to the over-exploitation or even extinction of fish populations. Moreover, it results in zero rent. These negative consequences of the unregulated open access (the ``tragedy of the commons'': \cite{Har68}) were widely discussed in the literature: see \cite{Gor54,Cla79,Cla06,Arn09}. Perhaps the most evident reason for the occurrence of these phenomena is the myopic behavior of competing harvesting agents, who are interested in the maximization of instantaneous profit flows, and not in the conservation of the population in the long run. In the present paper we consider the problem of rational regulation of an open access fishery, using taxes as the only economic instrument. Other known instruments include fishing quotas of different nature, total allowable catch, limited entry, sole ownership, community rights, various economic restrictions, etc.: see, e.g., \cite{Cla06,Arn09}.
Assume for a moment that $n$ agents coordinate their efforts to maximize the aggregated long-run discounted profit. The related aggregated agent, which can be considered as a sole owner of marine fishery resources, conserves the resource under the optimal strategy, unless the discounting rate is very large. How can such acceptable cooperative behavior be realized in practice?
We consider the following scheme. Suppose that some regulator (e.g., the coastal states), being aware of the revenue function and maximal productivity of each agent, declares the amount of proportional tax on catch. Roughly speaking, it turns out that if this tax is equal to the marginal indirect utility (marginal value function) of the cooperative optimization problem, then the myopic profit maximizing agents will follow an optimal cooperative strategy, maximizing the aggregated long-run discounted profit. The idea of using such taxes in harvesting management was often expressed in the bioeconomic literature: see \cite{Cla80}, \cite{McKel89}, \cite[Chapter 10]{HanShoWhi97}, \cite[Chapter 7]{Il09}. Our goal is to study this idea more closely from the mathematical point of view.
The first theoretical question we encounter when trying to implement the mentioned taxation scheme concerns the differentiability of the value function $v$ of the cooperative problem. Assuming that the population growth function is strictly concave and continuously differentiable, in Sections \ref{sec:2} and \ref{sec:3} we prove that $v$ inherits these properties, although the instantaneous revenue functions may be non-concave.
The differentiability of $v$ is proved by tools from optimal control and convex analysis. Our approach relies on the characterization of $v$ as the unique solution of the related Hamilton-Jacobi-Bellman equation.
We use neither the general results of \cite{RinZapSan12} nor the related techniques; at the same time, our results are not covered by \cite{RinZapSan12}. Along the way we construct optimal strategies and prove that optimal trajectories are attracted to the biomass level $\widehat x$, defined by the well known ``golden rule''. This level depends on the discounting rate, which is at the regulator's disposal.
If the agent revenue functions are non-concave, then an optimal solution of the infinite horizon cooperative problem may exist only in the class of relaxed (or randomized) harvesting strategies. Such strategies can hardly be realized in practice, and certainly cannot be stimulated by taxes. Nevertheless, in Section \ref{sec:4} we show that piecewise constant strategies (known as ``pulse fishing'') of myopic agents, stimulated by the proportional tax $v'\alpha$ on the fishing intensity $\alpha$, are $\varepsilon$-optimal for the cooperative problem. Moreover, the related trajectory is retained in any desired neighbourhood of $\widehat x$ for large values of time. Finally, we introduce the notion of the critical tax $v'(\widehat x)$ and prove that it can only increase when the agent community widens.
\section{Cooperative harvesting problem: the case of concave revenues}
\label{sec:2}
Let a population biomass $X$ satisfy the differential equation
\begin{equation} \label{2.1}
X_t=x+\int_0^t b(X_s)\,ds-\sum_{i=1}^n \int_0^t \alpha_s^i\,ds,
\end{equation}
where $b$ is the growth rate of the population, and $\alpha^i$ is the harvesting rate of $i$-th agent. We assume that $b$ is a \emph{differentiable strictly concave} function defined on an open neighbourhood of $[0,1]$, and
$$b(x)>0,\quad x\in (0,1),\quad b(0)=b(1)=0.$$
The widely used Verhulst growth function $b(x)=x(1-x)$ is a typical example. Agent harvesting strategies $\alpha^i$ are (Borel) measurable functions with values in the intervals $[0,\overline \alpha^i]$, $\overline\alpha^i>0$. A harvesting strategy $\alpha=(\alpha^1,\dots,\alpha^n)$ is called \emph{admissible} if the solution $X^{x,\alpha}$ of (\ref{2.1}) stays in $[0,1]$ forever: $X_t^{x,\alpha}\in [0,1]$, $t\ge 0$. Note that for given $\alpha$ the solution $X^{x,\alpha}$ is unique, since $b$, being concave, is Lipschitz continuous. The set of admissible strategies, corresponding to an initial condition $x$, is denoted by $\mathscr A_n(x)$.
Consider the cooperative objective functional
$$ J_n(x,\alpha)=\sum_{i=1}^n\int_0^\infty e^{-\beta t} f_i(\alpha_t^i)\,dt,\quad \beta>0$$
of agent community. \emph{We always assume} that the instantaneous revenue function $f_i:[0,\overline \alpha^i]\mapsto\mathbb R_+$ of $i$-th agent is at least \emph{continuous}, and $f_i(0)=0$. Let
\begin{equation} \label{2.2}
v(x)=\sup_{\alpha\in\mathscr A_n(x)} J_n(x,\alpha),\quad x\in [0,1]
\end{equation}
be the value function of the cooperative optimization problem.
When studying the properties of the value function it is convenient to reduce the dimension of the control vector to $1$. Recall that the function
$$ (g_1\oplus\dots\oplus g_n)(x)=\inf\{g_1(x_1)+\dots+g_n(x_n):x_1+\dots+x_n=x\}$$
is called the \emph{infimal convolution} of $g_1,\dots,g_n$. Let us extend the functions $f_i$ to $\mathbb R$ by the values $f_i(u)=-\infty$, $u\not\in [0,\overline\alpha^i]$ and put
\begin{align} \label{2.3}
F(q)& =\sup\{f_1(\alpha_1)+\dots+f_n(\alpha_n):\alpha_1+\dots+\alpha_n=q\}\nonumber\\
& =-((-f_1)\oplus\dots\oplus(-f_n))(q).
\end{align}
The function $F$ is finite on $[0,\overline q]$, $\overline q=\sum_{i=1}^n\overline\alpha^i$, and takes the value $-\infty$ otherwise. From the properties of an infimal convolution it follows that if $f_i$ are continuous (resp., concave), then $F$ is also continuous (resp., concave): see, e.g., \cite{Str96} (Corollary 2.1 and Theorem 3.1).
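For instance, if $n=2$ and the revenues are linear, $f_i(u)=c_iu$ on $[0,\overline\alpha^i]$ with $c_1\ge c_2>0$, then the supremum in (\ref{2.3}) is attained by allocating the total intensity to the more profitable agent first:
$$ F(q)=\begin{cases} c_1q, & q\in[0,\overline\alpha^1],\\ c_1\overline\alpha^1+c_2(q-\overline\alpha^1), & q\in(\overline\alpha^1,\overline\alpha^1+\overline\alpha^2],\end{cases}$$
which is indeed continuous and concave.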
Let $q:\mathbb R_+\mapsto [0,\overline q]$ be a measurable function. Consider the equation
\begin{equation} \label{2.4}
X_t^{x,q}=x+\int_0^t b(X_s^{x,q})\,ds-\int_0^t q_s\,ds
\end{equation}
instead of (\ref{2.1}). If $X_t^{x,q}\ge 0$, then the strategy $q$ is called admissible. The set of such strategies is denoted by $\mathscr A(x)$. Using an appropriate measurable selection theorem (see \cite[Theorem 5.3.1]{Sri98}), we conclude that for any $q\in\mathscr A(x)$ there exists $\alpha\in\mathscr A_n(x)$ such that $F(q_t)=\sum_{i=1}^n f_i(\alpha^i_t)$. It follows that the value function (\ref{2.2}) admits the representation
$$ v(x)=\sup_{q\in\mathscr A(x)} J(x,q),\quad J(x,q)=\int_0^\infty e^{-\beta t} F(q_t)\,dt.$$
Clearly, for any measurable control $q:\mathbb R_+\mapsto [0,\overline q]$ the trajectory $X^{x,q}$ cannot leave the interval $[0,1]$ through the right boundary. Denote by
$$ \tau^{x,q}=\inf\{t\ge 0:X_t^{x,q}=0\}$$
the time of population extinction. As usual, we put $\tau^{x,q}=+\infty$ if $X_t^{x,q}>0$ for all $t\ge 0$. Note that $q_t=0$, $t\ge\tau^{x,q}$ for any admissible control $q$.
First, we prove directly that $v$ inherits the concavity property of $f_i$ (see Lemma \ref{lem:2} below).
\begin{lemma} \label{lem:1}
Let $Y$ be a continuous solution of the inequality
$$Y_t\le x+\int_0^t b(Y_s)\,ds-\int_0^t q_s\,ds.$$
Then $Y_t\le X^{x,q}_t$, $t\le\tau:=\inf\{s\ge 0:Y_s=0\}$.
\end{lemma}
\begin{proof}
We follow \cite{BirRot89} (Chapter 1, Theorem 7).
Assume that $Y_{t_1}>X^{x,q}_{t_1}$, $t_1\le\tau$. Let
$t_0=\max\{t\in [0,t_1]:Y_{t}\le X^{x,q}_{t}\}$. We have
\begin{equation} \label{2.5}
Y_{t_0}=X^{x,q}_{t_0},\quad Y_{t}>X^{x,q}_{t},\quad t\in (t_0,t_1].
\end{equation}
The function $Z=Y-X^{x,q}$ satisfies the inequality
$$ 0\le Z_t\le\int_{t_0}^t (b(Y_s)-b(X^{x,q}_s))\,ds\le K \int_{t_0}^t Z_s\,ds,\quad t\in [t_0,t_1],$$
where $K$ is the Lipschitz constant of $b$. By the Gronwall inequality (see, e.g., \cite[Theorem 1.2.1]{Pac98}) we get a contradiction with (\ref{2.5}): $Z_t=0$, $t\in[t_0,t_1]$.
\end{proof}
\begin{lemma} \label{lem:2}
The function $v$ is non-decreasing. If $f_i$ are concave, then $v$ is concave.
\end{lemma}
\begin{proof}
Let $q\in\mathscr A(x)$ and $y>x$. Then
$$ X^{x,q}_t\le y+\int_0^t b(X^{x,q}_s)\,ds-\int_0^t q_s\,ds.$$
By Lemma \ref{lem:1} we have $X^{x,q}_t\le X^{y,q}_t$ for $t\le\tau^{x,q}$, and hence for all $t\ge 0$. It follows that $\mathscr A(x)\subset\mathscr A(y)$ and $v(x)\le v(y)$.
Let $0\le x^1<x^2\le 1$, $x=\gamma_1 x^1+\gamma_2 x^2$, $\gamma_1,\gamma_2>0$, $\gamma_1+\gamma_2=1$. For $q^i\in\mathscr A(x^i)$ by the concavity of $b$ we have
$$ \gamma_1 X_t^{x^1,q^1}+\gamma_2 X_t^{x^2,q^2}\le x + \int_0^t b(\gamma_1 X_s^{x^1,q^1}+\gamma_2 X_s^{x^2,q^2})\,ds-
\int_0^t(\gamma_1 q_s^1+\gamma_2 q_s^2)\,ds.$$
Put $q=\gamma_1 q^1+\gamma_2 q^2$.
Applying Lemma \ref{lem:1} to $Y=\gamma_1 X^{x^1,q^1}+\gamma_2 X^{x^2,q^2}$ and $X^{x,q}$ we get the inequality $Y\le X^{x,q}$. It follows that $q\in\mathscr A(x)$. By the concavity of $F$ we obtain:
$$ J(x,q)\ge\int_0^\infty e^{-\beta t}\left(\gamma_1 F(q_t^1)+\gamma_2 F(q_t^2)\right)\,dt=\gamma_1 J(x^1,q^1)+\gamma_2 J(x^2,q^2).$$
It follows that $v$ is concave: $v(x)\ge\gamma_1 v(x^1)+\gamma_2 v(x^2)$.
\end{proof}
Let us introduce the Hamiltonian
\begin{align}
H(x,z) &=b(x) z+\widehat F(z),\nonumber\\
\widehat F(z)&=\sup_{q\in[0,\overline q]}(F(q)-q z)=
\max_{q\in[0,\overline\alpha^1+\dots+\overline\alpha^n]}\max\left\{\sum_{i=1}^n f_i(\alpha_i)-zq:\sum_{j=1}^n\alpha_j=q\right\}\nonumber\\
&=\sum_{i=1}^n\max_{\alpha_i\in[0,\overline\alpha^i]} (f_i(\alpha_i)-z\alpha_i).\label{2.6}
\end{align}
Recall that a continuous function $w:[0,1]\mapsto\mathbb R$ is called a \emph{viscosity subsolution} (resp., a \emph{viscosity supersolution}) of the Hamilton-Jacobi-Bellman (HJB) equation
\begin{equation} \label{2.7}
\beta w(x)-H(x,w'(x))=0
\end{equation}
on a set $K\subset [0,1]$, if for any $x\in K$ and any test function $\varphi\in C^1(\mathbb R)$ such that $x$ is a local maximum (resp., minimum) point of $w-\varphi$, relative to $K$, the inequality
$$ \beta w(x)-H(x,\varphi'(x))\le 0\quad (\textrm{resp.},\ \ge 0)$$
holds true. A function $w\in C([0,1])$ is called a \emph{constrained viscosity solution} (see \cite{Son86}) of (\ref{2.7}) if $w$ is a viscosity subsolution on $[0,1]$ and a viscosity supersolution on $(0,1)$.
By Lemma \ref{lem:2} the value function is continuous. Hence, by Theorem 2.1 of \cite{Son86}, we conclude that $v$ is the unique constrained viscosity solution of (\ref{2.7}). However, in our case it is possible to give a simpler characterization of $v$.
\begin{lemma} \label{lem:3}
Assume that $f_i$ are concave. Then $v$ is the unique continuous function on $[0,1]$, with $v(0)=0$, satisfying the HJB equation (\ref{2.7}) on $(0,1)$ in the viscosity sense.
\end{lemma}
\begin{proof}
Since the equality $v(0)=0$ follows from the definition of $v$, we need only to prove that a continuous function $w$ with $w(0)=0$, satisfying the equation (\ref{2.7}) on $(0,1)$ in the viscosity sense, is uniquely defined. To do this we simply show that $w$ is a viscosity subsolution of (\ref{2.7}) on $[0,1]$ and refer to the cited result of \cite{Son86}.
The inequality
$$ 0=\beta w(0)\le H(0,\varphi'(0))=\widehat F(\varphi'(0))$$
is evident (for any $\varphi\in C^1(\mathbb R)$). Furthermore, in the terminology of \cite[Definitions 2 and 4]{CraNew85}, the point $x=1$ is \emph{irrelevant} and \emph{regular} for the left-hand side of the HJB equation. These properties follow from the fact that $z\mapsto\widehat F(z)$ is non-increasing and $b(1)=0$. By the result of \cite{CraNew85} (Theorem 2), $w$ automatically satisfies the equation (\ref{2.7}) in the viscosity sense on $(0,1]$.
\end{proof}
The subsequent study of the value function strongly relies on its characterization given in Lemma \ref{lem:3}.
Let
\begin{align*}
\partial w(x) &=\{\gamma\in\mathbb R: w(y)-w(x)\ge\gamma(y-x)\ \text{for all } y\},\\
\partial^+ w(x) &=\{\gamma\in\mathbb R: w(y)-w(x)\le\gamma(y-x)\ \text{for all } y\}
\end{align*}
be the sub- and superdifferential of a function $w$. Since $H(x,p)$ is convex in $p$ and satisfies the inequality
$$ |H(x,p)-H(y,p)|=|(b(x)-b(y))p|\le K|p| |x-y|,$$
by \cite[Chapter II, Theorem 5.6]{BarCap97} we infer that
\begin{equation} \label{2.8}
\beta v(x)-H(x,\gamma)=0,\quad \gamma\in\partial^+ v(x),\quad x\in (0,1).
\end{equation}
As a concave function, $v$ is differentiable on a set $G\subset (0,1)$ with a countable complement $(0,1)\backslash G$. Moreover, $v'$ is continuous and non-increasing on $G$ (see \cite[Theorem 25.2]{Roc70}).
Thus,
\begin{equation} \label{2.9}
\beta v(x)-H(x,v'(x))=0,\quad x\in G.
\end{equation}
Denote by $\delta_*^i$ the least maximum point of $f_i$:
$$ \delta_*^i=\min\left(\arg\max_{u\in [0,\overline \alpha^i]} f_i(u)\right).$$
Let us call a strategy $\alpha$ \emph{static} if it does not depend on $t$.
\begin{assumption} \label{as:1}
The static strategy $\delta_*=(\delta^1_*,\dots,\delta^n_*)$ is not admissible for any $x\in [0,1]$. Equivalently, one can assume that $\tau^{x,\delta_*}<\infty$, or $\max_{x\in [0,1]}b(x)<\sum_{i=1}^n \delta_*^i$.
\end{assumption}
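For instance, for the Verhulst growth function $b(x)=x(1-x)$ we have $\max_{x\in[0,1]}b(x)=1/4$, so Assumption \ref{as:1} holds if and only if $\sum_{i=1}^n\delta_*^i>1/4$.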
In what follows \emph{we suppose that Assumption \ref{as:1} is satisfied} without further stipulation.
Denote by
$$ v'_+(x)=\lim_{y\downarrow x}\frac{v(y)-v(x)}{y-x},\quad v'_-(x)=\lim_{y\uparrow x}\frac{v(y)-v(x)}{y-x}$$
the right and left derivatives of $v$. It is well known that $\partial^+v(x)=[v'_+(x),v'_-(x)]$, $x\in (0,1)$ and the set-valued mapping $x\mapsto\partial^+ v(x)$ is non-increasing:
\begin{equation} \label{2.10}
\partial^+ v(x)\ge \partial^+ v(y),\quad x<y.
\end{equation}
For $A,B\subset\mathbb R$ we write $A\le B$ if $\xi\le\eta$ for all $\xi\in A$, $\eta\in B$.
\begin{lemma} \label{lem:4}
Assume that $f_i$ are concave. Then the function $v'$ is strictly decreasing on $G$, and $v$ is strictly concave and strictly increasing.
\end{lemma}
\begin{proof}
To prove that $v$ is strictly concave it is enough to show that $x\mapsto \partial^+ v(x)$ is strictly decreasing:
$$ \partial^+ v(x)>\partial^+ v(y),\quad x<y$$
(see \cite[Chapter D, Proposition 6.1.3]{HirUrrLem01}). Assume that $\partial^+ v(x)\cap\partial^+ v(y)\neq\emptyset$, $x<y$. Then the interval $(x,y)$ contains some points $x_1<y_1$, $x_1,y_1\in G$ such that $v'(x_1)=v'(y_1)$. From (\ref{2.10}) it follows that $v'$ exists on $(x_1,y_1)$ and is constant there. Differentiating the HJB equation (\ref{2.9}), we get
$$ \beta v'(x)=b'(x) v'(x),\quad x\in (x_1,y_1).$$
Since $b$ is strictly concave, the equality $b'(x)=\beta$, $x\in (x_1,y_1)$ is impossible. Thus, $v'(x)=0$, $x\in (x_1,y_1)$ and
$$\beta v(x)=\widehat F(0)=
\sum_{i=1}^n f_i(\delta_*^i),\quad x\in (x_1,y_1).
An optimal solution $\alpha^*\in\mathscr A_n(x)$ of the problem (\ref{2.2}) exists (see, e.g., \cite[Theorem 1]{DmiKuz05}).
If $f_i(\alpha_t^{i,*})<f_i(\delta_*^i)=\max_{u\in [0,\overline \alpha^i]}f_i(u)$ on a set of positive measure for at least one index $i$, then
$$ v(x)=J_n(x,\alpha^*)<\sum_{i=1}^n\int_0^\infty e^{-\beta t} f_i(\delta^i_*)\,dt=\frac{1}{\beta}\sum_{i=1}^n f_i(\delta^i_*).$$
If $f_i(\alpha_t^{i,*})=f_i(\delta^i_*)$ a.e., $i=1,\dots,n$, then $\alpha^{i,*}_t\ge \delta_*^i$ a.e. by the definition of $\delta_*$. But this is impossible, since the strategy $\delta_*$ is not admissible for $x$, and a fortiori neither is $\alpha^*$ (see Lemma \ref{lem:1}).
The obtained contradiction implies that $\partial^+ v$ is strictly decreasing. Hence, $v$ is strictly concave. In view of Lemma \ref{lem:2} this property implies that $v$ is strictly increasing.
\end{proof}
Denote by $g^*(x)=\sup_{y\in\mathbb R}(xy-g(y))$ the Young-Fenchel transform of a function $g:\mathbb R\mapsto (-\infty,\infty]$. Recall (see \cite[Proposition 11.3]{RockWets09}) that for a continuous convex function $g:[a,b]\mapsto\mathbb R$ we have
\begin{equation} \label{2.11}
\partial g^*(x)=\arg\max_{y\in [a,b]}(xy-g(y)).
\end{equation}
The next result establishes a connection between the differentiability of the value function and the optimality of static strategies.
\begin{lemma} \label{lem:5}
Let $f_i$ be concave. If the value function $v$ is not differentiable at $x_0\in (0,1)$, then the static strategy $q_t=b(x_0)\in\mathscr A(x_0)$ is optimal, and $x_0$ is uniquely defined by the ``golden rule'': $b'(x_0)=\beta$.
\end{lemma}
\begin{proof}
Assume that $v'_-(x_0)>v'_+(x_0)$, $x_0\in(0,1)$. By (\ref{2.8}) we have
\begin{equation} \label{2.12}
\beta v(x_0)=b(x_0)\gamma+\widehat F(\gamma),\quad \gamma\in (v'_+(x_0),v'_-(x_0)).
\end{equation}
Since
\begin{equation} \label{2.13}
\widehat F(z)=\sup_q\{-zq-(-F)(q)\}=(-F)^*(-z),
\end{equation}
by (\ref{2.11}), (\ref{2.12}) we obtain
\begin{equation} \label{2.14}
\{\widehat F'(\gamma)\}=\{-b(x_0)\}=-\arg\max_{q\in [0,\overline q]} (F(q)-\gamma q),\quad \gamma\in (v'_+(x_0),v'_-(x_0)).
\end{equation}
Hence, $\widehat F(\gamma)=F(b(x_0))-b(x_0)\gamma$, $\gamma\in (v'_+(x_0),v'_-(x_0))$ and $b(x_0)\in\mathscr A(x_0)$ is optimal:
$$ \beta v(x_0)=F(b(x_0))=\beta J(x_0,b(x_0)).$$
Now assume that the static strategy $b(x_0)$ is optimal. Let us apply the Pontryagin maximum principle to the stationary solution $(X_t,q_t)=(x_0,b(x_0))$ of (\ref{2.4}). Consider the adjoint equation
\begin{equation} \label{2.15}
\dot\psi(t)=-b'(x_0)\psi(t)
\end{equation}
and the basic relation of the Pontryagin maximum principle:
\begin{equation} \label{2.16}
\psi^0 e^{-\beta t} F(b(x_0))=\max_{q\in [0,\overline q]}\left( \psi^0 e^{-\beta t} F(q)+(b(x_0)-q)\psi(t)\right).
\end{equation}
We have $\psi(t)=Ae^{-b'(x_0)t}$ for some $A\in\mathbb R$. If $(x_0,b(x_0))$ is an optimal solution, then there exist $\psi^0\in\mathbb R_+$, $A\in\mathbb R$ such that $(\psi^0,A)\neq 0$ and the relations (\ref{2.15}), (\ref{2.16}) hold true: see \cite[Theorem 1]{AseKry08}.
Let us rewrite (\ref{2.15}), (\ref{2.16}) as follows
$$ \psi^0 F(b(x_0))=\max_{q\in [0,\overline q]}\left(\psi^0 F(q)+A (b(x_0)-q) e^{(\beta-b'(x_0))t}\right).$$
Assume that $b'(x_0)\neq\beta$. If $\psi^0=0$, then we get a contradiction since $b(x_0)-q$ changes sign on $[0,\overline q]$. Thus, we may assume that $\psi^0=1$:
\begin{align} \label{2.17}
F(b(x_0)) &=A b(x_0) e^{(\beta-b'(x_0))t}+\max_{q\in [0,\overline q]}\left(F(q) -A e^{(\beta-b'(x_0))t} q\right)\nonumber\\
&=H(x_0,z_t),\quad z_t=A e^{(\beta-b'(x_0))t}.
\end{align}
But the equality (\ref{2.17}) is impossible, since either $|z_t|\to\infty$ and $H(x_0,z_t)\to+\infty$, $t\to\infty$, or $|z_t|\to 0$ and
$$ H(x_0,z_t)\to H(x_0,0)=\widehat F(0)=\sum_{i=1}^n f_i(\delta^i_*),\quad t\to\infty.$$
In the latter case by (\ref{2.3}) and (\ref{2.17}) we have
$$ F(b(x_0))=\sum_{i=1}^n f_i(\nu_i)=\sum_{i=1}^n f_i(\delta^i_*)$$
for some $\nu_i\in [0,\overline\alpha^i]$ with $\nu_1+\dots+\nu_n=b(x_0)$. From the definition of $\delta^i_*$ it then follows that $\nu_i\ge\delta^i_*$, $i=1,\dots,n$. This is a contradiction: by Assumption \ref{as:1} we have $\sum_{i=1}^n \delta^i_*>\max_{x\in[0,1]} b(x)$, while $\sum_{i=1}^n \nu_i=b(x_0)\le \max_{x\in[0,1]} b(x)$.
\end{proof}
From the properties of $b$ it follows that either $b'(x)<\beta$, $x\in (0,1)$, or the equation
\begin{equation} \label{2.18}
b'(x)=\beta,\quad x\in (0,1)
\end{equation}
has a unique solution $\widehat x\in (0,1)$.
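For instance, for the Verhulst growth function $b(x)=x(1-x)$ we have $b'(x)=1-2x$, so (\ref{2.18}) has the unique solution $\widehat x=(1-\beta)/2$ whenever $0<\beta<1$, while for $\beta\ge 1$ we have $b'(x)<\beta$ on $(0,1)$.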
\begin{theorem} \label{th:1}
Suppose that $f_i$ are concave. Then the value function $v$ is strictly increasing, strictly concave and continuously differentiable on $(0,1)$, except maybe at the point $\widehat x$. If $F$ is differentiable at $b(\widehat x)$, then $v$ is continuously differentiable.
\end{theorem}
\begin{proof}
From Lemma \ref{lem:5} it follows that $\widehat x$ is the only point where $v$ can fail to be differentiable. If $v$ is not differentiable at $\widehat x$, then the interval $(v'_+(\widehat x),v'_-(\widehat x))$ is non-empty. But if $F$ is differentiable at $b(\widehat x)$, then (\ref{2.14}) gives a contradiction: $F'(b(\widehat x))=\gamma$ for all $\gamma\in (v'_+(\widehat x),v'_-(\widehat x)).$
\end{proof}
Note that the assumption concerning the existence of $F'(b(\widehat x))$ is not restrictive. Firstly, $F'$ can have only countably many discontinuity points. Thus, $\widehat x$ is not one of these points for all $\beta\in D$, where $(0,\infty)\backslash D$ is countable. Secondly, the formula
\begin{equation} \label{2.18A}
\partial^+ F(q)=\bigcap_{i=1}^n\partial^+f_i(\alpha^i),\quad\sum_{i=1}^n\alpha^i=q,\quad \sum_{i=1}^n f_i(\alpha^i)=F(q)
\end{equation}
(see \cite[Chapter D, Corollary 4.5.5]{HirUrrLem01}) shows that $F'(b(\widehat x))$ exists if any of the functions $f_i$ is differentiable at $\alpha^i$, satisfying (\ref{2.18A}).
The next result shows that the static strategy $q=b(\widehat x)$ is indeed optimal.
\begin{theorem} \label{th:2}
Assume that $f_i$ are concave. A static strategy $b(x_0)\in\mathscr A(x_0)$, $x_0\in (0,1)$ is optimal if and only if $x_0$ coincides with the solution $\widehat x$ of (\ref{2.18}).
\end{theorem}
\begin{proof} The necessity is proved in Lemma \ref{lem:5}. It remains to prove that $b(\widehat x)\in\mathscr A(\widehat x)$ is optimal. If $v$ is not differentiable at $\widehat x$, the result follows from Lemma \ref{lem:5}. Assume that $v$ is continuously differentiable.
The convex function $\widehat F$ is continuously differentiable on a co-countable set $U\subset\mathbb R$. Furthermore, $v$ is twice differentiable a.e., and $v''\le 0$ a.e., since $v'$ is decreasing. Hence, $\widehat F(v'(x))$ is differentiable on the co-countable set $(v')^{-1}(U)=\{x\in (0,1):v'(x)\in U\}$.
Differentiating the HJB equation (\ref{2.9}),
by the chain rule we obtain
$$ (\beta-b'(x))v'(x)=v''(x) \left(b(x)+\widehat F'(v'(x))\right)\quad a.e.$$
The inequalities
$$\beta-b'(x)<0,\quad x\in (0,\widehat x);\quad \beta-b'(x)>0,\quad x\in (\widehat x,1)$$
imply that $v''(x)<0$ a.e. and
\begin{equation} \label{2.19}
b(x)+\widehat F'(v'(x))>0,\quad \textrm{a.e. on}\ (0,\widehat x),\quad
b(x)+\widehat F'(v'(x))<0,\quad \textrm{a.e. on}\ (\widehat x,1).
\end{equation}
Since $v'$ is continuous and strictly decreasing we get the inequalities
$$ b(\widehat x)+\widehat F'_+(v'(\widehat x))\ge 0\ge b(\widehat x)+\widehat F'_-(v'(\widehat x)).$$
Using (\ref{2.11}), (\ref{2.13}), we obtain
\begin{equation} \label{2.20}
b(\widehat x)\in -\partial \widehat F(v'(\widehat x))=\arg\max_{q\in [0,\overline q]}\{F(q)-v'(\widehat x) q\}.
\end{equation}
It follows that the static strategy $q_t=b(\widehat x)\in\mathscr A(\widehat x)$ is optimal:
$$ \beta v(\widehat x)=b(\widehat x) v'(\widehat x)+\widehat F(v'(\widehat x))=F(b(\widehat x)),\quad v(\widehat x)=J(\widehat x,b(\widehat x)).\qedhere$$
\end{proof}
We turn to the analysis of optimal strategies $q\in\mathscr A(x)$ for $x\neq\widehat x$.
Put
\begin{equation} \label{2.21}
\widehat q(z)=-\partial\widehat F(z).
\end{equation}
On the co-countable set $U$, where $\widehat F$ is differentiable, the mapping (\ref{2.21}) is single-valued.
By (\ref{2.20}) we have
$$ \widehat q(v'(x))=\arg\max_{q\in [0,\overline q]} (F(q)-q v'(x)),\quad v'(x)\in U.$$
Note that $H_z(x,z)=b(x)-\widehat q(z)$, $z\in U$. From (\ref{2.19}) we know that
$$ H_z(x,v'(x))>0,\quad \textrm{a.e. on}\ (0,\widehat x),\qquad
H_z(x,v'(x))<0,\quad \textrm{a.e. on}\ (\widehat x,1).$$
We want to use $\widehat q(v'(x))$ as a \emph{feedback control}, formally considering the equation
$$ \dot X=b(X)-\widehat q(v'(X))=H_z(X,v'(X)),\quad X_0=x.$$
To do it in a rigorous way let us first introduce
$$ \tau^x=\int_x^{\widehat x} \frac{du}{H_z(u,v'(u))}.$$
This definition allows $\tau^x$ to be infinite. Let $x<\widehat x$ (resp., $x>\widehat x$). Then the mapping
$$ \Psi(y)=\int_x^y \frac{du}{H_z(u,v'(u))},\quad \Psi:(x,\widehat x)\mapsto (0,\tau^x)\quad (\textrm{resp.}, \Psi:(\widehat x,x)\mapsto (0,\tau^x))$$
is a bijection.
\begin{lemma} \label{lem:6}
Let $\psi:[a,b]\mapsto\mathbb R$ be continuous and strictly monotonic. Then $\psi^{-1}$ is absolutely continuous if and only if $\psi'\neq 0$ a.e. on $(a,b)$.
\end{lemma}
By Lemma \ref{lem:6}, whose proof can be found in \cite{Vill84} (Theorem 2), the equation
\begin{equation} \label{2.22}
t=\int_x^{Y_t} \frac{du}{H_z(u,v'(u))}
\end{equation}
uniquely defines a locally absolutely continuous function $Y_t$, $t\in (0,\tau^x)$. Moreover, $Y$ is strictly increasing if $x<\widehat x$ and strictly decreasing if $x>\widehat x$. From (\ref{2.22}) we get
\begin{equation} \label{2.23}
\dot Y_t=H_z(Y_t,v'(Y_t))=b(Y_t)-\widehat q(v'(Y_t))\quad \textrm{a.e. on}\ (0,\tau^x),\quad Y_0=x.
\end{equation}
\begin{theorem} \label{th:3}
Let $f_i$ be concave and $x\neq\widehat x$. Put $\mathscr T=\{t\in (0,\tau^x):v'(Y_t)\in U\}$, where $Y$ is defined by (\ref{2.22}). Define the strategy
$$q^*_t=\widehat q(v'(Y_t)),\quad t\in\mathscr T.$$
On the countable set $(0,\tau^x)\backslash\mathscr T$ the values $q^*_t$ can be defined in an arbitrary way. If $\tau^x$ is finite put
$$ q^*_t=b(\widehat x),\quad t\ge\tau^x.$$
The strategy $q^*\in\mathscr A(x)$ is optimal.
\end{theorem}
\begin{proof}
The equality (\ref{2.23}) means that $Y_t=X^{x,q^*}$ on $(0,\tau^x)$.
Furthermore, $X^{x,q^*}=\widehat x$ on $[\tau^x,\infty)$ by the definition of $q^*$. Clearly, $q^*$ is admissible.
To prove that $q^*$ is optimal it is enough to show that
$$ W_t=\int_0^t e^{-\beta s} F(q^*_s)\,ds+e^{-\beta t}v(X_t^{x,q^*})$$
is constant, since then
$$ W_0=v(x)=\lim_{t\to\infty} W_t=\int_0^\infty e^{-\beta s} F(q^*_s)\,ds.$$
We have
\begin{align*}
\dot W_t &= e^{-\beta t} F(q^*_t)+e^{-\beta t}\left(-\beta v(X_t^{x,q^*})+v'(X_t^{x,q^*})(b(X_t^{x,q^*})-q^*_t)\right)\\
&= e^{-\beta t}(-\beta v(X_t^{x,q^*})+H(X_t^{x,q^*},v'(X_t^{x,q^*})))=0\quad \textrm{a.e. on}\ (0,\tau^x).
\end{align*}
For $t>\tau^x$ we have
\begin{align*}
W_t &=\int_0^{\tau^x} e^{-\beta s} F(q^*_s)\,ds+\frac{F(b(\widehat x))}{\beta}(e^{-\beta\tau^x}-e^{-\beta t})+e^{-\beta t}v(\widehat x)\\
&=\int_0^{\tau^x} e^{-\beta s} F(q_s^*)\,ds +\frac{F(b(\widehat x))}{\beta}e^{-\beta\tau^x},
\end{align*}
since $v(\widehat x)=F(b(\widehat x))/\beta$ by the optimality of the static strategy $b(\widehat x)$.
\end{proof}
From Theorem \ref{th:3} we see that if the solution $\widehat x$ of (\ref{2.18}) exists, then it attracts any optimal trajectory. Moreover, $X^{x,q^*}$ is strictly increasing (resp., decreasing) on $(0,\tau^x)$, if $x<\widehat x$ (resp. $x>\widehat x$).
We also mention that the multivalued feedback control $\widehat q(v'(x))$ satisfies the inequalities
\begin{equation} \label{2.24}
b(x)>\widehat q(v'(x)),\quad x\in (0,\widehat x);\quad b(x)<\widehat q(v'(x)),\quad x\in (\widehat x,1).
\end{equation}
Indeed, $\widehat q(z)=-\partial\widehat F(z)$ is a non-increasing multivalued mapping. For $x$ in the co-countable set $(v')^{-1}(U)$ the mapping $x\mapsto\widehat q(v'(x))$ is single-valued, non-decreasing and satisfies the inequalities (\ref{2.19}). Thus, in any neighbourhood of a point $x\neq\widehat x$ there exist $x_1<x$, $x_2>x$ such that
$$ \widehat q(v'(x_1))\le\widehat q(v'(x))\le\widehat q(v'(x_2)),$$
where $\widehat q(v'(x_i))$ are single-valued and satisfy (\ref{2.19}). It easily follows that
\begin{equation} \label{2.25}
b(x)\ge\widehat q(v'(x)),\quad x\in (0,\widehat x);\quad b(x)\le\widehat q(v'(x)),\quad x\in (\widehat x,1).
\end{equation}
Assume that $b(x_0)\in\widehat q(v'(x_0))$, $x_0\neq\widehat x$. Then from the HJB equation (\ref{2.9}) it follows that $q=b(x_0)\in\mathscr A(x_0)$ is an optimal strategy: $\beta v(x_0)=F(b(x_0))$, in contradiction with Lemma \ref{lem:5}. Thus, the inequalities (\ref{2.25}) are strict.
\section{Cooperative harvesting problem: the case of non-concave revenues}
\label{sec:3}
\setcounter{equation}{0}
Now we drop the assumption that $f_i$ are concave. Let us extend the class of harvesting strategies. A family $(\mu_t(dx))_{t\ge 0}$ of probability measures on $[0,\overline q]$ is called a \emph{relaxed control} if the function
$$t\mapsto \int_0^{\overline q} \varphi(y)\,\mu_t(dy)$$
is measurable for any continuous function $\varphi$. A relaxed control $\mu$ induces the dynamics
$$ X_t=x+\int_0^t b(X_s)\,ds-\int_0^t \int_0^{\overline q} y \mu_s(dy)\,ds.$$
The related value function is defined as follows
\begin{equation} \label{3.1}
v_r(x)=\sup_{\mu\in\mathscr A^r(x)} J^r(x,\mu),\quad
J^r(x,\mu)=\int_0^\infty e^{-\beta t}\int_0^{\overline q} F(y) \mu_t(dy)\,dt,\quad x\in [0,1],
\end{equation}
where $\mathscr A^r(x)=\{\mu: X^{x,\mu}_t\ge 0,\ t\ge 0\}$ is the class of admissible relaxed controls.
Denote by $\widetilde F$ the concave hull of $F$: $\widetilde F=-(-F)^{**}$. Let
\begin{equation} \label{3.2}
\widetilde v(x)=\sup_{q\in\mathscr A(x)}\widetilde J(x,q),\quad \widetilde J(x,q)=\int_0^\infty e^{-\beta t} \widetilde F(q_t)\,dt
\end{equation}
be the related value function. Note that by (\ref{2.3}) and the properties of infimal convolution (\cite{IofTih79}, Chapter 3, \S\,3.4, Theorem 1) we have
$$ -\widetilde F=(-F)^{**}=(-f_1)^{**}\oplus\dots\oplus(-f_n)^{**}=(-\widetilde f_1)\oplus\dots\oplus(-\widetilde f_n),$$
where $\widetilde f_i$ is the concave hull of $f_i$. Hence,
\begin{equation} \label{3.3}
\widetilde F(q)=\sup\{\widetilde f_1(\alpha_1)+\dots+\widetilde f_n(\alpha_n):\alpha_1+\dots+\alpha_n=q\}.
\end{equation}
Since $\widetilde F\ge F$ it follows that $\widetilde v\ge v$.
By the Jensen inequality we have
$$ J^r(x,\mu)\le \int_0^\infty e^{-\beta t}\int_0^{\overline q} \widetilde F(y) \mu_t(dy)\,dt
\le\int_0^\infty e^{-\beta t} \widetilde F(q_t)\,dt,$$
where $q_t=\int_0^{\overline q} y\,\mu_t(dy)$ is an admissible control for the problem (\ref{2.4}). Thus,
$$ v(x)\le v_r(x)\le \widetilde v(x).$$
\begin{lemma} \label{lem:7}
For any $p\in [0,\overline q]$ there exist $p_1, p_2\in [0,\overline q]$, $\varkappa\in (0,1)$ such that
$$ p=\varkappa p_1+(1-\varkappa) p_2,\quad \widetilde F(p)=\varkappa F(p_1)+(1-\varkappa) F(p_2).$$
\end{lemma}
The proof of a more general result can be found in \cite{HirUrrLem01} (Chapter E, Proposition 1.3.9(ii)).
Denote by $\widetilde q_t$ the strategy constructed in Theorem \ref{th:3}, with $F$ replaced by $\widetilde F$.
We claim that
\begin{equation} \label{3.4}
\widetilde F(\widetilde q_t)=F(\widetilde q_t),\quad \textrm{a.e. on } (0,\tau^x).
\end{equation}
By construction, $\widetilde q_t$
is the unique maximum point of $q\mapsto \widetilde F(q)-qv'(Y_t)$ on $[0,\overline q]$ for all $t\in \widetilde{\mathscr T}$, where $(0,\tau^x)\backslash\widetilde{\mathscr T}$ is countable. If $\widetilde F(\widetilde q_t)\neq F(\widetilde q_t)$, $t\in \widetilde{\mathscr T}$ then, by Lemma \ref{lem:7}, $\widetilde F$ is affine in an open neighbourhood of $\widetilde q_t$, and
$$\arg\max_{q\in [0,\overline q]}\{\widetilde F(q)-v'(Y_t)q\}$$
contains this neighbourhood: a contradiction.
Furthermore, by Lemma \ref{lem:7} there exist $p_1,p_2\in [0,\overline q]$, $\varkappa\in (0,1)$ such that
\begin{equation} \label{3.5}
b(\widehat x)=\varkappa p_1+(1-\varkappa) p_2,\qquad \widetilde F(b(\widehat x))=\varkappa F(p_1)+(1-\varkappa) F(p_2).
\end{equation}
Consider the relaxed control
\begin{equation} \label{3.6}
\mu_s=\begin{cases}
\delta_{\widetilde q_s},& s<\tau^x,\\
\varkappa\delta_{p_1}+(1-\varkappa)\delta_{p_2},& s\ge\tau^x,
\end{cases}
\end{equation}
where $\delta_a$ is the Dirac measure, concentrated at $a$. By (\ref{3.4}), (\ref{3.5}) we have
$$ J^r(x,\mu)=\int_0^{\tau^x} e^{-\beta t} F(\widetilde q_t)\,dt+\int_{\tau^x}^\infty e^{-\beta t} (\varkappa F(p_1)+(1-\varkappa)F(p_2))\,dt=\widetilde J(x,\widetilde q).$$
Thus, $v_r(x)=\widetilde v(x)$ and the strategy (\ref{3.6}) is optimal for the relaxed problem (\ref{3.1}).
To prove that $v_r(x)=v(x)$ let us construct an approximately optimal strategy
\begin{equation} \label{3.7}
q^\varepsilon\in\mathscr A(x): J(x,q^\varepsilon)\to v_r(x),\quad \varepsilon\to 0.
\end{equation}
We may assume that $p_1\neq p_2$ and $p_1<b(\widehat x)< p_2.$
Otherwise, the strategy (\ref{3.6}) reduces to an ordinary control $\mu_s=\widetilde q_s I_{\{s<\tau^x\}}+b(\widehat x) I_{\{s\ge\tau^x\}}$ and we conclude that $v(x)=v_r(x)=\widetilde v(x)$.
Define $g$ by the equation
\begin{align} \label{3.8}
& \int_{\widehat x-\varepsilon}^{\widehat x}(b(\widehat x)-b(x))\rho(x)\,dx =\int_{\widehat x}^{\widehat x+g(\varepsilon)}
(b(x)-b(\widehat x))\rho(x)\,dx,\\
& \rho(x) =\frac{1}{(b(x)-p_1)(p_2-b(x))}.\nonumber
\end{align}
Note that for sufficiently small $\varepsilon>0$ we have $\rho(x)>0$ on $(\widehat x-\varepsilon,\widehat x+g(\varepsilon))$, so the integrands in (\ref{3.8}) are positive. Clearly, $g(\varepsilon)\downarrow 0$, $\varepsilon\to 0$. Put
\begin{align*}
\tau_1 & =\int_{\widehat x}^{\widehat x+g(\varepsilon)}\frac{dx}{b(x)- p_1},\quad
\tau_2=\int_{\widehat x-\varepsilon}^{\widehat x+g(\varepsilon)}\frac{dx}{p_2-b(x)},\\
\tau_3 &=\int_{\widehat x-\varepsilon}^{\widehat x}\frac{dx}{b(x)-p_1},\quad
\tau=\tau_1+\tau_2+\tau_3.
\end{align*}
For brevity, we omit the dependence of $\tau_i$ on $\varepsilon$. Put
\begin{equation} \label{3.9}
q^\varepsilon_t=\sum_{j=0}^\infty \left(p_1 I_{[j\tau,j\tau+\tau_1)}(t)+p_2 I_{[j\tau+\tau_1,j\tau+\tau_1+\tau_2)}(t) + p_1 I_{[j\tau+\tau_1+\tau_2,(j+1)\tau)}(t)\right).
\end{equation}
The trajectory $X^{\widehat x,q^\varepsilon}$ is periodic:
\begin{align*}
\dot X^{\widehat x,q^\varepsilon}_t&=b(X^{\widehat x,q^\varepsilon}_t)- p_1,\quad (j\tau,j\tau+\tau_1),
\quad X^{\widehat x,q^\varepsilon}_{j\tau}=\widehat x,\\
\dot X^{\widehat x,q^\varepsilon}_t&=b(X^{\widehat x,q^\varepsilon}_t)- p_2,\quad (j\tau+\tau_1,j\tau+\tau_1+\tau_2),
\quad X^{\widehat x,q^\varepsilon}_{j\tau+\tau_1}=\widehat x+g(\varepsilon),\\
\dot X^{\widehat x,q^\varepsilon}_t&=b(X^{\widehat x,q^\varepsilon}_t)- p_1,\quad (j\tau+\tau_1+\tau_2,(j+1)\tau),
\quad X^{\widehat x,q^\varepsilon}_{j\tau+\tau_1+\tau_2}=\widehat x-\varepsilon.
\end{align*}
It sequentially visits the points $\widehat x$, $\widehat x+g(\varepsilon)$, $\widehat x-\varepsilon$, $\widehat x$ and moves monotonically between them. Furthermore,
\begin{align*}
\int_{j\tau}^{(j+1)\tau} e^{-\beta t} F(q_t^\varepsilon)\,dt &=\frac{e^{-\beta j\tau}}{\beta}
\left((1-e^{-\beta \tau_1})F(p_1)+
(e^{-\beta \tau_1}-e^{-\beta (\tau_1+\tau_2)})F(p_2)\right.\\
&\left.+(e^{-\beta (\tau_1+\tau_2)}-e^{-\beta \tau})F(p_1)\right)
\end{align*}
Thus,
\begin{align*}
J(\widehat x,q^\varepsilon)&=\frac{1}{\beta(1-e^{-\beta\tau})}\left((1-e^{-\beta \tau_1})F(p_1)+
(e^{-\beta \tau_1}-e^{-\beta (\tau_1+\tau_2)})F(p_2)\right.\\
&\left.+(e^{-\beta (\tau_1+\tau_2)}-e^{-\beta \tau})F(p_1)\right)=\frac{1}{\beta}\left(\frac{\tau_1+\tau_3}{\tau}F(p_1) +\frac{\tau_2}{\tau}F(p_2)\right)+o(1),\quad\varepsilon\to 0.
\end{align*}
Since
$$ \tau_1=\frac{g(\varepsilon)}{b(\widehat x)-p_1}(1+o(1)),\quad \tau_2=\frac{g(\varepsilon)+\varepsilon}{p_2-b(\widehat x)}(1+o(1)),\quad \tau_3=\frac{\varepsilon}{b(\widehat x)-p_1}(1+o(1)),$$
using (\ref{3.5}), we get
$$ \frac{\tau_1+\tau_3}{\tau_2}=\frac{p_2-b(\widehat x)}{b(\widehat x)-p_1}=\frac{\varkappa}{1-\varkappa},$$
$$ \frac{\tau_1+\tau_3}{\tau}=\frac{1}{1+\tau_2/(\tau_1+\tau_3)}=\varkappa,\qquad
\frac{\tau_2}{\tau}=\frac{1}{1+(\tau_1+\tau_3)/\tau_2}=1-\varkappa.$$
Thus,
$$\lim_{\varepsilon\to 0}J(\widehat x,q^\varepsilon)=\frac{1}{\beta}(\varkappa F(p_1)+(1-\varkappa) F(p_2))=\frac{\widetilde F(b(\widehat x))}{\beta}=v(\widehat x).$$
Combining $\widetilde q$ on $(0,\tau^x)$ with the strategy (\ref{3.9}) afterwards, we obtain a strategy satisfying (\ref{3.7}), and $v(x)=v_r(x)=\widetilde v(x)$. The obtained results are summarized below.
\begin{theorem}
The value functions (\ref{2.2}), (\ref{3.1}), (\ref{3.2}) coincide: $v=v_r=\widetilde v$. By Theorem \ref{th:1}, applied to (\ref{3.2}), $v$ is strictly increasing, strictly concave and continuously differentiable on $(0,1)$, except maybe at the point $\widehat x$. If $\widetilde F$ is differentiable at $b(\widehat x)$, then $v$ is continuously differentiable. The strategy (\ref{3.6}) is optimal for the relaxed problem (\ref{3.1}).
\end{theorem}
\section{Rational taxation}
\label{sec:4}
\setcounter{equation}{0}
Assume that a regulator imposes the proportional tax $v'(x)\alpha$ on the fishing intensity $\alpha$. Then the myopic agents take their optimal strategies from the sets
$$ \widehat\alpha^i(x)=\arg\max_{u\in [0,\overline \alpha^i]}\{f_i(u) -v'(x) u\}.$$
The direct implementation of such feedback controls may cause technical problems, since the related equation (\ref{2.1})
can be unsolvable. Instead of continuous change of the tax $v'(X_t)$, a more realistic approach consists in its fixing for some periods of time: $v'(X_{\tau_j})$, $t\in [\tau_j,\tau_{j+1})$. In this case agents also fix their strategies:
$$\alpha^i_t=\alpha^i_{\tau_j}\in\arg\max_{u\in [0,\overline \alpha^i]}\{f_i(u) -v'(X_{\tau_j}) u\},\quad t\in [\tau_j,\tau_{j+1}).$$
This scheme results in ``step-by-step positional control'' (see \cite{KraSub88}), defined recursively by the formulas:
\begin{align}
X^{x,\alpha}_0&=x,\nonumber\\
\alpha_t^i&=\alpha_{\tau_j}^i\in\arg\max_{u\in [0,\overline \alpha^i]}\{f_i(u) -v'(X_{\tau_j}^{x,\alpha}) u\},\quad t\in [\tau_j,\tau_{j+1}),\label{4.1}\\
X_t^{x,\alpha} &=X_{\tau_j}^{x,\alpha}+\int_{\tau_j}^t b(X_s^{x,\alpha})\,ds- \sum_{i=1}^n \alpha^i_{\tau_j}\cdot(t-\tau_j),\quad t\in [\tau_j,\tau_{j+1}),\nonumber\\
0&=\tau_0<\dots<\tau_j<\dots,\quad \tau_j\to\infty,\quad j\to\infty, \label{4.2}
\end{align}
bypassing at the same time the mentioned technical problems.
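As an illustration, the following sketch simulates this step-by-step positional control by the Euler scheme (a minimal sketch only; the callables \texttt{b}, \texttt{vprime} and \texttt{f}, as well as the grid-based maximization, are hypothetical stand-ins):
\begin{verbatim}
import numpy as np

def simulate(x0, b, vprime, f, alpha_max, taus, dt=1e-3):
    # Simulate (4.1)-(4.2): the tax v'(X) is frozen at the times taus,
    # and between updates each agent keeps its myopic best response.
    grids = [np.linspace(0.0, am, 200) for am in alpha_max]
    x, t, k = x0, 0.0, 0
    alphas, traj = np.zeros(len(alpha_max)), [x0]
    while t < taus[-1]:
        if k < len(taus) and t >= taus[k]:  # tax update time tau_k
            tax = vprime(x)
            # agent i myopically maximizes f(i, u) - tax * u on its grid
            alphas = np.array([g[np.argmax(f(i, g) - tax * g)]
                               for i, g in enumerate(grids)])
            k += 1
        x = max(x + dt * (b(x) - alphas.sum()), 0.0)  # Euler step of (2.1)
        t += dt
        traj.append(x)
    return np.array(traj)
\end{verbatim}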
\begin{theorem}
Let $\widetilde F'(b(\widehat x))$ exist. Then for any $\varepsilon>0$, $\delta>0$ there exists a sequence (\ref{4.2}) such that the strategy (\ref{4.1}) is approximately optimal: $J_n(x,\alpha)\ge v(x)-\varepsilon$, and stabilizing in the following sense:
$$|X_t^{x,\alpha}-\widehat x|<\delta,\quad t\ge \overline t(x,\varepsilon,\delta).$$
\end{theorem}
\begin{proof}
First note that
$$ \widehat\alpha^i(z):=\arg\max_{u\in[0,\overline\alpha^i]}(f_i(u)-zu)\subset
\widetilde\alpha^i(z):=\arg\max_{u\in[0,\overline\alpha^i]}(\widetilde f_i(u)-zu).$$
Indeed, if $u^*\in\widehat\alpha^i(z)$, then $-z\in\partial(-f_i)(u^*)$ and $u^*\in\partial (-f_i)^*(-z)$: see \cite[Chapter E, Proposition 1.4.3]{HirUrrLem01}. But, by (\ref{2.11}),
$$\partial (-f_i)^*(-z)=\arg\max_{u\in[0,\overline\alpha^i]}(-zu-(-f_i)^{**}(u))=\arg\max_{u\in[0,\overline\alpha^i]}(\widetilde f_i(u)-zu)=\widetilde\alpha^i(z).$$
Furthermore, from the representation (\ref{3.3}) we get
$$\max_{q\in[0,\overline q]}\{\widetilde F(q)-zq\}
=\sum_{i=1}^n\max_{\alpha_i\in[0,\overline\alpha^i]}\{\widetilde f_i(\alpha_i)-z\alpha_i\}
$$
(see also (\ref{2.6})). Thus,
\begin{equation} \label{4.3}
\widetilde q(z):=\arg\max_{q\in[0,\overline q]}(\widetilde F(q)-zq)=\sum_{i=1}^n\widetilde\alpha^i(z) \supset\sum_{i=1}^n\widehat\alpha^i(z).
\end{equation}
From (\ref{2.24}) it then follows that
\begin{equation} \label{4.4}
b(x)>\sum_{i=1}^n\widehat\alpha^i(v'(x)),\quad x\in(0,\widehat x),\quad
b(x)<\sum_{i=1}^n\widehat\alpha^i(v'(x)),\quad x\in(\widehat x,1).
\end{equation}
The subsequent argument follows the introductory section of \cite{IshKoi00}.
For any $x_0\in(0,1)$ and any $\alpha_0^i\in\widehat\alpha^i(v'(x_0))$ we have
$$\beta v(x_0)=\left(b(x_0)-\sum_{i=1}^n\alpha_0^i\right)v'(x_0)+\sum_{i=1}^n f_i(\alpha_0^i).$$
Put $$\psi (x,\alpha)=-\beta v(x)+\left(b(x)-\sum_{i=1}^n\alpha^i\right)v'(x)+\sum_{i=1}^n f_i(\alpha^i)$$
and define the time moment
\begin{align}
\tau_1 &=\inf\{t\ge 0:\psi(X_t^{x_0,\alpha_0},\alpha_0)<-\beta\varepsilon\ \textrm{or } X_t^{x_0,\alpha_0}>\widehat x+\delta\}, \quad x_0\in (0,\widehat x),\label{4.5}\\
\tau_1 &=\inf\{t\ge 0:\psi(X_t^{x_0,\alpha_0},\alpha_0)<-\beta\varepsilon\ \textrm{or } X_t^{x_0,\alpha_0}<\widehat x-\delta\}, \quad x_0\in (\widehat x,1),\label{4.6}\\
\tau_1 &=\inf\{t\ge 0:\psi(X_t^{x_0,\alpha_0},\alpha_0)<-\beta\varepsilon\ \textrm{or } X_t^{x_0,\alpha_0}\not\in(\widehat x-\delta,\widehat x+\delta)\},\quad x_0=\widehat x.\label{4.7}
\end{align}
For $t\in [0,\tau_1]$ in each of the cases (\ref{4.5}), (\ref{4.6}), (\ref{4.7}) we have respectively
$$X_t^{x_0,\alpha_0}\in [x_0,\widehat x+\delta],\quad X_t^{x_0,\alpha_0}\in [\widehat x-\delta,x_0],\quad
X_t^{x_0,\alpha_0}\in [\widehat x-\delta,\widehat x+\delta].$$
Assume that $x_{k-1}$, $\alpha_{k-1}$, $\tau_k$ are defined. Put $$x_k=X_{\tau_k}^{x_{k-1},\alpha_{k-1}},\quad \alpha_k^i\in\widehat\alpha^i(v'(x_k)),$$
\begin{align}
\tau_{k+1} &=\inf\{t\ge\tau_k:\psi(X_t^{x_k,\alpha_k},\alpha_k)<-\beta\varepsilon\ \textrm{or } X_t^{x_k,\alpha_k}>\widehat x+\delta\}, \quad x_k\in (0,\widehat x),\label{4.8}\\
\tau_{k+1} &=\inf\{t\ge\tau_k:\psi(X_t^{x_k,\alpha_k},\alpha_k)<-\beta\varepsilon\ \textrm{or } X_t^{x_k,\alpha_k}<\widehat x-\delta\}, \quad x_k\in (\widehat x,1),\label{4.9}\\
\tau_{k+1} &=\inf\{t\ge\tau_k:\psi(X_t^{x_k,\alpha_k},\alpha_k)<-\beta\varepsilon\ \textrm{or } X_t^{x_k,\alpha_k}\not\in(\widehat x-\delta,\widehat x+\delta)\},\quad x_k=\widehat x.\label{4.10}
\end{align}
The function $x\mapsto\psi(x,\alpha)$ is uniformly continuous on any interval $[a,b]\subset(0,1)$ uniformly in $\alpha\in [0,\overline q]$. Thus, there exists $\delta'$ such that if
$$|\psi(x,\alpha)-\psi(y,\alpha)|\ge\beta\varepsilon,\quad [x,y]\subset [a,b],$$
then $|x-y|\ge\delta'$. Assume that $\psi(X_{\tau_{k+1}}^{x_k,\alpha_k},\alpha_k)=-\beta\varepsilon$. Since $\psi(x_k,\alpha_k)=0$, we get
$$\delta'\le |X_{\tau_{k+1}}^{x_k,\alpha_k}-x_k|\le\int_{\tau_k}^{\tau_{k+1}} b(X_t^{x_k,\alpha_k})\,dt+\int_{\tau_k}^{\tau_{k+1}}\sum_{i=1}^n\alpha_k^i\,dt \le(\overline b+\overline q)(\tau_{k+1}-\tau_k),$$
where $\overline b=\max_{x\in [0,1]} b(x)$.
Furthermore, if $\psi(X_{\tau_{k+1}}^{x_k,\alpha_k},\alpha_k)>-\beta\varepsilon$ and $\tau_{k+1}<\infty$, then in any of the three cases (\ref{4.8}), (\ref{4.9}), (\ref{4.10}) we have
$$\delta\le |X_{\tau_{k+1}}^{x_k,\alpha_k}-x_k|\le (\overline b+\overline q)(\tau_{k+1}-\tau_k).$$
Thus, the differences $\tau_{k+1}-\tau_k$ are uniformly bounded from below by a positive constant, and the strategy
$\alpha=\sum_{k=0}^\infty\alpha_k I_{[\tau_k,\tau_{k+1})}(t)$ is well defined for all $t\ge 0$. Note that $X^{x_0,\alpha}_t$ belongs to one of the sets $[x_0,\widehat x+\delta]$, $[\widehat x-\delta,x_0]$, $[\widehat x-\delta,\widehat x+\delta]$ for all $t\ge 0$.
By the Berge maximum theorem (see \cite[Theorem 17.31]{AliBor06}) the set-valued mapping $\widehat\alpha$ is upper hemicontinuous, hence its graph is closed (see \cite[Theorem 17.10]{AliBor06}). From (\ref{4.4}) it then follows that there is a finite gap between $b(x)$ and $\sum_{i=1}^n\widehat\alpha^i(v'(x))$ on $(0,\widehat x-\delta)\cup(\widehat x+\delta,1)$. Thus, $|\dot X^{\alpha,x_0}|$ is uniformly bounded from below by a positive constant, when $X^{\alpha,x_0}\in (0,\widehat x-\delta)\cup(\widehat x+\delta,1)$. This property implies that $X^{\alpha,x_0}$ reaches the neighbourhood $[\widehat x-\delta,\widehat x+\delta]$ in finite time $\overline t(x,\varepsilon,\delta)$. After reaching this neighbourhood, $X^{\alpha,x_0}$ remains in it forever by the construction of $\alpha$.
It remains to prove that $\alpha$ is $\varepsilon$-optimal. We have
$$ -\beta v(X_t^{x_k,\alpha_k})+\left(b(X_t^{x_k,\alpha_k})-\sum_{i=1}^n\alpha_k^i\right)v'(X_t^{x_k,\alpha_k})+\sum_{i=1}^n f_i(\alpha_k^i)\ge-\beta\varepsilon,\quad t\in (\tau_k,\tau_{k+1}).$$
After multiplication by $e^{-\beta t}$ and integration we get
$$ e^{-\beta\tau_{k+1}} v(X_{\tau_{k+1}}^{x_k,\alpha_k})-e^{-\beta\tau_k} v(X_{\tau_k}^{x_k,\alpha_k})+\int_{\tau_k}^{\tau_{k+1}} e^{-\beta t}\sum_{i=1}^n f_i(\alpha_k^i)\,dt\ge\varepsilon(e^{-\beta\tau_{k+1}}-e^{-\beta\tau_k}).$$
Summing up and passing to the limit we obtain the desired inequality:
$$ \int_0^\infty e^{-\beta t}\sum_{i=1}^n f_i(\alpha_t^i)\,dt\ge v(x_0)-\varepsilon. \qedhere$$
\end{proof}
As an example, consider the problem with $n$ identical agents and assume that their common profit function is linear: $f_i(u)=f(u)=u$, $u\in[0,\overline\alpha]$. The HJB equation (\ref{2.9}) takes the form
$$\beta v(x)=b(x)v'(x)+ n \max_{u\in [0,\overline\alpha]}(u-v'(x)u).$$
From (\ref{2.20}) it follows that $v'(\widehat x)=1$. Thus,
\begin{equation} \label{4.11}
v'(x)>1,\quad x<\widehat x,\quad v'(x)<1,\quad x>\widehat x
\end{equation}
and $v$ satisfies the equations
$$ \beta v(x)=b(x)v'(x),\quad x<\widehat x;\qquad \beta v(x)=(b(x)-n\overline\alpha)v'(x)+ n\overline\alpha,\quad x>\widehat x.$$
Solving these equations, by the uniqueness result, given in Lemma \ref{lem:3}, we infer that
$$ v(x)=\frac{b(\widehat x)}{\beta}\exp\left(-\int_x^{\widehat x}\frac{\beta}{b(y)}\,dy\right),\quad x\in (0,\widehat x],$$
$$ v(x)=\frac{1}{\beta}(b(\widehat x)-n\overline\alpha)\exp\left(\int_{\widehat x}^x\frac{\beta}{b(y)-\overline\alpha n}\,dy\right)+\frac{1}{\beta} n\overline \alpha,\quad x\in [\widehat x,1].$$
For the biomass quantities $x$ below the critical level $\widehat x$ the tax $v'(x)$ does not depend on $n$:
$$ v'(x)=\frac{b(\widehat x)}{b(x)}\exp\left(-\int_x^{\widehat x}\frac{\beta}{b(y)}\,dy\right),\quad x\in (0,\widehat x].$$
For larger values of $x$ we have
$$ v'(x)=\frac{n\overline\alpha-b(\widehat x)}{n\overline\alpha-b(x)}\exp\left(-\int_{\widehat x}^x\frac{\beta}{n\overline\alpha-b(y)}\,dy\right),\quad x\in [\widehat x,1].$$
In particular, $v'(x)\to f'(0)=1$, $n\to\infty$.
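For instance, for the Verhulst growth function $b(x)=x(1-x)$ the tax below the critical level admits a closed form: since $\int \beta\,dy/(y(1-y))=\beta\ln(y/(1-y))+C$, we get
$$ v'(x)=\frac{b(\widehat x)}{b(x)}\left(\frac{x(1-\widehat x)}{\widehat x(1-x)}\right)^{\beta},\quad x\in(0,\widehat x],\qquad \widehat x=\frac{1-\beta}{2},\quad 0<\beta<1.$$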
Note that a tax stimulating an optimal cooperative behavior is by no means unique. For instance, any tax satisfying (\ref{4.11}) can serve this purpose. So, the most interesting quantity is the ``critical tax''
\begin{equation} \label{4.12}
v'(\widehat x)=\widetilde F'(b(\widehat x)).
\end{equation}
The equality (\ref{4.12}) follows from (\ref{2.20}). Consider
$\widetilde F$ as the value function of the elementary problem (\ref{3.3}),
where the artificial agents with concave revenues $\widetilde f_i$ cooperatively distribute some given harvesting intensity $q$. Formula (\ref{4.12}) shows that $v'(\widehat x)$ is simply the shadow price of the critical growth rate $b(\widehat x)$ within this problem.
We are interested in the dependence of the critical tax $v'(\widehat x)$ on the size of agent community. Consider again $n$ identical agents with the revenue functions $f_i=f$. If $f$ is linear, the critical tax, as we have seen, does not depend on $n$. Assume now that $f$ is differentiable and strictly concave. Then by (\ref{2.20}) and (\ref{4.3}) we get
$$b(\widehat x)\in \sum_{i=1}^n\arg\max_{u\in[0,\overline\alpha]}\{f(u)-v'(\widehat x)u\}$$
Taking optimal values of $u$ to be equal, we conclude that
$v'(\widehat x)=f'(b(\widehat x)/n)$. Thus, $v'(\widehat x)$ is increasing in $n$, and $v'(\widehat x)\to f'(0)$, $n\to\infty$. Our final result shows that this situation is typical: the critical tax can only increase, when the agent community widens.
\begin{theorem}
Denote by $F_n$, $F_{n+m}$ and $v_n$, $v_{n+m}$ the cooperative instantaneous revenue functions (\ref{2.3}) and the value functions (\ref{2.2}), corresponding to the agent communities
$$ \{f_i\}_{i=1}^n\subset\{f_i\}_{i=1}^{n+m}.$$
Assume that $\widetilde F'_n(b(\widehat x))$, $\widetilde F'_{n+m}(b(\widehat x))$ exist. Then
$$ v'_n(\widehat x)=\widetilde F'_n(b(\widehat x))\le v'_{n+m}(\widehat x)=\widetilde F'_{n+m}(b(\widehat x)).$$
\end{theorem}
\begin{proof}
It is enough to consider the case $m=1$. By the associativity of the infimal convolution we have
$$ (-\widetilde F_{n+1})(q)=((-\widetilde F_n)\oplus(-\widetilde f_{n+1}))(q).$$
The formula for the subdifferential of an infimal convolution, given in \cite[Chapter D, Corollary 4.5.5]{HirUrrLem01}, implies that
$$ \partial(-\widetilde F_{n+1})(q)\subseteq\bigcup_{u}\partial (-\widetilde F_n)(u)\cap\partial(-\widetilde f_{n+1})(q-u)\subseteq\bigcup_{u\in [0,q]}\partial (-\widetilde F_n)(u).$$
But since the set-valued mapping $u\mapsto\partial (-\widetilde F_n)(u)$ is non-decreasing, we have
$$ \partial(-\widetilde F_{n+1})(q)\le\partial (-\widetilde F_n)(q),\quad q\in [0,\overline q].$$
Thus, $\widetilde F'_{n+1}(b(\widehat x))\ge \widetilde F'_n(b(\widehat x))$.
\end{proof}
A similar result for a discrete time problem was proved in \cite[Theorem 3]{Rok00}.
\bibliographystyle{plain}
\bibliography{litFish}
\end{document} | {"config": "arxiv", "file": "1602.07123/Fish5.tex"} |
TITLE: 3 year structured deposit
QUESTION [0 upvotes]: A bank has launched a three year structured deposit that offers an effective rate of interest of $8$% per annum for the first $18$ months, $1.5$% per quarter for the next $6$ months and $2$% per half year for the last $12$ months. If I wish to accumulate $100,000$ on the maturity date, how much should I invest?
$100,000=X[(1.08)^{1.5}+(1.08)^{1.5}(1.00375)^2+(1.08)^{1.5}(1.00375)^2(1.01)^2]$
Solving for $X$ does not give the answer.
REPLY [1 votes]: The future value (FV) of the investment $X$ at the end of 18 months is
$$FV_{18}= X(1.08)^{1.5}$$
This becomes the present value for the next period, which has a different interest rate. Over the next 6 months ($2$ quarters at $1.5$% per quarter),
$$FV_{24} = (X(1.08)^{1.5})\times(1.015)^2,$$
and over the final 12 months ($2$ half-years at $2$% per half year),
$$FV_{36} = ((X(1.08)^{1.5})\times(1.015)^2)\times (1.02)^2.$$
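Numerically, the accumulation factor is $(1.08)^{1.5}(1.015)^2(1.02)^2 \approx 1.1224 \times 1.030225 \times 1.0404 \approx 1.2030$.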
Setting $FV_{36} = 100,000$ and solving gives $X = 100,000/1.2030 \approx 83,125$. | {"set_name": "stack_exchange", "score": 0, "question_id": 1896856}
TITLE: probabilty that the students will pass
QUESTION [0 upvotes]: The University of Metropolis requires its students to pass an examination in college-level mathematics before they can graduate. The students are given three chances to pass the exam; 61% pass it on their first attempt, 64% of those that take it a second time pass it then, and 47% of those that take it a third time pass it then. (Assume that all students who do not pass the first or second time elect to take the test again.)
What percent of the students take the test three times? (Round your answer to one decimal place.)
So basically I have to figure out what percentage of the students must take the test three times. It seems to me that I have insufficient information, because I don't know how many students take each test.
REPLY [1 votes]: Here's a decision tree. Can you see which value is the percentage of people who failed the first two tests? Those are exactly the students who take the test a third time: they fail the first attempt with probability $1-0.61=0.39$ and then fail the second with probability $1-0.64=0.36$, so the answer is $0.39\times 0.36=0.1404\approx 14.0\%$. | {"set_name": "stack_exchange", "score": 0, "question_id": 1247655}
{\bf Problem.} Let $x_1,$ $x_2,$ $\dots,$ $x_n$ be nonnegative real numbers such that $x_1 + x_2 + \dots + x_n = 1$ and
\[x_1^2 + x_2^2 + \dots + x_n^2 \le \frac{1}{100}.\]Find the smallest possible value of $n.$
{\bf Level.} Level 3
{\bf Type.} Intermediate Algebra
{\bf Solution.} By QM-AM,
\[\sqrt{\frac{x_1^2 + x_2^2 + \dots + x_n^2}{n}} \ge \frac{x_1 + x_2 + \dots + x_n}{n}.\]Then
\[\frac{1}{n} \le \sqrt{\frac{x_1^2 + x_2^2 + \dots + x_n^2}{n}} \le \sqrt{\frac{1}{100n}}.\]Hence,
\[\frac{1}{n^2} \le \frac{1}{100n},\]and $n \ge 100.$
For $n = 100,$ we can take $x_i = \frac{1}{100}$ for all $i,$ so the smallest such $n$ is $\boxed{100}.$ | {"set_name": "MATH"} |
TITLE: Is this measure called anything?
QUESTION [1 upvotes]: Well, I do not know if this is formally a measure, but imagine $f(\mathbf v_1, \mathbf v_2)$:
$$f(\mathbf v_1, \mathbf v_2) = \|\mathbf v_1 - \mathbf v_2\|_{\ell_0}$$
where $\mathbf v_1, \mathbf v_2$ are two vectors of equal size and $\|\cdot\|_{\ell_0}$ is the $\ell_0$ "pseudo-norm", i.e. $\|\mathbf v\|_{\ell_0} = |v_1|^0 + |v_2|^0 + \ldots + |v_\text{last}|^0$, with the convention $0^0=0$.
This "measure" counts the number of non-zero entries in the difference $\mathbf v_1 - \mathbf v_2$ - the number of positions in which the two vectors differ.
Does this have a known name? The reason I am asking is that it is similar to Hamming distance but operates on vector entries rather than individual bits.
REPLY [1 votes]: I am having trouble finding a reference. However, this "norm" is indeed well known, and it is indeed referred to as either the "zero-norm" or the "$L0$-norm". Note that this norm obeys the separation and triangle inequality conditions of a norm, but fails in absolute homogeneity.
This "norm" gets a lot of use in the context of compressed sensing and sparse recovery, a field with a lot of active research. | {"set_name": "stack_exchange", "score": 1, "question_id": 1601913} |
TITLE: Does there exist a continuous surjective map $\mathbb R^2\to S^1$
QUESTION [3 upvotes]: Does there exist a continuous surjective map $\mathbb R^2\to S^1$?
I have found such a map when the domain is $\mathbb R^2\setminus\{0\}$.
But I am not able to find one when $0$ is included in the domain.
Any help will be appreciated.
REPLY [6 votes]: How about $f(x,y)=(\cos x,\sin x)$?
REPLY [2 votes]: Use the projection $\mathbb R^2\to\mathbb R$ followed by the exponential map from $\mathbb R$ to the circle.
REPLY [1 votes]: Take $\pi:\mathbb{R}^2\to \mathbb{R}$ given by $\pi(x,y)=x$, followed by $\phi:\mathbb{R}\to S^1$ given by $\phi(t)=e^{2\pi i t}$. Both of these maps are continuous and it's easy to see that their composition yields a surjection. So, $\phi\circ \pi:\mathbb{R}^2\to S^1$ is a continuous surjection. | {"set_name": "stack_exchange", "score": 3, "question_id": 3063495} |
TITLE: A question about a special Geometric sequence
QUESTION [0 upvotes]: Let $z$ be an element of the unit circle in the complex plane, and consider the series $\sum_{i=0}^{+\infty} z^{i}$. When does this series converge?
For $z\neq 1$ the partial sums are $\sum_{i=0}^{n-1} z^{i}=\frac{1-z^{n}}{1-z}$, so $\sum_{i=0}^{+\infty} z^{i}=\lim_{n\rightarrow \infty} \frac{1-z^{n}}{1-z}$. I don't know how to move on from this step.
REPLY [0 votes]: Let $z = e^{i2\pi\theta}$
and suppose $\theta \in [0,1)\cap \mathbb{Q}$.
Thus let us say $\theta = p/q$ where $p,q \in \mathbb{N}$ with $p < q$, $\gcd(p,q)=1$ and $q \ne 0$.
Then
$$ z^q = e^{i2\pi\theta q} = e^{i2\pi p} = 1 $$
Therefore, for any $n$,
\begin{equation}
z^n = \begin{cases}
1 & \text{if $n \equiv 0$ mod $q$} \\
e^{i2\pi pr/q} & \text{if $n \equiv r$ mod $q$}
\end{cases}
\end{equation}
Thus
Thus, for $z\neq 1$ (i.e. $q\ge 2$), $\frac{1-z^n}{1-z}$ cycles periodically through $q$ distinct values, depending on the residue of $n$ modulo $q$. (For $z=1$, i.e. $\theta=0$, the partial sums equal $n$ and diverge.)
Therefore, the limit does not exist.
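For example, with $\theta=1/4$, $z=i$: the partial sums cycle with period $4$ through $1,\ 1+i,\ i,\ 0,\ 1,\ 1+i,\ \dots$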
When $\theta$ is irrational, one can show that $\{z^n\mid n \in \mathbb{N}\}$ is dense in the unit circle; in particular $z^n\not\to 0$, so the series diverges in this case as well. Hence the series converges for no $z$ on the unit circle. | {"set_name": "stack_exchange", "score": 0, "question_id": 2549554}
\begin{document}
\maketitle
\begin{abstract}
This paper presents a mathematical formalization, algorithms and computation software for volume optimal cycles, which are useful for understanding the geometric features
shown in a persistence diagram.
Volume optimal cycles give us concrete and optimal homologous structures,
such as rings or cavities, in given data.
The key idea is optimality on the $(q + 1)$-chain complex for a $q$th homology generator. This formalization of optimality is suitable for persistent homology. We can solve the optimization problem using linear programming.
For an alpha filtration on $\R^n$, volume optimal cycles for the $(n-1)$th persistence diagram are computable more efficiently
using a merge-tree algorithm.
The merge-tree algorithm also gives us a tree structure on the diagram, and the structure carries richer information. The key mathematical idea is Alexander duality.
\end{abstract}
\section{Introduction}
Topological Data Analysis (TDA)~\cite{carlsson,eh}, which clarifies the geometric features
of data from the viewpoint of topology, is developed rapidly in this century
both in theory and application. In TDA, persistent homology and its persistence
diagram (PD) \cite{elz,zc} are
important tool for TDA. Persistent homology enables us to capture
multiscale topological features effectively and quantitatively.
Fast computation software packages for persistent homology
have been developed \cite{dipha,phat}, and many applications have been achieved, such as
materials science \cite{Hiraoka28062016,granular,PhysRevE.95.012504},
sensor networks \cite{sensor}, the evolution of viruses~\cite{virus}, and so on.
From the viewpoint of data analysis, a PD has some significant properties:
translation and rotation invariance, multiscalability and robustness to noise.
PDs are considered to be compact descriptors for complicated geometric data.
$q$th homology $H_q$ encodes $q$ dimensional geometric structures of data
such as connected components ($q=0$), rings ($q=1$), cavities ($q=2$), etc.
$q$th persistent homology encodes the information
about $q$ dimensional geometric structures with their scale.
A PD, a multiset\footnote{A multiset is a set with multiplicity on each point.}
in $\R\times(\R \cup \{\infty\})$, is used to summarize the information.
Each point in a PD is called a birth-death pair; it represents a homologous
structure in the data, and its scale is encoded on the x- and y-axes.
\begin{figure}[hbtp]
\centering
\includegraphics[width=0.5\hsize]{amorphous.pdf}
\caption{The 1st PD for the atomic configuration of amorphous silica
in \cite{Hiraoka28062016},
reproduced from the simulation data. The data is provided by Dr. Nakamura.
}
\label{fig:amorphous-silica}
\end{figure}
Typical workflow of the data analysis with persistent homology is as follows:
\begin{enumerate}
\item Construct a filtration from data
\begin{itemize}
\item Typical input data is a point cloud, a finite set of points in $\R^n$ and
a typical filtration is an alpha filtration
\end{itemize}
\item Compute the PD from the filtration
\item Analyze the PD to investigate the geometric features of the data
\end{enumerate}
In the last part of the above workflow, we often want to inversely reconstruct
a geometric structure corresponding to each birth-death pair on the PD,
such as a ring or a cavity,
in the original input data.
Such inverse analysis is practically important for the
use of PDs. For example, we consider the 1st PD shown in Fig.~\ref{fig:amorphous-silica}
from the atomic configuration of amorphous silica
computed by molecular dynamics simulation
\cite{Hiraoka28062016}.
In this PD, there are some characteristic bands $C_P, C_T, C_O, B_O$, and these
bands correspond to typical geometric structures in amorphous silica.
To analyze the PD more deeply, we want to reconstruct rings corresponding to
such birth-death pairs in the original data. In that paper,
optimal cycles, one such inverse analysis method, are effectively used
to clarify such typical structures.
\begin{figure}[htbp]
\centering
\includegraphics[width=\hsize]{optcyc_one_hole.pdf}
\caption{A simplicial complex with one hole.}
\label{fig:optcyc_one_hole}
\end{figure}
A representative cycle of a homology generator carries such information,
but it is not unique, and we want to find a better representative cycle
to understand the homology generator
in the analysis of a PD.
For example, the simplicial complex in Fig.~\ref{fig:optcyc_one_hole}(a) has one homology generator on
$H_1$, and the cycles $z_1$, $z_2$, and $z_3$ shown in Fig.~\ref{fig:optcyc_one_hole}
(b), (c), and (d)
carry the same homological information.
However, we consider that $z_3$ is the best for understanding the homology.
Optimization problems on homology are used to find such a representative cycle.
We can find the ``tightest'' representative cycle under a certain formalization.
Such optimization problems have been widely studied
under various settings\cite{optimal-Day,Erickson2005,Chen2011}, and
two concepts, optimal cycles\cite{Escolar2016} and
volume optimal cycles\cite{voc}, have been successfully
applied to persistent homology.
The optimal cycle minimizes the size of the cycle, while the volume optimal
cycle minimizes the internal volume of the cycle. Both methods
give a tightest cycle, in different senses.
Volume optimal cycles for persistent homology were proposed in
\cite{voc} under a restriction on the dimension: they can be used only for the
$(n-1)$-th persistent homology of a filtration embedded in $\R^n$.
Under this restriction, however,
there is an efficient computation algorithm using Alexander duality.
In this paper, we generalize the concept of volume optimal cycles on
any persistent homology and show the computation algorithm.
The idea in \cite{voc} cannot be applied to find a volume optimal ring
(a volume optimal cycle for $q=1$)
in a point cloud in $\R^3$, but our method is applicable to such a case.
In that case, optimal cycles are also applicable, but our new algorithm is
simpler, faster for large data, and gives us better information.
The contributions of this paper are as follows:
\begin{itemize}
\item The concept of volume optimal cycles is proposed to identify
good representatives of generators in persistent homology.
This is useful to understand a persistence diagram.
\begin{itemize}
\item The concept was already proposed in \cite{voc} in a strongly limited sense
regarding dimension, and this paper
generalizes it.
\item Optimal cycles are also usable for the same purpose, but the algorithm
in this paper is easier to implement, faster, and gives
better information.
\begin{itemize}
\item Especially, children birth-death pairs shown in Section \ref{sec:compare}
are available only with volume optimal cycles.
\end{itemize}
\end{itemize}
\item Mathematical properties of volume optimal cycles are clarified.
\item Effective computation algorithms for volume optimal cycles are proposed.
\item The algorithm is implemented and some examples are computed by
the program to show the usefulness of volume optimal cycles.
\end{itemize}
The rest of this paper is organized as follows. The fundamental ideas such as
persistent homology and simplicial complexes are introduced in Section~\ref{sec:ph}.
In Section~\ref{sec:oc} the idea of optimal cycles is reviewed.
Section~\ref{sec:voc} is the main part
of the paper. The idea of volume optimal cycles and the computation algorithm
in a general setting are presented in Section~\ref{sec:voc}.
Some mathematical properties of volume optimal cycles are also shown in this section.
In Section~\ref{sec:vochd} we show some special properties of
$(n-1)$-th persistent homology in $\R^n$ and the faster algorithm.
We also explain tree structures in $(n-1)$-th persistent homology.
In Section~\ref{sec:compare}, we compare volume optimal cycles and optimal cycles.
In Section~\ref{sec:example} we show some computational examples by the proposed
algorithms. In Section~\ref{sec:conclusion}, we conclude the paper.
\section{Persistent homology}\label{sec:ph}
In this section, we explain some preliminaries about persistent homology
and geometric models. Persistent homology is available on
various general settings, but we mainly focus on the persistent homology
on a filtration of simplicial complexes, especially an alpha filtration
given by a point cloud.
\subsection{Persistent homology}
Let $\X = \{X_t \mid t \in T\}$ be a \textit{filtration} of topological spaces
where $T$ is a subset of $\Zint$ or $\R$.
That is, $X_t \subset X_{t'}$ holds for every $t \leq t'$.
Then we define $q$th homology
vector spaces $\{H_q(X_t)\}_{t \in T}$ whose coefficient is a field $\Bbbk$ and
homology maps $\varphi_s^t : H_q(X_s) \to H_q(X_t)$ for all $s \leq t$ induced by
inclusion maps $X_s \xhookrightarrow{} X_t$.
The family $H_q(\X) = (\{H_q(X_t)\}_t, \{ \varphi_s^t\}_{s \leq t})$
is called the $q$th \textit{persistent homology}.
The theory of persistent homology enables us to analyze the
structure of this family.
Under some assumptions,
$H_q(\X)$ is uniquely decomposed as follows~\cite{elz,zc}:
\begin{align*}
H_q(\X) = \bigoplus_{i=1}^p I(b_i, d_i),
\end{align*}
where $b_i \in T, d_i \in T \cup \{\infty\}$ with $b_i < d_i$.
Here, $I(b, d) = (U_t, f_s^t)$
consists of a family of vector spaces and linear maps:
\begin{align*}
U_t&=\left\{
\begin{array}{ll}
\Bbbk, &\mbox{if } b \leq t < d, \\
0, & \mbox{otherwise},
\end{array}
\right. \\
f_s^t&:U_s \to U_t \\
f_s^t&=\left\{
\begin{array}{ll}
\textrm{id}_\Bbbk, &\mbox{if } b \leq s \leq t < d, \\
0, & \mbox{otherwise}.
\end{array}
\right.
\end{align*}
This means that for each $I(b_i, d_i)$ there is
a $q$ dimensional hole in $\X$ and it appears at $t = b_i$, persists up to $t < d_i$ and
disappears at $t = d_i$. In the case of $d_i = \infty$,
the $q$ dimensional hole never disappears on $\X$.
$b_i$ is called a \textit{birth time}, $d_i$ is called a
\textit{death time}, and the pair $(b_i, d_i)$ is called a \textit{birth-death pair}.
When $\X$ is a filtration of finite simplicial/cell/cubical complexes on $T$
with $\#T < \infty$ (we call $\X$ a \textit{finite filtration} under the condition),
such a unique decomposition exists.
When we have the unique decomposition,
the $q$th \textit{persistence diagram} of $\X$, $D_q(\X)$, is defined by a multiset
\begin{align*}
D_q(\X) = \{(b_i, d_i) \mid i=1,\ldots, p\},
\end{align*}
and the 2D scatter plot or the 2D histogram of $D_q(\X)$ is often used to visualize
the diagram.
We now investigate the detailed
algebraic structure of persistent homology as preparation.
For simplicity, we assume the following condition on $\X$.
\begin{cond}\label{cond:ph}
Let $X = \{\sigma_1, \ldots, \sigma_K\}$ be a finite simplicial complex.
For any $1 \leq k \leq K$, $X_k = \{\sigma_1, \ldots, \sigma_k\}$ is
a subcomplex of $X$.
\end{cond}
Under the condition,
\begin{align}
\X: \emptyset = X_0 \subset X_1 \subset \cdots \subset X_K = X,\label{eq:ph}
\end{align}
is a filtration of complexes. For a general finite filtration, we can construct
a filtration satisfying Condition~\ref{cond:ph} by ordering all simplices properly.
Let $\partial_q: C_q(X) \to C_{q-1}(X)$ be the
boundary operator on $C_q(X)$ and
$\partial_q^{(k)}: C_q(X_k) \to C_{q-1}(X_k)$ be a boundary operator of $C_q(X_k)$.
Cycles $Z_q(X_k)$ and boundaries $B_q(X_k)$ are defined by the
kernel of $\partial_q^{(k)}$ and the image of $\partial_{q+1}^{(k)}$, and $q$th homology
vector spaces are defined by $H_q(X_k) = Z_q(X_k)/B_q(X_k)$. Condition~\ref{cond:ph}
says that if $\sigma_k $ is a $q$-simplex,
\begin{equation}
\label{eq:chain_plus1}
\begin{aligned}
C_q(X_{k}) & = C_q(X_{k-1})\oplus\left<\sigma_k\right>, \\
C_{q'}(X_{k}) & = C_{q'}(X_{k-1}), \mbox{ for $q' \not = q$},
\end{aligned}
\end{equation}
holds.
From the decomposition theorem and \eqref{eq:chain_plus1},
for each birth-death pair $(b_i, d_i)$,
we can find $z_i \in C_q(X)$ such that
\begin{align}
&z_i \not \in Z_q(X_{b_i-1}), \label{eq:birth_pre}\\
&z_i \in Z_q(X_{b_i}) = Z_q(X_{b_i-1}) \oplus \left<\sigma_{b_i}\right>, \label{eq:birth_post}\\
&z_i \not \in B_q(X_{k}) \mbox{ for $k < d_i$}, \label{eq:death_pre} \\
&z_i \in B_q(X_{d_i}) = B_q(X_{d_i-1}) \oplus \left<\partial \sigma_{d_i}\right>, \label{eq:death_post} \\
&\{[z_i]_k \mid b_i \leq k < d_i\} \text{ is a basis of } H_q(X_k), \label{eq:ph-basis}
\end{align}
where $[z]_k = [z]_{B_q(X_k)} \in H_q(X_k)$.
\eqref{eq:death_post} holds only if $d_i \not = \infty$. This $[z_i]_k$ is a
homology generator that persists from $k={b_i}$ to $k = {d_i-1}$.
$\{z_i\}_{i=1}^p$ is called the \textit{persistence cycles} for
$D_p(\X) = \{(b_i, d_i)\}_{i=1}^p$.
An algorithm for computing a PD actually finds persistence cycles from a given
filtration.
The persistence cycle of $(b_i, d_i)$ is not unique; therefore,
we want to find a ``good'' persistence cycle
that reveals the geometric structure corresponding to each birth-death pair.
That is the purpose of
the volume optimal cycle, which is the main topic of this paper.
We remark that the condition \eqref{eq:ph-basis} can be easily proved from
(\ref{eq:birth_pre}-\ref{eq:death_post}) and the decomposition theorem,
and
hence we only need to show (\ref{eq:birth_pre}-\ref{eq:death_post}) to prove
that given $\{z_i\}_{i=1}^p$ are persistence cycles.
\subsection{Alpha filtration}
One of the most widely used filtrations for data analysis using persistent homology
is an alpha filtration~\cite{eh, em}. An alpha filtration is defined from a point cloud,
a finite set of points $P = \{x_i \in \R^n\}$.
The alpha filtration is a filtration of alpha complexes, which are
defined via a Voronoi diagram and a Delaunay triangulation.
The \textit{Voronoi diagram} for a point cloud $P$, which is a decomposition of $\R^n$ into
\textit{Voronoi cells} $\{V(x_i) \mid x_i \in P\}$, is defined by
\begin{align*}
V(x_i) = \{x \in \R^n \mid \|x - x_i\|_2 \leq \|x - x_j\|_2 \text{ for any } j\not = i\}.
\end{align*}
The \textit{Delaunay triangulation} of $P$, $\del(P)$, which is a simplicial complex
whose vertices are points in $P$, is defined by
\begin{align*}
\del(P) = \{[x_{i_0} \cdots x_{i_q}] \mid
V(x_{i_0}) \cap \cdots \cap V(x_{i_q}) \not = \emptyset\},
\end{align*}
where $[x_{i_0} \cdots x_{i_q}]$ is the $q$-simplex whose vertices are
$x_{i_0}, \ldots, x_{i_q} \in P$.
Under the assumption of general position in the sense of \cite{em},
the Delaunay triangulation is a simplicial decomposition of
the convex hull of $P$ and it has good geometric properties.
The \textit{alpha complex} $\alp(P, r)$ with radius parameter $r \geq 0$,
which is a subcomplex of $\del(P)$, is defined as follows:
\begin{align*}
\alp(P, r) = \{[x_{i_0} \cdots x_{i_q}] \in \del(P) \mid
B_r(x_{i_0}) \cap \cdots \cap B_r(x_{i_q}) \not = \emptyset \},
\end{align*}
where $B_r(x)$ is the closed ball whose center is $x$ and whose radius is $r$.
A significant property of the alpha complex is the following homotopy equivalence
to the $r$-ball model.
\begin{align*}
\bigcup_{x_i \in P} B_r(x_i) \simeq |\alp(P, r)|,
\end{align*}
where $|\alp(P,r)|$ is the geometric realization of $\alp(P, r)$.
The \emph{alpha filtration} for $P$ is defined by $\{\alp(P,r)\}_{r\geq 0}$.
Figure~\ref{fig:alpha} illustrates an example of a filtration by $r$-ball model
and the corresponding alpha filtration. The 1st PD of this filtration is
$\{(r_2, r_5), (r_3, r_4)\}$.
Since there are $r_1 < \cdots < r_K$ such
that $\alp(P, s) = \alp(P, t)$ for any $r_i \leq s < t < r_{i+1}$, we can
treat the alpha filtration as a finite filtration
$\alp(P, r_1) \subset \cdots \subset \alp(P, r_K)$.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\hsize]{pc-alpha.pdf}
\caption{An $r$-ball model and the corresponding alpha filtration. Each red simplex
in this figure appears at the radius parameter $r_i$.
}
\label{fig:alpha}
\end{figure}
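As an aside, an alpha filtration and its PDs can also be computed with publicly
available libraries. The following Python sketch uses the GUDHI library (which is
not the toolchain used in this paper); the random point cloud is only for
illustration.
\begin{verbatim}
import numpy as np
import gudhi

points = np.random.rand(100, 3)            # a point cloud in R^3
alpha = gudhi.AlphaComplex(points=points)  # Delaunay-based alpha complex
st = alpha.create_simplex_tree()           # filtration values are r^2
diag = st.persistence()                    # list of (dim, (birth, death))
pd1 = [bd for (q, bd) in diag if q == 1]   # the 1st persistence diagram
\end{verbatim}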
We also mention the weighted alpha complex and its filtration~\cite{weightedalpha}.
An alpha complex is topologically equivalent to the union of $r$-balls,
while a weighted alpha complex is topologically equivalent to the union
of $\sqrt{r^2+\alpha_i}$-balls, where $\alpha_i$ depends on each point.
The weighted alpha filtration is useful to study the geometric structure
of a point cloud whose points have their own radii. For example,
for the analysis of atomic configurations,
the squares of ionic radii or van der Waals radii are used as $\alpha_i$.
\section{Optimal cycle}\label{sec:oc}
First, we discuss an optimal cycle on normal homology whose
coefficient is $\Bbbk = \Zint_2$.
Figure~\ref{fig:optcyc_one_hole}(a) shows a simplicial complex whose
1st homology vector space $H_1$ is isomorphic to $\Zint_2$.
In Fig.~\ref{fig:optcyc_one_hole}(b), (c), and (d),
$z_1$, $z_2$, and $z_3$ carry the same information about $H_1$. That is,
$H_1 = \left<[z_1]\right> = \left<[z_2]\right> = \left<[z_3]\right>$. However,
we intuitively consider that $z_3$ is the best to represent the hole in
Fig.~\ref{fig:optcyc_one_hole} since $z_3$ is the shortest of these
loops.
Since the size of a loop $z = \sum_{\sigma:1-\text{simplex}} \alpha_\sigma\sigma \in Z_1(X)$
is equal to
\begin{align*}
\#\{\sigma : 1\text{-simplex} \mid \alpha_\sigma \not = 0 \},
\end{align*}
and this is $\ell^0$ ``norm''\footnote{
For a finite dimensional $\R$- or $\mathbb{C}$- vector space
whose basis is $\{g_i\}_i$,
the $\ell^0$ norm $\|\cdot\|_0$ is defined by
$\|\sum_i \alpha_i g_i \|_0 = \# \{i \mid \alpha_i \not = 0 \}$.
Mathematically this is not a norm since it is not homogeneous, but
in information science and statistics, it is called $\ell^0$ norm.
}\footnote{
On a $\Zint_2$-vector space, any norm is not defined mathematically, but
it is natural that we call this $\ell^0$ norm.
},
we write it $\|z\|_0$.
Here, $z_3$ is the solution of the following problem:
\begin{align*}
\mbox{minimize } \|z\|_0 ,\mbox{ subject to } z\sim z_1.
\end{align*}
The minimizing $z$ is called the \textit{optimal cycle} for $z_1$.
From the definition of homology, we can rewrite the problem as follows:
\begin{equation}
\label{eq:optcyc_one_hole}
\begin{aligned}
\mbox{minimize } &\|z\|_0, \mbox{ subject to:} \\
z &= z_1 + \partial w, \\
w &\in C_2(X).
\end{aligned}
\end{equation}
Now we complete the formalization of the optimal cycle
on a simplicial complex with one hole.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.4\hsize]{optcyc_two_holes.pdf}
\caption{A simplicial complex with two holes.}
\label{fig:optcyc_two_hole}
\end{figure}
What about the case in which a complex has two or more holes?
We consider the example in Fig.~\ref{fig:optcyc_two_hole}.
From $z_1$ and $z_2$, we try to find $z_1'$ and $z_2'$ using a similar
formalization. If we apply the optimization \eqref{eq:optcyc_one_hole}
to each $z_1$ and $z_2$, $z_1''$ and $z_2'$ are found. How can we
find $z_1'$ from $z_1$ and $z_2$?
The problem is caused by the hole represented by $z_2'$; therefore, we ``fill''
that hole and solve the minimization problem. Mathematically,
filling a hole corresponds to considering $Z_1(X)/(B_1(X) \oplus \left<z_2'\right>)$ instead
of $Z_1(X)/B_1(X)$ and
the following optimization problem gives us the required loop $z_1'$.
\begin{align*}
\mbox{minimize } &\|z\|_0, \mbox{ subject to:} \\
z & = z_1 + \partial w + k z_2, \\
w &\in C_2(X), \\
k & \in \Zint_2.
\end{align*}
When a complex has many holes, we can apply the idea
repeatedly to find all optimal cycles. The idea of optimal cycles
obviously applies to $q$th homology for any $q$.
\subsection{How to compute an optimal cycle}\label{subsec:fast-computation}
Finding a basis of a homology vector space is not a difficult problem for a computer:
we prepare a matrix representation of the boundary operator and
apply a matrix reduction algorithm. Please see \cite{comphom} for the detailed algorithm.
Therefore the problem is how to solve the above minimization problem.
In general, solving an optimization problem over a $\Zint_2$ linear space is
difficult. The problem is a kind of
combinatorial optimization problem. Such problems are well studied, but it is
well known that they are sometimes hard to solve on a computer.
One approach uses linear programming, as in \cite{sensor-l0-l1}.
Since optimization over $\Zint_2$ is
hard, we use $\R$ as the coefficient field. With $\R$ coefficients, the $\ell^0$ norm still
measures the size of a loop, so $\ell^0$ optimization is natural for our purpose.
However, $\ell^0$ optimization is also a difficult problem. Therefore we replace
the $\ell^0$ norm with the $\ell^1$ norm. It is well known in the fields of compressed sensing and
machine learning that
$\ell^1$ optimization gives a good approximation of $\ell^0$
optimization.
That is, we solve the following optimization problem
instead of \eqref{eq:optcyc_one_hole}.
\begin{equation}
\label{eq:optcyc_one_hole-l1}
\begin{aligned}
\mbox{minimize } &\|z\|_1, \mbox{ subject to:} \\
z &= z_1 + \partial w, \\
w &\in C_2(X; \R).
\end{aligned}
\end{equation}
This is a linear programming problem, and we can solve it very efficiently
with good solvers such as cplex\footnote{\url{https://www-01.ibm.com/software/commerce/optimization/cplex-optimizer/}} and Clp\footnote{\url{https://projects.coin-or.org/Clp}}.
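As an illustration (this is our own sketch, not part of the original formulation),
the problem \eqref{eq:optcyc_one_hole-l1} can be cast as a standard-form LP by
splitting $z$ into its positive and negative parts and solved, e.g., with SciPy;
the function name and the input encoding are hypothetical.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def l1_optimal_cycle(z1, D):
    # Solve: minimize ||z||_1 subject to z = z1 + D @ w, where z1 is an
    # initial representative cycle (length m) and D is the (m x n)
    # boundary matrix from (q+1)-chains to q-chains.
    m, n = D.shape
    # Split z = zp - zm with zp, zm >= 0, so ||z||_1 = sum(zp) + sum(zm).
    c = np.concatenate([np.ones(2 * m), np.zeros(n)])
    # Equality constraint: (zp - zm) - D @ w = z1.
    A_eq = np.hstack([np.eye(m), -np.eye(m), -D])
    bounds = [(0, None)] * (2 * m) + [(None, None)] * n
    res = linprog(c, A_eq=A_eq, b_eq=z1, bounds=bounds, method="highs")
    zp, zm = res.x[:m], res.x[m:2 * m]
    return zp - zm
\end{verbatim}
The extra generator term $k z_2$ from the two-hole example above can be handled
by appending $z_2$ as an additional column of $D$ with a free coefficient.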
Another approach uses integer programming, as in \cite{optimal-Day,Escolar2016}.
$\ell^1$ norm optimization gives a good approximation, but the solution may not be
exact. However, if all coefficients are restricted to $0$ or $\pm 1$
in the optimization problem \eqref{eq:optcyc_one_hole-l1},
the $\ell^0$ norm and the $\ell^1$ norm are identical, and this gives a better solution.
This restriction on the coefficients has another advantage:
we can understand the optimal solution in a more intuitive way.
Such an optimization problem is
called integer programming. Integer programming is much slower than linear programming,
but
good integer programming solvers, such as cplex, are also available.
\subsection{Optimal cycle for a filtration}
Now, we explain optimal cycles on a filtration, introduced in \cite{Escolar2016}
to analyze persistent homology.
We start from the example in Fig.~\ref{fig:optcyc_filtration}.
\begin{figure}[htbp]
\centering
\includegraphics[width=\hsize]{optcyc_filtration.pdf}
\caption{A filtration example for optimal cycles.}
\label{fig:optcyc_filtration}
\end{figure}
In the filtration, a hole $[z_1]$ appears at $X_2$ and disappears at $X_3$,
another hole $[z_2]$ appears at $X_4$, and $[z_3]$ appears at $X_5$.
The 1st PD of the filtration is $\{(2,3), (4,\infty), (5, \infty)\}$.
The persistence cycles
$z_1, z_2, z_3$ are computable by the persistent homology algorithm, and
we want to find $z_3'$ or $z_3''$ to analyze the hole corresponding to
the birth-death pair $(5, \infty)$.
The hole $[z_1]$ is already dead at $X_5$ and $[z_2]$ remains alive at $X_5$,
so we can find $z_3'$ or $z_3''$ by solving the following optimization problem:
\begin{align*}
\mbox{minimize } & \|z\|_0 \mbox{ subject to: } \\
z &= z_3 + \partial w + k z_2, \\
w & \in C_2(X_5), \\
k & \in \Bbbk.
\end{align*}
In this case, $z_3''$ is chosen because $\|z_3'\|_0 > \|z_3''\|_0$.
By generalizing the idea, we show
Algorithm~\ref{alg:optcyc} to find optimal cycles for a filtration $\X$\footnote{
In fact, in \cite{Escolar2016}, two slightly different algorithms are shown,
and this algorithm is one of them.
}. Of course, to solve the optimization problem in Algorithm~\ref{alg:optcyc},
we can use the computation techniques shown in Section~\ref{subsec:fast-computation}.
\begin{algorithm}[ht]
\caption{Computation of optimal cycles on a filtration}\label{alg:optcyc}
\begin{algorithmic}
\State Compute $D_q(\X)$ and
persistence cycles $z_1, \ldots, z_n$
\State Choose $(b_i, d_i) \in D_q(\X)$ by a user
\State Solve the following optimization problem
\begin{align*}
\mbox{minimize } &\|z\|_1, \mbox{ subject to:} \\
z &= z_i + \partial w + \sum_{j \in T_i} \alpha_j z_j, \\
w & \in C_{q+1}(X_{b_i}), \\
\alpha_j & \in \Bbbk, \\
\text{where } T_i& = \{j \mid b_j < b_i < d_j\}.
\end{align*}
\end{algorithmic}
\end{algorithm}
\section{Volume optimal cycle}\label{sec:voc}
In this section, we propose volume optimal cycles, a new tool to
characterize generators appearing in persistent homology.
We show the generalized version of volume optimal cycles and
the computation algorithm.
The limited version of volume optimal cycles shown in \cite{voc} will be explained
in the next section.
We assume Condition~\ref{cond:ph} and consider the filtration
$\X: \emptyset = X_0 \subset \cdots \subset X_K = X$.
A \textit{persistent volume} for $(b_i, d_i) \in D_q(\X)$
is defined as follows.
\begin{definition}
$z \in C_{q+1}(X)$ is a persistent volume for $(b_i, d_i) \in D_q(\X)$
if $z$ satisfies the following conditions:
\begin{align}
z &= \sigma_{d_i} + \sum_{\sigma_k \in \mathcal{F}_{q+1}} \alpha_k \sigma_k,
\label{eq:vc-1}\\
\tau^*(\partial z) &= 0 \mbox{ for all } \tau \in \mathcal{F}_{q}, \label{eq:vc-2}\\
\sigma_{b_i}^*(\partial z) &\not = 0, \label{eq:vc-3}
\end{align}
where $\mathcal{F}_{q} = \{ \sigma_k : q\textup{-simplex} \mid b_i < k < d_i \}$,
$\{ \alpha_k \in \Bbbk \}_{\sigma_k \in \mathcal{F}_{q+1}}$, and $\sigma_k^*$ is the
dual basis of cochain $C^q(X)$, i.e. $\sigma_k^*$ is the linear map on $C_q(X)$
satisfying $\sigma_k^*(\sigma_j) = \delta_{kj}$ for any $\sigma_k, \sigma_j$: $q$-simplex.
\end{definition}
Note that the persistent volume is defined only if the death time is finite.
The \textit{volume optimal cycle} for $(b_i, d_i)$
and the \textit{optimal volume} for the pair are defined as follows.
\begin{definition}\label{defn:voc}
$\partial \hat{z}$ is the volume optimal cycle and
$\hat{z}$ is the optimal volume for $(b_i, d_i) \in D_q(\X)$
if $\hat{z}$ is the solution
of the following optimization problem.
\begin{center}
minimize $\|z\|_0$, subject to \eqref{eq:vc-1}, \eqref{eq:vc-2}, and \eqref{eq:vc-3}.
\end{center}
\end{definition}
The following theorem ensures that the optimization problem
of the volume optimal cycle always has a solution.
\begin{theorem}\label{thm:existence_voc}
There is always a persistent volume for any $(b_i, d_i) \in D_q(\X)$ with $d_i \not = \infty$.
\end{theorem}
The following theorem ensures that the volume optimal cycle
is good to represent the homology generator corresponding to $(b_i, d_i)$.
\begin{theorem}\label{thm:good_voc}
Let $\{x_j \mid j=1, \ldots, p\}$ be all persistence cycles for $D_q(\X)$.
If $z_i$ is a persistent volume of $(b_i, d_i) \in D_q(\X)$,
$\{x_j \mid j\not = i\} \cup \{\partial z_i\}$ are also
persistence cycles for $D_q(\X)$.
\end{theorem}
Intuitively speaking, a homology generator dies when the internal volume of
a ring, a cavity, etc., is filled, and a persistent volume is such an internal volume.
The volume optimal cycle
minimizes the internal volume instead of the size of the cycle.
\begin{proof}[Proof of Theorem \ref{thm:existence_voc}]
Let $z_i$ be a persistence cycle satisfying (\ref{eq:birth_pre}-\ref{eq:death_post}).
Since
\begin{align*}
z_i \in B_q(X_{d_i}) \backslash B_q(X_{d_i-1}),
\end{align*}
we can write $z_i$ as follows.
\begin{equation}
\begin{aligned}
z_i &= \partial (w_0 + w_1), \\
w_0 &= \sigma_{d_i} + \sum_{\sigma_k \in \mathcal{F}_{q+1}} \alpha_k \sigma_k,\\
w_1 &= \sum_{\sigma_k \in \mathcal{G}_{q+1}} \alpha_k \sigma_k,
\end{aligned}\label{eq:phbase_decomp}
\end{equation}
where $\mathcal{G}_{q+1} = \{\sigma_k: (q+1)\textrm{-simplex} \mid k < b_i\}$.
Note that the coefficient of $\sigma_{d_i}$ in $w_0$ can be normalized as in
\eqref{eq:phbase_decomp}.
Now we prove that $w_0$ is a persistent volume.
From $z_i \in Z_q(X_{b_i})$ and $\partial w_1 \in C_q(X_{b_i-1})$, we have
$\partial w_0 = z_i - \partial w_1 \in C_q(X_{b_i})$ and this means that
$\tau^*(\partial w_0) = 0$ for all $\tau \in \mathcal{F}_q$.
From $\partial w_1 \in C_q(X_{b_i-1})$, we have $\sigma_{b_i}^*(\partial w_1) = 0$
and therefore $\sigma_{b_i}^*(\partial w_0) = \sigma_{b_i}^*(z_i)$, and the right hand side
is not zero since
$z_i \in Z_q(X_{b_i}) \backslash Z_q(X_{b_i-1}) \subset C_q(X_{b_i})\backslash C_q(X_{b_i-1})$.
Therefore
$w_0$ satisfies all conditions (\ref{eq:vc-1}-\ref{eq:vc-3}).
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:good_voc}]
We prove the following claims; the theorem follows from them.
\begin{align*}
\partial z_i &\in Z_q(X_{b_i}) \backslash Z_q(X_{b_i-1}), \\
\partial z_i &\in B_q(X_{d_i}) \backslash B_q(X_{d_i-1}).
\end{align*}
The condition \eqref{eq:vc-2},
$\tau^*(\partial z_i) = 0 \mbox{ for all } \tau \in \mathcal{F}_{q} $,
means $\partial z_i \in Z_q(X_{b_i})$.
The condition \eqref{eq:vc-3},
$\sigma_{b_i}^*(\partial z_i) \not = 0$, means
$\partial z_i \not \in Z_q(X_{b_i-1})$.
Since $\partial z_i = \partial \sigma_{d_i} + \sum_{\sigma_k \in \mathcal{F}_{q+1}} \alpha_k \partial \sigma_k,$
and $B_q(X_{d_i}) = B_q(X_{d_i - 1}) \oplus \left< \partial \sigma_{d_i} \right>$,
we have $\partial z_i \in B_q(X_{d_i}) \backslash B_q(X_{d_i-1})$ and this finishes the
proof.
\end{proof}
\subsection{Algorithm for volume optimal cycles}
To compute volume optimal cycles, we can apply the same strategies as for
optimal cycles. Using linear programming with $\R$ coefficients and the
$\ell^1$ norm is efficient and
gives sufficiently good results. Using integer programming is slower, but it gives
better results.
Now we remark on the condition \eqref{eq:vc-3}. In fact, it is impossible
to handle this condition by linear/integer programming directly.
We need to replace it
with $|\sigma_{b_i}^*(\partial z)| \geq \epsilon$ for a sufficiently small $\epsilon > 0$,
and we need to solve the optimization problem twice,
once for $\sigma_{b_i}^*(\partial z) \geq \epsilon$ and once for
$\sigma_{b_i}^*(\partial z) \leq -\epsilon$. However, as mentioned later,
we can often drop the constraint \eqref{eq:vc-3} when solving the problem, and
this fact is useful for faster computation.
We can also apply the following heuristic performance improvement technique
to the algorithm for an alpha filtration by using the locality of
an optimal volume.
The simplices contained in the optimal volume for $(b_i, d_i)$
are typically contained
in a neighborhood of $\sigma_{d_i}$. Therefore we take a parameter $r > 0$ and
use
$\mathcal{F}_q^{(r)} = \{\sigma \in \mathcal{F}_q \mid \sigma \subset B_r(\sigma_{d_i}) \}$
instead of $\mathcal{F}_q$ to reduce the size of
the optimization problem,
where $B_r(\sigma_{d_i})$ is the ball of radius $r$ whose center is the
centroid of $\sigma_{d_i}$.
Obviously, we cannot find a solution if $r$ is too small.
In Algorithm~\ref{alg:volopt}, $r$ is chosen by the user, but
the computation software
can automatically increase $r$ when the optimization problem fails to find
a solution. A minimal sketch of this locality filter is shown below.
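The following Python sketch assumes vertex coordinates are available; the names
are our own.
\begin{verbatim}
import numpy as np

def local_simplices(simplices, coords, center, r):
    # Keep only the simplices all of whose vertices lie in the ball
    # B_r(center), where center is the centroid of the death simplex.
    # simplices : list of simplices, each a tuple of vertex indices
    # coords    : (num_points, n) array of vertex coordinates
    return [s for s in simplices
            if all(np.linalg.norm(coords[v] - center) <= r for v in s)]
\end{verbatim}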
We also use another heuristic for faster computation.
To treat the constraint \eqref{eq:vc-3}, we need to apply linear programming twice,
once for the positive case and once for the negative case.
In many examples, the optimized solution automatically satisfies \eqref{eq:vc-3}
even if we remove the constraint.
There is an example in which this shortcut does not work (shown in
Section~\ref{subsec:properties-voc}), but it works well in many cases.
Hence we first try to solve the linear programming problem without \eqref{eq:vc-3} and
check \eqref{eq:vc-3}; if \eqref{eq:vc-3} is satisfied, we output the solution.
Otherwise, we solve the linear programming problem twice with \eqref{eq:vc-3}.
The algorithm to compute a volume optimal cycle for an alpha filtration is
Algorithm~\ref{alg:volopt}.
\begin{algorithm}[h!]
\caption{Algorithm for a volume optimal cycle}\label{alg:volopt}
\begin{algorithmic}
\Procedure{Volume-Optimal-Cycle}{$\X, r$}
\State Compute the persistence diagram $D_q(\X)$
\State Choose a birth-death pair $(b_i, d_i) \in D_q(\X)$ by a user
\State Solve the following optimization problem:
\begin{align*}
\mbox{minimize } &\|z\|_1, \mbox{ subject to:}\\
z &= \sigma_{d_i} + \sum_{\sigma_k \in \mathcal{F}_{q+1}^{(r)}} \alpha_k \sigma_k, \\
\tau^*(\partial z) &= 0 \mbox{ for all } \tau \in \mathcal{F}_{q}^{(r)}. \\
\end{align*}
\If{we find the optimal solution $\hat{z}$}
\If{$\sigma_{b_i}^*(\partial \hat{z}) \not = 0$}
\State \Return $\hat{z}$ and $\partial \hat{z}$
\Else
\State Retry the optimization twice with the additional constraint:
\begin{align*}
\sigma_{b_i}^*(\partial z) \geq \epsilon \text{ or }
\sigma_{b_i}^*(\partial z) \leq -\epsilon
\end{align*}
\EndIf
\Else
\State \Return an error message asking the user to choose a larger $r$.
\EndIf
\EndProcedure
\end{algorithmic}
\end{algorithm}
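For concreteness, the core linear programming step of Algorithm~\ref{alg:volopt}
(corresponding to the first pass, without the constraint \eqref{eq:vc-3} and
without the locality restriction) can be sketched in Python as follows; the
function name and the index encoding are our own assumptions.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def optimal_volume(D, d_col, F_cols, F_rows):
    # D      : boundary matrix from (q+1)-chains to q-chains of X
    # d_col  : column index of the death simplex sigma_{d_i}
    # F_cols : column indices of the simplices in F_{q+1}
    # F_rows : row indices of the simplices in F_q
    # Minimize ||alpha||_1 subject to tau*(boundary of z) = 0 for all
    # tau in F_q, where z = sigma_{d_i} + sum_k alpha_k sigma_k.
    A = D[np.ix_(F_rows, F_cols)]  # boundary restricted to F_q x F_{q+1}
    b = -D[F_rows, d_col]          # move the death simplex column to RHS
    k = len(F_cols)
    # alpha = ap - am with ap, am >= 0, so ||alpha||_1 = sum(ap + am).
    res = linprog(np.ones(2 * k), A_eq=np.hstack([A, -A]), b_eq=b,
                  bounds=[(0, None)] * (2 * k), method="highs")
    ap, am = res.x[:k], res.x[k:]
    return ap - am                 # coefficients of the optimal volume
\end{verbatim}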
If the filtration is not an alpha filtration, the locality technique may not be
applicable. Even in that case, however,
the core part of the algorithm works fine and can be used.
\subsection{Some properties about volume optimal cycles}
\label{subsec:properties-voc}
In this subsection, we remark on some properties of volume optimal cycles.
First, the volume optimal cycle for a birth-death pair
is not unique. Figure~\ref{fig:multiple-voc} shows such an example.
In this example, $D_1 = \{(1, 5), (3, 4), (2, 6)\}$ and
both (b) and (c) are optimal volumes of the birth-death pair $(2, 6)$.
In this filtration, any weighted sum of (b) and (c) with weights $\lambda$
and $1-\lambda$ ($0 \leq \lambda \leq 1$)
in the sense of chain complexes is a volume optimal cycle of $(2, 6)$
if we use $\R$ coefficients and the $\ell^1$ norm.
However, standard linear programming algorithms
choose an extreme point solution; hence they choose either $\lambda=0$ or
$\lambda=1$, and our algorithm outputs either (b) or (c).
\begin{figure}[htbp]
\centering
\includegraphics[width=\hsize]{voc-not-unique.pdf}
\caption{An example of non-unique volume optimal cycles.}
\label{fig:multiple-voc}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.9\hsize]{voc-wrong.pdf}
\caption{An example of the failure of the computation of the volume optimal cycle
when the constraint \eqref{eq:vc-3} is removed.}
\label{fig:voc-failure}
\end{figure}
Second, by the example in Fig.~\ref{fig:voc-failure}, we show
that the optimization problem for the volume optimal cycle may
give a wrong solution if the constraint \eqref{eq:vc-3} is removed.
In this example, $(b_1, d_1), (b_2, d_2), (b_3, d_3)$ are birth-death pairs
in the 1st PD, and the volume optimal cycle for $(b_1, d_1)$ is ($\alpha$) in
Fig.~\ref{fig:voc-failure}, but the algorithm gives ($\beta$) if
the constraint \eqref{eq:vc-3} is removed.
\section{Volume optimal cycle on $(n-1)$-th persistent homology}\label{sec:vochd}
In this section, we consider a triangulation of a convex set in $\R^n$ and
its $(n-1)$-th persistent homology. More precisely, we assume the following
conditions.
\begin{cond}\label{cond:rn}
A simplicial complex $X$ in $\R^n$
satisfies the following conditions.
\begin{itemize}
\item Any $k$-simplex $(k<n)$ in $X$ is a face of an $n$-simplex
\item $|X|$ is convex
\end{itemize}
\end{cond}
For example, an alpha filtration satisfies the above conditions
if the point cloud has more than $n$ points and
satisfies the general position condition. In addition, we assume
Condition~\ref{cond:ph} to simplify the statements of results and algorithms.
The thesis \cite{voc} pointed out that, under this assumption,
$(n-1)$-th persistent homology is
isomorphic to the 0th persistent cohomology of the dual filtration by Alexander duality.
Using this fact, the thesis defined volume optimal cycles under a formalization
different from ours: it defined a volume optimal cycle as an output of
Algorithm~\ref{alg:volopt-hd-compute}.
In fact, the two definitions of volume optimal cycles are equivalent
on $(n-1)$-th persistent homology.
The 0th persistent cohomology is deeply related to connected components,
and we can compute the volume optimal cycles at linear computation cost.
The thesis also pointed out that $(n-1)$-th persistent homology has a tree structure called
persistence trees (or PH trees).
In this section, we always use $\Zint_2$ as a coefficient of homology
since using $\Zint_2$ makes the problem easier.
The following theorems hold.
\begin{theorem}\label{thm:vochd-unique}
The optimal volume for $(b_i, d_i) \in D_{n-1}(\X)$ is uniquely determined.
\end{theorem}
\begin{theorem}\label{thm:vochd-tree}
If $z_i$ and $z_j$ are the optimal volumes for two different birth-death
pairs
$(b_i, d_i)$ and
$(b_j, d_j)$ in $D_{n-1}(\X)$, one of the followings holds:
\begin{itemize}
\item $z_i \cap z_j = \emptyset$,
\item $z_i \subset z_j$,
\item $z_i \supset z_j$.
\end{itemize}
Note that we can naturally regard any
$z = \sum_{\sigma: n\text{-simplex}} k_{\sigma} \sigma \in C_{n}(X)$ as a subset of $n$-simplices of $X$,
$\{\sigma : n\text{-simplex} \mid k_{\sigma} \not = 0\}$,
since we use $\Zint_2$ as a homology coefficient.
\end{theorem}
From Theorem~\ref{thm:vochd-tree}, we know that
$D_{n-1}(\X)$ can be regarded as a forest (i.e. a set of trees)
by the inclusion relation. The trees are called \textit{persistence trees}.
We can compute all optimal volumes and persistence trees on $D_{n-1}(\X)$
by the merge tree algorithm (Algorithm~\ref{alg:volopt-hd-compute}).
This algorithm is a modified version of the algorithm in \cite{voc}.
To describe the algorithm,
we prepare a directed graph $(V, E)$, where $V$ is a set of nodes and
$E$ is a set of edges. In the algorithm, an element of $V$ is
an $n$-cell in $X \cup \{\sigma_{\infty}\}$ and an element of $E$ is
a directed edge between two $n$-cells, where $\sigma_\infty = \R^n \backslash X$
is the $n$-cell in the one-point compactification space $\R^n \cup \{\infty\} \simeq S^n$.
An edge has extra data
in $\Zint$, and we write the edge from $\sigma$ to $\tau$ with
extra data $k$ as $(\sigma \xrightarrow{k} \tau)$.
Since the graph remains a forest throughout the whole algorithm,
we can always find the root
of the tree which contains an $n$-cell $\sigma$ in the graph $(V, E)$
by recursively following edges from $\sigma$.
We call this procedure \textproc{Root}($\sigma, V, E$).
\begin{algorithm}
\caption{Computing persistence trees by merge-tree algorithm}\label{alg:volopt-hd-compute}
\begin{algorithmic}
\Procedure{Compute-Tree}{$\X$}
\State initialize $V = \{\sigma_\infty\}$ and $E = \emptyset$
\For{$k=K,\ldots,1$}
\If{$\sigma_k$ is an $n$-simplex}
\State add $\sigma_k$ to $V$
\ElsIf{$\sigma_k$ is a $(n-1)$-simplex}
\State let $\sigma_s$ and $\sigma_t$ be the two $n$-cells
whose common face is $\sigma_k$
\State $\sigma_{s'} \gets \textproc{Root}(\sigma_s, V, E)$
\State $\sigma_{t'} \gets \textproc{Root}(\sigma_t, V, E)$
\If{$s'=t'$}
\State \textbf{continue}
\ElsIf{$s'> t'$}
\State Add $(\sigma_{t'} \xrightarrow{k} \sigma_{s'})$ to $E$
\Else
\State Add $(\sigma_{s'} \xrightarrow{k} \sigma_{t'})$ to $E$
\EndIf
\EndIf
\EndFor
\Return $(V, E)$
\EndProcedure
\end{algorithmic}
\end{algorithm}
The following theorem gives us the interpretation of
the result of the algorithm in terms of persistence information.
\begin{theorem}\label{thm:vochd-alg}
Let $(V, E)$ be the result of Algorithm~\ref{alg:volopt-hd-compute}. Then
the following hold.
\begin{enumerate}[(i)]
\item $D_{n-1}(\X) = \{(b, d) \mid (\sigma_d \xrightarrow{b} \sigma_s) \in E\}$
\item The optimal volume for $(b, d)$ consists of
$\sigma_d$ and all its descendant nodes in $(V, E)$
\item The persistence trees are computable from $(V, E)$. That is,
$(b_i, d_i)$ is a child of $(b_j, d_j)$ if and only if there are edges
$\sigma_{d_i} \xrightarrow{b_i} \sigma_{d_j} \xrightarrow{b_j} \sigma_{s}$.
\end{enumerate}
\end{theorem}
The theorems in this section can be proven from the following facts:
\begin{itemize}
\item From Alexander duality, for a simplicial complex $X$ in $\R^n$,
\begin{align*}
H_q(X) \simeq H^{n-q-1}((\R^n\backslash X)\cup\{\infty\}),
\end{align*}
holds.
\begin{itemize}
\item $\infty$ is required for one point compactification of $\R^n$.
\item More precisely, we use the dual decomposition of $X$.
\end{itemize}
\item By applying the above Alexander duality to a filtration,
$(n-1)$-th persistent homology is isomorphic to $0$-th persistent cohomology
of the dual filtration.
\item On a cell complex $\bar{X}$, a basis of the $0$th cohomology vector space is
given by
\begin{align*}
\Big\{ \sum_{\sigma \in C} \sigma^* \;\Big|\; C \in \textrm{cc}(\bar{X})\Big\},
\end{align*}
where $\textrm{cc}(\bar{X})$
is the decomposition of the 0-cells of $\bar{X}$ into connected components.
\item The merge-tree algorithm traces the change of connectivity in the filtration, and
it gives the structure of the 0th persistent cohomology.
\end{itemize}
We prove the theorems in Appendix~\ref{sec:pfvochd}.
\subsection{Computation cost for merge-tree algorithm}
\label{sec:faster}
In the algorithm, we need to find the root of a tree from one of its descendant nodes.
The naive way to find the root is to follow the graph step by step
toward the ancestors. In the worst case, the time complexity
of one such query
is $O(N)$, where $N$ is the number of
$n$-simplices, and the total time complexity of the algorithm becomes $O(N^2)$.
The union-find algorithm~\cite{unionfind} is used
for a similar data structure, and we can apply its idea here.
By adding shortcut paths to the root in a similar way to the union-find algorithm,
the amortized time complexity is improved to almost constant time\footnote{
More precisely, the amortized time complexity is bounded by the inverse of
Ackermann function and it is less than 5 if
the data size is less than $2^{2^{2^{2^{16}}}}$. Therefore we can regard the time
complexity as constant.
}.
Using this technique, the total time complexity of Algorithm~\ref{alg:volopt-hd-compute}
is $O(N)$.
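To make the data structure concrete, the following Python sketch implements
Algorithm~\ref{alg:volopt-hd-compute} together with this path-compression
technique; the input encoding (each $(n-1)$-simplex carries the indices of the
two $n$-cells sharing it, with $\sigma_\infty$ given index $K+1$) is our own
assumption.
\begin{verbatim}
def compute_tree(simplices, n):
    # simplices: the filtration sigma_1, ..., sigma_K as (dim, data)
    # pairs; for an (n-1)-simplex, data = (s, t) are the indices of the
    # two n-cells sharing it (the outer cell sigma_infinity is K + 1).
    # Returns E as a dict: death index d -> (birth index b, parent).
    K = len(simplices)
    parent = {K + 1: K + 1}          # sigma_infinity is its own root

    def root(i):                     # find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # shortcut to the grandparent
            i = parent[i]
        return i

    edges = {}
    for k in range(K, 0, -1):        # k = K, ..., 1
        dim, data = simplices[k - 1]
        if dim == n:                 # a new n-cell becomes its own tree
            parent[k] = k
        elif dim == n - 1:           # candidate merge of two trees
            s, t = data
            rs, rt = root(s), root(t)
            if rs == rt:
                continue             # already connected: no new pair
            if rs > rt:
                rs, rt = rt, rs      # the smaller root becomes the child
            parent[rs] = rt
            edges[rs] = (k, rt)      # birth-death pair (k, rs)
    return edges
\end{verbatim}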
\section{Comparison between volume optimal cycles
and optimal cycles}\label{sec:compare}
In this section, we compare volume optimal cycles and optimal cycles.
In fact, optimal cycles and volume optimal cycles are identical in many cases.
However, since we can use optimal volumes in addition to volume optimal cycles,
we have more information than with optimal cycles.
One of the most prominent advantages of volume optimal cycles is children birth-death pairs,
explained below.
\subsection{Children birth-death pairs}
In the above section, we showed that there is a tree structure
on an $(n-1)$-th persistence diagram computed from
a triangulation of a convex set in $\R^n$. Unfortunately,
such a tree structure does not exist in the general case.
However, in the study of amorphous solids by persistent homology~\cite{Hiraoka28062016},
a hierarchical structure of rings in $\R^3$ is effectively used, and
it is helpful if we can find such a structure on a computer.
In \cite{Hiraoka28062016}, the hierarchical structure
was found by computing all optimal cycles and
searching for multiple optimal cycles that have common vertices.
However, computing all optimal cycles or all volume optimal cycles
is often expensive, as shown in Section \ref{subsec:performance}, and
we require a cheaper method. The optimal volume serves that purpose.
When the optimal volume for a birth-death pair $(b_i, d_i)$ is
$\hat{z} = \sigma_{d_i} + \sum_{\sigma_k \in \mathcal{F}_{q+1}} \hat{\alpha}_k \sigma_k$,
the \textit{children birth-death pairs} of $(b_i, d_i)$ are defined as follows:
\begin{align*}
\{(b_j, d_j) \in D_q(\X) \mid \sigma_{d_j} \in \mathcal{F}_{q+1},
\hat{\alpha}_{d_j} \not = 0 \}.
\end{align*}
These are easily computable from an optimal volume with low computation cost
(a minimal sketch is shown after this paragraph).
Now we remark that if we consider $(n-1)$-th persistent homology in $\R^n$,
the children birth-death pairs of $(b_i, d_i) \in D_{n-1}(\X)$ are identical to
all descendants of $(b_i, d_i)$ in the tree structure; this follows from
Theorem~\ref{thm:vochd-tree}. This fact suggests that we can use
children birth-death pairs as a good substitute for the tree structure appearing
on $D_{n-1}(\X)$ in $\R^n$. The usefulness of children birth-death pairs is shown in
Section \ref{sec:example-silica}, the example of amorphous silica.
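A minimal Python sketch of this extraction, assuming the optimal volume is given
as a map from simplex indices to coefficients (our own encoding), is:
\begin{verbatim}
def children_pairs(optimal_volume, diagram, d_i):
    # optimal_volume : dict mapping simplex index k -> coefficient
    # diagram        : list of birth-death pairs (b_j, d_j) in D_q
    # d_i            : death index of the pair whose volume is given
    return [(b, d) for (b, d) in diagram
            if d != d_i and optimal_volume.get(d, 0) != 0]
\end{verbatim}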
\subsection{Some examples in which volume optimal cycles and
optimal cycles are different}
We show some differences between optimal cycles and volume optimal cycles
on a filtration.
In Fig.~\ref{fig:oc-voc-diff-1},
the 1st PD of the filtration is $\{(2, 5), (3, 4)\}$.
The optimal cycle of $(3, 4)$ is $z_1$ since
$\|z_1\|_1 < \|z_2\|_1$ but the volume optimal cycle is $z_2$.
In this example, $z_2$ is better than $z_1$ to represent the
birth-death pair $(3, 4)$.
The example is deeply related to Theorem~\ref{thm:good_voc}.
Such a theorem does not hold for optimal cycles and it means that an optimal cycle may
give misleading information about a birth-death pair.
This is one advantage of volume optimal cycles compared to optimal cycles.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.9\hsize]{oc-voc-diff-1.pdf}
\caption{A filtration whose optimal cycle and volume optimal cycle are different.}
\label{fig:oc-voc-diff-1}
\end{figure}
In Fig.~\ref{fig:oc-voc-diff-2} and Fig.~\ref{fig:oc-voc-diff-3},
optimal cycles and volume optimal cycles are also different.
In Fig.~\ref{fig:oc-voc-diff-2},
the optimal cycle is $z_1$ but the volume optimal cycle is $z_2$.
In Fig.~\ref{fig:oc-voc-diff-3},
the optimal cycle for $(3, 4)$ is $z_1$
but the volume optimal cycle is $z_1 + z_2$.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.3\hsize]{oc-voc-diff-2.pdf}
\caption{Another filtration whose optimal cycle and volume optimal cycle are different.}
\label{fig:oc-voc-diff-2}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.7\hsize]{oc-voc-diff-3.pdf}
\caption{Another filtration whose optimal cycle and volume optimal cycle are different.}
\label{fig:oc-voc-diff-3}
\end{figure}
In Fig.~\ref{fig:no-voc}, the 1st PD is $\{(2, \infty)\}$, and we cannot define the volume optimal
cycle but can define the optimal cycle. In general, we cannot define
the volume optimal cycle for a birth-death pair with infinite death time.
If we use an alpha filtration in $\R^n$, such a problem does not occur because
a Delaunay triangulation is always acyclic. But if we use another type of filtration,
we may not be able to use volume optimal cycles.
This may be a disadvantage of volume optimal cycles if we use a filtration other than
an alpha filtration, such as a Vietoris-Rips filtration.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.3\hsize]{no-voc.pdf}
\caption{A filtration without a volume optimal cycle.}
\label{fig:no-voc}
\end{figure}
One more advantage of volume optimal cycles is
the simplicity of the computation algorithm. For the computation of
optimal cycles we need to keep track of all persistence cycles, but
for volume optimal cycles we need only the birth-death pairs.
Some efficient algorithms implemented in phat and dipha do not keep track of
such data; hence we cannot use such software to compute optimal cycles
without modification.
By contrast, we can use such software for the computation of
volume optimal cycles.
\section{Example}\label{sec:example}
In this section, we show example results of our algorithms.
In all of these examples, we use alpha or weighted alpha filtrations.
For all of these examples, optimal volumes and volume optimal cycles are computed
on a laptop PC with 1.2 GHz Intel(R) Core(TM) M-5Y71 CPU and 8GB memory on Debian 9.1.
Dipha~\cite{dipha} is used to compute PDs,
CGAL\footnote{\url{http://www.cgal.org/}} is used to compute (weighted) alpha filtrations,
and Clp~\cite{coin} is used to solve the linear programming.
Python is used to write the program and pulp\footnote{\url{https://github.com/coin-or/pulp}} is used for
the interface to Clp from python.
Paraview\footnote{\url{https://www.paraview.org/}} is used to visualize volume optimal cycles.
If you want to use the software, please contact
us. Homcloud\footnote{\url{http://www.wpi-aimr.tohoku.ac.jp/hiraoka_labo/research-english.html}},
a data analysis software package based on persistent homology developed
by our laboratory, provides the algorithms shown in this paper.
Homcloud provides the easy access to the volume optimal cycles. We can visualize
the volume optimal cycle of a birth-death pair only by clicking the pair in a PD on
Homcloud's GUI.
\subsection{2-dimensional Torus}
The first example is a 2-dimensional torus in $\R^3$. 2400 points are randomly scattered on
the torus and PDs are computed. Figure~\ref{fig:pd-torus} shows the
1st and 2nd PDs. The 1st PD has two birth-death pairs
$(0.001, 0.072)$ and $(0.001, 0.453)$ and the 2nd PD has
one birth-death pair $(0.008, 0.081)$ far from the diagonal. These birth-death pairs
correspond to generators of $H_1(\mathbb{T}^2) \simeq \Bbbk^2$ and
$H_2(\mathbb{T}^2) \simeq \Bbbk$.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.4\hsize]{torus-pd1.png}
\includegraphics[width=0.4\hsize]{torus-pd2.png}
\caption{The 1st and 2nd PDs of the point cloud on a torus.}
\label{fig:pd-torus}
\end{figure}
\begin{figure}[tbp]
\centering
\includegraphics[width=0.3\hsize]{torus-w-1.png}
\includegraphics[width=0.3\hsize]{torus-w-2.png}
\includegraphics[width=0.3\hsize]{torus-w-3.png}
\caption{Volume optimal cycles for $(0.001, 0.072)$ and $(0.001, 0.453)$ in $D_1$ and
$(0.008, 0.081)$ in $D_2$
on the torus point cloud.}
\label{fig:torus-voc}
\end{figure}
Figure~\ref{fig:torus-voc} shows the volume optimal cycles of these three birth-death pairs
using Algorithm~\ref{alg:volopt}.
Blue lines show volume optimal cycles, red lines show optimal volumes,
and black lines show $\sigma_d$ for each birth-death pair $(b, d)$ (we call this simplex the \textit{death simplex}). Black dots show the point cloud. From the figure, we can understand
how homology generators appear and disappear in the filtration of the
torus point cloud.
The computation times are 25sec, 33sec, and 7sec on our laptop PC.
By using Algorithm~\ref{alg:volopt-hd-compute}, we can also compute volume optimal cycles
in $D_2$. In this example, the computation time by
Algorithm~\ref{alg:volopt-hd-compute} is about 2sec. This is much faster than
Algorithm~\ref{alg:volopt}, even though Algorithm~\ref{alg:volopt-hd-compute} computes
\emph{all} volume optimal cycles.
\subsection{Amorphous silica}
\label{sec:example-silica}
In this example, we use the atomic configuration of
amorphous silica computed by molecular dynamics simulation
as a point cloud, and we try to reproduce the result
in \cite{Hiraoka28062016}. We use a weighted alpha
filtration whose weights are the radii of the atoms. The number of atoms is
8100: 2700 silicon atoms and 5400 oxygen atoms.
Figure~\ref{fig:amorphous-silica} shows the 1st PD. This diagram
has four characteristic areas $C_P$, $C_T$, $C_O$, and $B_O$.
These areas correspond to
the typical ring structures in amorphous silica as follows.
Amorphous silica consists of silicon atoms and oxygen atoms, and
the network structure is built by covalent bonds between silicon and oxygen atoms.
$C_P$ has rings whose atoms are \ce{$\cdots$ -Si-O-Si-O- $\cdots$ } where
\ce{-} is a covalent bond between a silicon atom and an oxygen atom.
$C_T$ has triangles consisting of \ce{O-Si-O}.
$C_O$ has triangles consisting of three oxygen atoms appearing alternately
in \ce{$\cdots$-O-Si-O-Si-O-$\cdots$}.
$B_O$ has many types of ring structures, but one typical ring is a quadrangle
consisting of four oxygen atoms
appearing alternately
in \ce{$\cdots$-O-Si-O-Si-O-Si-O-$\cdots$}.
Figure~\ref{fig:voc-silica} shows the volume optimal cycles for birth-death pairs
in $C_P, C_T, C_O$, and $B_O$. In this figure, oxygen (red) and silicon (blue) atoms
are also shown in addition to volume optimal cycles, optimal volumes,
and death simplices.
We can reproduce the result of \cite{Hiraoka28062016} about ring reconstruction.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.24\hsize]{c_p.png}
\includegraphics[width=0.24\hsize]{c_t.png}
\includegraphics[width=0.24\hsize]{c_o.png}
\includegraphics[width=0.24\hsize]{b_o.png}
\caption{Volume optimal cycles in amorphous silica in $C_P, C_T, C_O$, and $B_O$ (from left to right).}
\label{fig:voc-silica}
\end{figure}
We also find that the oxygen atom circled in green in this figure
is important in determining the death time. The death time of this birth-death pair is
determined by the circumradius of the black triangle (the death simplex);
hence, if the oxygen atom moves away, the death time becomes larger.
The oxygen atom is contained in another \ce{$\cdots$ -Si-O-Si-O- $\cdots$}
ring structure around the volume optimal cycle (the blue ring). By the
analysis of the optimal volume, we clarify that such an interaction of covalent bond
rings determines the death times of birth-death pairs
in $C_P$. This analysis is impossible with optimal cycles, and
volume optimal cycles thus enable us to analyze persistence diagrams more deeply.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.5\hsize]{children.pdf}
\caption{Children birth-death pairs.
Red circles are children birth-death pairs of the green birth-death pair.}
\label{fig:children-bd-pairs}
\end{figure}
Figure~\ref{fig:children-bd-pairs} shows the children birth-death pairs of
the green birth-death pair. The rings corresponding to these children birth-death
pairs are subrings of the large ring corresponding to the green birth-death pair.
This computation result shows that a ring in $C_P$ has subrings in $C_T$, $C_O$,
and $B_O$. The hierarchical structure of these rings
was shown in \cite{Hiraoka28062016}. We can easily find such a hierarchical structure
by using our new algorithm.
The computation time is 3 or 4 seconds for each
volume optimal cycle on the laptop PC. The computation time for amorphous silica
is much less than
that for the 2-torus, even though the number of points in the amorphous silica example
is larger than that in the 2-torus example. This is because the locality of
volume optimal cycles works very well in the amorphous silica example.
\subsection{Face centered cubic lattice with defects}
The last example uses the point cloud of a face centered cubic (FCC) lattice
with defects. With this example, we show how to use the persistence trees
computed by Algorithm~\ref{alg:volopt-hd-compute}.
The point cloud is prepared by constructing a perfect FCC lattice,
adding small Gaussian noise to each point,
and randomly removing points from the point cloud.
\begin{figure}[thbp]
\centering
\includegraphics[width=0.8\hsize]{fcc-pds.pdf}
\caption{(a) The 2nd PD of the perfect FCC lattice with small Gaussian noise.
(b) The 2nd PD of the lattice with defects.
}\label{fig:fcc-pd}
\end{figure}
Figure~\ref{fig:fcc-pd}(a) shows the 2nd PD of the FCC lattice with small
Gaussian noise. (i) and (ii) in the figure correspond to
octahedral and tetrahedral cavities in the FCC lattice.
In materials science, these cavities are known as octahedral sites and tetrahedral sites.
Figure~\ref{fig:fcc-pd}(b) shows the 2nd PD of the lattice with defects.
In the PD, birth-death pairs corresponding to
octahedral and tetrahedral cavities remain ((i) and (ii) in Fig.~\ref{fig:fcc-pd}(b)),
but other types of birth-death pairs appear in this PD. These pairs
correspond to other types of cavities generated by removing points from the FCC lattice.
Figure~\ref{fig:fcc-tree-1}(a) shows a tree computed
by Algorithm~\ref{alg:volopt-hd-compute}. Red markers are nodes of the tree,
and lines between two markers are edges of the tree, where
upper left nodes are ancestors and lower right nodes are descendants.
The tree means that the largest cavity, corresponding to the most upper-left node,
has subcavities corresponding to its descendant nodes.
Figure~\ref{fig:fcc-tree-1}(b) shows the volume optimal cycle of
the most upper-left node, (c) shows the volume optimal cycles of pairs
in (i), and (d) shows the volume optimal cycles of pairs in (ii).
Using the algorithm,
we can study the hierarchical structures of the 2nd persistent homology.
\begin{figure}[thbp]
\centering
\includegraphics[width=0.85\hsize]{fcc-tree-1.pdf}
\caption{A persistence tree and related volume optimal cycles.
(a) The persistence tree whose root is $(0.68, 1.98)$.
(b) The volume optimal cycle of the root pair.
(c) The volume optimal cycles of birth-death pairs in (i) which are descendants of
the root pair.
(d) The volume optimal cycles of birth-death pairs in (ii) which are descendants of
the root pair.
}
\label{fig:fcc-tree-1}
\end{figure}
\subsection{Computation performance comparison with optimal cycles}
\label{subsec:performance}
We compare the computation performance between optimal cycles and volume optimal cycles.
We use OptiPers for the computation of optimal cycles for persistent homology,
which is provided by Dr. Escolar, one of the authors of \cite{Escolar2016}.
OptiPers is written in C++ and our software is mainly written in python,
and python is much slower than C++, so
the comparison is not fair, but suggestive for the readers.
We use two test datasets.
One is the atomic configuration of amorphous silica used in the above example;
the number of points is 8100.
The other is a partial point cloud of the amorphous silica;
the number of points is 881. We call these the large data and the small data.
Table~\ref{tab:performance} shows the computation time of
optimal cycles/volume optimal cycles for all birth-death pairs in the 1st PD
by OptiPers/Homcloud.
\begin{table}[thbp]
\centering
\begin{tabular}{c|cc}
& optimal cycles (OptiPers) & volume optimal cycles (Homcloud) \\ \hline
the small data & 1min 17sec & 3min 9sec \\
the large data & 5hour 46min & 4hour 13min\\
\end{tabular}
\caption{Computation time of optimal cycles and volume optimal cycles on the large/small data.}
\label{tab:performance}
\end{table}
For the small data, OptiPers is faster than Homcloud, but, on the contrary,
for the large data, Homcloud is faster than OptiPers. This is because
the performance improvement technique using the locality of the optimal volume
works well for the large data, while for the small data the technique is not
so effective and the overhead of using Python dominates for Homcloud.
This benchmark shows that the volume optimal cycles have an advantage about
the computation time when an input point cloud is large.
\section{Conclusion}\label{sec:conclusion}
In this paper, we proposed the idea of volume optimal cycles to identify
good geometric realizations of homology generators appearing in persistent homology.
Optimal cycles were proposed for that purpose in \cite{Escolar2016},
but our method is faster for large data and
gives better information. In particular, we can cheaply compute
children birth-death pairs
from an optimal volume alone. Volume optimal cycles
were already proposed under a limitation on the dimension in \cite{voc},
and this paper generalizes the idea.
Our idea and algorithms are widely applicable to, and
useful for,
the analysis of point clouds in $\R^n$ by using the (weighted) alpha filtrations.
Our method gives us an intuitive understanding of PDs. In~\cite{PDML}, such inverse analysis
from a PD to its original data is effectively used to study many geometric data
with machine learning on PDs, and our method is useful for
the combination of persistent homology and machine learning.
In this paper, we only treat simplicial complexes, but our method is
also applicable to cell filtrations and cubical filtrations.
Our algorithms will be useful to study sublevel or superlevel filtrations
given by 2D/3D digital images.
\appendix
\section{Proofs of Section \ref{sec:vochd}}\label{sec:pfvochd}
The theorems shown in this section are a kind of folklore.
Researchers in persistent homology probably know the fact that the merge-tree
algorithm gives a 0th PD, and that the algorithm
can be used to compute an $(n-1)$-th PD using Alexander duality,
but we could not find
a complete proof in the literature.
\cite{voc} stated that the algorithm also gives the tree structure on an
$(n-1)$-th PD, but the thesis does not contain a complete proof.
Therefore we give the proofs here.
Alexander duality says that for any good topological subspace $X$ of $S^n$,
the $(k-1)$-th homology of $X$ and the $(n-k)$-th cohomology of $S^n\backslash X$ carry
the same information. In this section, we prove an Alexander duality theorem
for persistent homology.
In this section, we always use $\Zint_2$ as a coefficient of homology and cohomology.
\subsection{Persistent cohomology}
The persistent cohomology is defined on a decreasing sequence
$\Y : Y_0 \supset \cdots \supset Y_K$ of topological spaces.
The cohomology vector spaces and the linear maps induced from
inclusion maps define the sequence
\begin{align*}
H^q(Y_0) \to \cdots \to H^q(Y_K),
\end{align*}
and this family of maps is called persistent cohomology $H^q(\mathbb{Y})$.
The decomposition theorem also holds for persistent cohomology in the same way as
persistent homology and we define the $q$th cohomologous persistence
diagram $D^q(\mathbb{Y})$
using the decomposition.
\subsection{Alexander duality}\label{sec:alex}
Before explaining
Alexander duality, we show the following proposition about the dual decomposition.
\begin{prop}\label{prop:dual}
For any oriented closed $n$-manifold $M$ and its simplicial decomposition $K$,
there is a decomposition $\bar{K}$ of $M$ satisfying the following:
\begin{enumerate}
\item $\bar{K}$ is a cell complex of $M$.
\item There is a one-to-one correspondence between $K$ and $\bar{K}$.
For $\sigma \in K$, we write the corresponding cell in $\bar{K}$ as $\bsigma$.
\item $\dim \sigma = n - \dim \bsigma$ for any $\sigma \in K$.
\item If $X \subset K$ is a subcomplex of $K$, then
\begin{align*}
\bar{X} = \{\bsigma \mid \sigma \not\in X\}
\end{align*}
is a subcomplex of $\bar{K}$.
\item We consider the chain complexes of $K$ and $\bar{K}$,
let $\partial$ and $\bar{\partial}$ be boundary operators on
those chain complexes, and let $B$ and $\bar{B}$ be matrix representations of
$\partial$ and $\bar{\partial}$, i.e.
$\partial \sigma_i = \sum_j B_{ji} \sigma_j$ and
$\bar{\partial} \bsigma_i = \sum_j \bar{B}_{ji} \bsigma_j$.
Then $\bar{B}$ is the transpose of $B$.
\end{enumerate}
\end{prop}
This decomposition $\bar{K}$ is called the \textit{dual decomposition} of $K$.
One example of the dual decomposition is a Voronoi decomposition
with respect to a Delaunay triangulation.
Using the dual decomposition, we can define the map $\theta$ from
$C_k(K)$ to $C^{n-k}(\bar{K})$
for $k=0,\ldots,n$
as the linear extension of $\sigma_i \mapsto \bsigma_i^*$, where
$\{\sigma_i\}_i$ are $k$-simplices of $K$,
$\{\bsigma_i\}_i$ are corresponding $(n-k)$-cells of $\bar{K}$,
and
$\{\bsigma_i^* \in C^{n-k}(\bar{K})\}_i$ is the dual basis of
$\{\bsigma_i\}_i$.
The map $\theta$ satisfies the equation
\begin{align}
\theta \circ \partial = \delta \circ \theta, \label{eq:comm_poincare}
\end{align}
where $\delta$ is the coboundary
operator on $C^*(\bar{K})$; this equation follows from Proposition \ref{prop:dual}(5). The map $\theta$ induces the
isomorphism $H_k(K) \simeq H^{n-k}(\bar{K})$, and the isomorphism is called
Poincar\'{e} duality.
Using the dual decomposition, we can show the Alexander duality theorem.
\begin{theorem}
For an $n$-sphere $S^n$, its simplicial decomposition $K$, and
a subcomplex $X \subset K$, we take the dual decomposition $\bar{K}$ and
the subcomplex $\bar{X}$ as in Proposition \ref{prop:dual}. Then,
\begin{align}
\tilde{H}_{k-1}(X) \simeq \tilde{H}^{n-k}(\bar{X}) \label{eq:alex_iso}
\end{align}
holds for any $k=1,\ldots,n$, where $\tilde{H}$ is the
reduced (co)-homology.
\end{theorem}
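For example (a standard illustration), if $X$ is a simplicial circle in $S^2$,
so $n=2$ and $k=2$, then $\tilde{H}_{1}(X) \simeq \Zint_2$, while $\bar{X}$
corresponds to the two complementary disks of the circle and hence has
two connected components, so $\tilde{H}^{0}(\bar{X}) \simeq \Zint_2$ as well.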
To apply the duality theorem to persistent homology,
we investigate this isomorphism in detail.
First, we consider the case $K = X$. In this case, $\bar{X} = \emptyset$
and the homology of $X$ is that of an $n$-sphere.
Therefore, $\tilde{H}_k(X) = 0$ for any $k=0,\ldots, n-1$
and this is isomorphic to $\tilde{H}^{n-k-1}(\emptyset) = 0$.
Next, we consider the case $K \not = X$. In this case, there is an
$n$-simplex of $K$ which is not contained in $X$. We write the $n$-simplex
as $\omega$ and let $K_0$ be $K\backslash\{\omega\}$.
\begin{prop}
We have the following isomorphism:
\begin{align}
H_k(K, X) \simeq H^{n-k}(\bar{X}).
\label{eq:kx_barx_iso}
\end{align}
This isomorphism is induced by:
\begin{align*}
\bar{\theta} : C_k(K, X) = C_k(K)/C_k(X) &\to C^{n-k}(\bar{X}) \\
\sum_{i=s+1}^t a_i \sigma_i + C_k(X) &\mapsto \sum_{i=s+1}^t a_i \bsigma_i^*
\end{align*}
where $\{\sigma_1,\ldots,\sigma_t\}$ is all $k$-simplices of $K$ and
$\{\sigma_1,\ldots,\sigma_s\}$ is all $k$-simplices of $X$.
\end{prop}
The map $\bar{\theta}$ is well-defined and an isomorphism since
$\{\bsigma_{s+1}, \ldots, \bsigma_{t}\}$ is equal to the set of
all $(n-k)$-cells of $\bar{X}$. In addition,
$\delta \circ \bar{\theta} = \bar{\theta} \circ \partial$ holds
where $\partial$ is the boundary operator on $C_*(K, X)$, and
$\delta$ is the coboundary operator on $C^*(\bar{X})$
due to \eqref{eq:comm_poincare}.
Using the map $\bar{\theta}$, the isomorphism
$\bar{\theta}_* : H_k(K, X) \to H^{n-k}(\bar{X})$ is defined as follows,
\begin{align}
\left[\sum_{i=s+1}^t a_i \sigma_i + C_k(X)\right] \mapsto
\left[\sum_{i=s+1}^t a_i \bsigma_i^*\right]. \label{eq:bar_theta_star}
\end{align}
The next key is the long exact sequence of the pair $(K, X)$.
\begin{align}
\cdots \to
\tilde{H}_k(X) \to
\tilde{H}_k(K) \xrightarrow{j_*}
H_k(K, X) \xrightarrow{\partial_*}
\tilde{H}_{k-1}(X) \to \tilde{H}_{k-1}(K) \to \cdots . \label{eq:long_exact_seq}
\end{align}
The map $\partial_*$ is written as follows
\begin{align}
\partial_*([z + C_k(X)]) = [\partial z],
\label{eq:partial_star}
\end{align}
and $j_*$ is induced by the projection map from $C_k(K)$ to $C_k(K, X)$.
If $k \not = n$, since both $\tilde{H}_k(K)$ and $\tilde{H}_{k-1}(K)$ are zero,
the following map is an isomorphism due to the long exact sequence \eqref{eq:long_exact_seq}:
\begin{align}
\partial_* : H_{k}(K, X) \xrightarrow{\sim} \tilde{H}_{k-1}(X). \label{eq:partial_star_iso}
\end{align}
By combining \eqref{eq:kx_barx_iso} and \eqref{eq:partial_star_iso},
we conclude the isomorphism \eqref{eq:alex_iso} for $k \not = n$.
We can explicitly write the isomorphism from $\tilde{H}^{n-k}(\bar{X})$
to $\tilde{H}_{k-1}(X)$ as follows, using
\eqref{eq:bar_theta_star} and \eqref{eq:partial_star}:
\begin{align}
\left[\sum_{i=s+1}^t a_i \bsigma_i^*\right] \mapsto
\left[\partial\left(\sum_{i=s+1}^t a_i \bsigma_i\right)\right].
\end{align}
When $k=n$, we need to treat the problem more carefully.
From the long exact sequence \eqref{eq:long_exact_seq}, we can show that
the following sequence is exact:
\begin{align}
\begin{array}{ccccccccc}
H_n(X)&\to&H_n(K)&\xrightarrow{j_*}&H_n(K, X)&\xrightarrow{\partial_*}&H_{n-1}(X)&\to&H_{n-1}(K) \\
\veq && \vsimeq & & & & & & \veq \\
0 && \Zint_2 & & & & & & 0 \\
\end{array}.
\end{align}
Let $\{\sigma_1, \ldots, \sigma_{t-1}, \sigma_t=\omega\}$ be
the $n$-simplices of $K$ and $\{\sigma_1, \ldots, \sigma_s\}$ be
the $n$-simplices of $X$. From the assumption $X \not = K$,
we have $s < t$.
It is easy to show that $\tau = \sigma_1 + \cdots + \sigma_t$ is
the generator of $Z_n(K)$. From the definition of
the reduced cohomology,
\begin{align}
\tilde{H}^0(\bar{X}) = Z^0(\bar{X})/\left<\bar{\tau}\right>, \label{eq:cohomology0}
\end{align}
where
$Z^0(\bar{X}) = \ker(\delta: C^0(\bar{X}) \to C^1(\bar{X}))$ and
$\bar{\tau} = \bar{\theta}(j(\tau)) = \bsigma_{s+1}^* + \cdots + \bsigma_t^*$.
For the $\Zint_2$ coefficient, the following set is a basis of $Z^0(\bar{X})$:
\begin{align}
\{ \sum_{\bsigma \in C}\bsigma^* \mid C \in \textrm{cc}(\bar{X}) \},
\label{eq:z0_basis}
\end{align}
where $\textrm{cc}(\bar{X})$ is the set of connected components of $\bar{X}$, each
regarded as its set of 0-cells. Therefore, a basis of $\tilde{H}^0(\bar{X})$ is given by
\begin{align*}
\{ [\sum_{\bsigma \in C}\bsigma^*]
\mid C \in \textrm{cc}_\omega(\bar{X}) \},
\end{align*}
where $\textrm{cc}_\omega(\bar{X}) \subset \textrm{cc}(\bar{X})$
is the set of connected components which do not contain $\bar{\omega}$.
Using the above relations, we can show $\tilde{H}^0(\bar{X}) \simeq H_{n-1}(X)$,
where the isomorphism is the linear extension of the following:
\begin{equation}
\begin{aligned}
\Theta&: \tilde{H}^0(\bar{X}) \to H_{n-1}(X) \\
\Theta&([\sum_{\bsigma \in C}\bsigma^*]) =
[\partial(\sum_{\bsigma \in C}\sigma)] \\
&\textrm{for all } C \in \textrm{cc}_\omega(\bar{X}).
\end{aligned}
\label{eq:ccboundary}
\end{equation}
\subsection{Alexander duality and persistent homology}
\label{subsec:alex_ph}
To apply Alexander duality to persistent homology, we need to
consider the relation between the inclusion maps and the isomorphism $\bar{\theta}_*$.
For two subcomplexes $X_1 \subset X_2$ of $K$, the following diagram commutes:
\begin{equation}
\label{eq:alexph_comm}
\begin{aligned}
\begin{CD}
C_\ell(K, X_1) @>\bar{\theta}>> C^{n-\ell}(\bar{X}_1) \\
@VV{\phi}V @VV\bar{\phi}^{\vee}V \\
C_\ell(K, X_2) @>\bar{\theta}>> C^{n-\ell}(\bar{X}_2), \\
\end{CD}
\end{aligned}
\end{equation}
where $\phi$ and $\bar{\phi}^\vee$ are induced from the inclusion maps.
Note that
$X_1 \subset X_2$ induces $\bar{X}_1 \supset \bar{X}_2$ and
$\bar{\phi}^\vee$ is defined from
$C^{n-\ell}(\bar{X}_1)$ to $C^{n-\ell}(\bar{X}_2)$.
Using \eqref{eq:alexph_comm},
we have the following commutative diagram:
\begin{equation}
\label{eq:alexph_comm2}
\begin{aligned}
\begin{CD}
H_\ell(K, X_1) @>\bar{\theta}_*>> H^{n-\ell}(\bar{X}_1) \\
@VV{\phi_*}V @VV\bar{\phi}^{*}V \\
H_\ell(K, X_2) @>\bar{\theta}_*>> H^{n-\ell}(\bar{X}_2). \\
\end{CD}
\end{aligned}
\end{equation}
We also have the following commutative diagram between the two long exact sequences:
\begin{equation}
\label{eq:alexph_comm3}
\begin{aligned}
\begin{CD}
\cdots @>>> \tilde{H}_k(X_1) @>>> \tilde{H}_k(K) @>j_*>>
H_{k}(K, X_1) @>\partial_*>>
\tilde{H}_{k-1}(X_1) @>>> \cdots \\
@. @VV{\phi}_*V @| @VV\phi_*V @VV\phi_*V @. \\
\cdots @>>> \tilde{H}_k(X_2) @>>> \tilde{H}_k(K) @>j_*>>
H_{k}(K, X_2) @>\partial_*>>
\tilde{H}_{k-1}(X_2) @>>> \cdots . \\
\end{CD}
\end{aligned}
\end{equation}
From \eqref{eq:alexph_comm2}, \eqref{eq:alexph_comm3}, and the discussion
in Section~\ref{sec:alex}, we have the following commutative diagram:
\begin{align*}
\begin{CD}
\tilde{H}_{\ell-1}(X_2) @>\sim>> \tilde{H}^{n-\ell}(\bar{X}_2) \\
@VV\phi_*V @VV\bar{\phi}^*V \\
\tilde{H}_{\ell-1}(X_1) @>\sim>> \tilde{H}^{n-\ell}(\bar{X}_1).
\end{CD}
\end{align*}
This diagram means that the isomorphism preserves
the decomposition structure of persistent homology, and hence
$\tilde{H}_{\ell-1}(\X) \simeq \tilde{H}^{n-\ell}(\bar{\X})$ holds
for $\X: X_0 \subset \cdots \subset X_K$,
where $\bar{\X} : \bar{X}_0 \supset \cdots \supset \bar{X}_K$.
\subsection{Alexander duality and a triangulation in $\R^n$}
\label{sec:alex_alpha}
Here, we consider a simplicial filtration in $\R^n$ satisfying
Condition~\ref{cond:rn}. Under this condition,
we need to embed the filtration $\X$
in $\R^n$ into $S^n$ by one-point compactification. We consider an
embedding $|X| \to S^n$ and take $\sigma_{\infty}$ to be $S^n\backslash |X|$.
Using the embedding,
we can regard $X \cup \{\sigma_\infty\}$ as a cell decomposition of $S^n$.
The above discussion about Alexander duality on persistent homology
works on this cell complex,
if we properly define the boundary operator and the dual decomposition.
In that case, we regard $\sigma_\infty$ as $\omega$ in the definition of $K_0$.
\subsection{Merge-Tree Algorithm for 0th Persistent Cohomology}
\label{subsec:treemerge-0-pcohom}
The above discussion shows that
all we need to do is to give an algorithm for
computing the $0$th persistent cohomology of the dual filtration.
In fact, we can efficiently compute
the $0$th cohomologous persistence diagram
using the following merge-tree algorithm.
To simplify the explanation of the algorithm, we assume the following condition.
This condition corresponds to Condition~\ref{cond:ph} for persistent homology.
\begin{cond}\label{cond:cohom}\
\begin{itemize}
\item
$Y = \{\bsigma_1, \ldots, \bsigma_K, \bsigma_\infty \}$ is a cell complex and
$Y_k = \{\bsigma_{k+1},\ldots, \bsigma_K, \bsigma_\infty\}$ is a subcomplex of $Y$ for any
$0 \leq k < K$.
\item $\bsigma_{\infty}$ is a 0-cell and $Y_{K} = \{\bsigma_{\infty}\}$ is also a
subcomplex of $Y$.
\item $Y$ is connected.
\end{itemize}
\end{cond}
Under this condition, we explain the algorithm to compute
the decomposition of the 0th persistent cohomology of the
decreasing filtration
$\mathbb{Y}:Y = Y_0 \supset \cdots \supset Y_{K} = \{\bsigma_\infty\}$.
Algorithm~\ref{alg:tree-0-compute} computes the 0th cohomologous persistence diagram.
In this algorithm, $(V_k, E_k)$ is a graph whose nodes are 0-cells of $Y$ and
whose edges have extra data in $\Zint$.
Later we show that this algorithm is applicable to
computing $D_{n-1}(\X)$ using Alexander duality.
\begin{algorithm}[h!]
\caption{Merge-Tree algorithm for the 0th cohomologous PD}\label{alg:tree-0-compute}
\begin{algorithmic}
\Procedure{Compute-Tree}{$\mathbb{Y}$}
\State initialize $V_{K} = \{\bsigma_\infty\}$ and $E_{K} = \emptyset$
\For{$k=K,\ldots,1$}
\If{$\bsigma_k$ is a $0$-simplex}
\State $V_{k-1} \gets V_{k} \cup \{\bsigma_k\},\ E_{k-1} \gets E_{k}$
\ElsIf{$\bsigma_k$ is a $1$-simplex}
\State let $\bsigma_s, \bsigma_t$ be the two endpoints of $\bsigma_k$
\State $\bsigma_{s'} \gets \textproc{Root}(\bsigma_s, V_{k}, E_{k})$
\State $\bsigma_{t'} \gets \textproc{Root}(\bsigma_t, V_{k}, E_{k})$
\If{$s'=t'$}
\State $V_{k-1} \gets V_{k},\ E_{k-1} \gets E_{k}$
\ElsIf{$s'> t'$}
\State $V_{k-1} \gets V_{k},\
E_{k-1} \gets E_{k}\cup \{(\bsigma_{t'} \xrightarrow{k} \bsigma_{s'})\}$
\Else
\State $V_{k-1} \gets V_{k},\
E_{k-1} \gets E_{k}\cup \{(\bsigma_{s'} \xrightarrow{k} \bsigma_{t'})\}$
\EndIf
\Else
\State $V_{k-1} \gets V_{k},\ E_{k-1} \gets E_{k}$
\EndIf
\EndFor
\Return $(V_0, E_0)$
\EndProcedure
\end{algorithmic}
\end{algorithm}
This algorithm keeps track of all $\{(V_k, E_k)\}_{k=0,\ldots,K}$ for the mathematical
proof, but in an actual implementation one does not need to keep the history
and can directly update the set of nodes and edges.
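For the reader's convenience, we include a small Python sketch of such a direct
implementation (this is only an illustration, not the actual code of Homcloud;
the input format and the use of \texttt{float("inf")} to represent
$\bsigma_\infty$ are choices made here for simplicity):
\begin{verbatim}
INF = float("inf")   # stands for the 0-cell bsigma_infty

def merge_tree(cells):
    # cells: triples (k, dim, endpoints) for bsigma_1, ..., bsigma_K,
    # where endpoints is a pair of 0-cell indices when dim == 1.
    parent = {INF: None}               # bsigma_infty is the initial root
    def root(s):                       # Root() of the pseudocode
        while parent[s] is not None:
            s = parent[s][0]
        return s
    pairs = []
    for k, dim, ends in sorted(cells, reverse=True):  # k = K, ..., 1
        if dim == 0:
            parent[k] = None           # a new connected component
        elif dim == 1:
            s, t = root(ends[0]), root(ends[1])
            if s != t:                 # merge two trees at their roots
                s, t = min(s, t), max(s, t)
                parent[s] = (t, k)     # the edge bsigma_s -> bsigma_t labeled k
                pairs.append((k, s))
    return parent, pairs
\end{verbatim}
Here \texttt{root} corresponds to \textproc{Root} in
Algorithm~\ref{alg:tree-0-compute}, and the returned \texttt{pairs} form
the diagram of Theorem~\ref{thm:tree0} below.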
\begin{theorem}\label{thm:tree0}
The 0th reduced cohomologous persistence diagram $\tilde{D}^0(\mathbb{Y})$
is given as follows:
\begin{align*}
\tilde{D}^0(\mathbb{Y}) = \{(k, s) \mid (\bsigma_s \xrightarrow{k} \bsigma_t) \in E_0\}
\end{align*}
\end{theorem}
To prove the theorem and justify the algorithm, we show some basic facts
about the graphs $(V_k, E_k)$ given by the algorithm.
These facts are shown by checking
the node/edge addition rules of each step in Algorithm~\ref{alg:tree-0-compute}.
\begin{fact}\label{fact:vk}
$V_k = \{\bsigma_\ell : \text{0-simplex in } Y \mid k < \ell\}$
\end{fact}
Fact~\ref{fact:vk} is obvious from the algorithm.
\begin{fact}\label{fact:graph-is-tree}
For any $k$, $(V_k, E_k)$ is a forest, i.e., a set of trees.
That is, the following hold:
\begin{itemize}
\item There is no loop in the graph
\item For any node, the number of outgoing edges from the node is zero or one.
\begin{itemize}
\item If the number is zero, the node is a root node
\item If the number is one, the node is a child node
\end{itemize}
\end{itemize}
\end{fact}
We can inductively prove Fact~\ref{fact:graph-is-tree}
since an edge is added between two roots of $(V_{k}, E_{k})$ in the algorithm.
\begin{fact}\label{fact:graph-cc}
The topological connectivity of $Y_k$ is the same as that of
$(V_k, E_k)$. That is,
$\{\bsigma_{i_1}, \ldots, \bsigma_{i_\ell}\}$ is the set of all 0-simplices
of a connected component of $Y_k$ if and only if
there is a tree in $(V_k, E_k)$ whose nodes are
$\{\bsigma_{i_1}, \ldots, \bsigma_{i_\ell}\}$.
\end{fact}
This is because the addition of a node to the graph corresponds
to the addition of a connected component in $\mathbb{Y}$ and
the addition of an edge corresponds to the concatenation
of two connected components.
\begin{fact}\label{fact:tree_order}
If there is a path
$\bsigma_s \xrightarrow{k} \bsigma_t \to \cdots \to \bsigma_{s'} \xrightarrow{k'} \bsigma_{t'}$ in $(V_{k''}, E_{k''})$, the following inequality holds:
\begin{align*}
k' < k < s < s'.
\end{align*}
\end{fact}
\begin{proof}[Proof of Fact~\ref{fact:tree_order}]
The edge $\bsigma_s \xrightarrow{k} \bsigma_t$
is added before $\bsigma_{s'} \xrightarrow{k'} \bsigma_{t'}$ is added
in the algorithm since any edge is added between two root nodes; as the
loop index decreases, we have $k'<k$. We can also show that $k < s < t$ and $k'<s'<t'$ from the
rule of edge addition, and these inequalities hold for every intermediate edge
in the path, so we have $s < s'$. The required inequality follows from
these inequalities.
\end{proof}
The following fact holds since, in the algorithm,
each edge is added between two root nodes.
\begin{fact}\label{fact:subtree}
If $\bsigma_s$ is not a root of a tree in $(V_k, E_k)$, the subtree
whose root node is $\bsigma_s$ does not change in the sequence of graphs:
$(V_{k}, E_{k}) \subset \cdots \subset (V_0, E_0)$.
\end{fact}
Using these facts, we investigate the 0th persistent cohomology.
We prepare some notation:
\begin{align*}
R_k &= \{\bsigma_s \mid \bsigma_s \text{ is a root of a tree in } (V_k, E_k)\}, \\
\desc_k(\bsigma_s) &= \{\bsigma_t : \mbox{a descendant node of $\bsigma_s$
in } (V_k, E_k) \mbox{, including $\bsigma_s$ itself}\}, \\
\odesc_k(\bsigma_s) &= \{ \bsigma_t \in \desc_0(\bsigma_s) \mid k \leq t \}, \\
y_s^{(k)} &= \sum_{\bsigma_t \in \desc_k(\bsigma_s)} \bsigma_t^* \in C^0(Y_k),\\
\hat{y}_s^{(k)} &= \sum_{\bsigma_t \in \odesc_k(\bsigma_s)} \bsigma_t^* \in C^0(Y_k),\\
\bar{\varphi}_k^\vee&: C^0(Y_k) \to C^0(Y_{k+1}) \ : \mbox{the induced map of the inclusion map $Y_k \xhookleftarrow{} Y_{k+1}$}.
\end{align*}
We prove the following lemma.
\begin{lem}\label{lem:pcohom-basis}
$\{ \hat{y}_s^{(k)} \mid \bsigma_s \in R_k\}$ is a basis of $Z^0(Y_k) = H^0(Y_k)$.
\end{lem}
\begin{proof}
From Fact~\ref{fact:graph-cc} and the theory of 0th cohomology, we have that
$\{ y_s^{(k)} \mid \bsigma_s \in R_k\}$ is a basis of $H^0(Y_k)$. Here, we prove the
following three facts. Then linear algebra leads to
the statement of the lemma.
\begin{enumerate}[(i)]
\item $\#\{ y_s^{(k)} \mid \bsigma_s \in R_k\} = \# \{ \hat{y}_s^{(k)} \mid \bsigma_s \in R_k\} = \#R_k $
\item Any $\hat{y}_s^{(k)}$ for $\bsigma_s \in R_k$ is a linear
combination of $\{ y_s^{(k)} \mid \bsigma_s \in R_k\}$
\item $\{ \hat{y}_s^{(k)} \mid \bsigma_s \in R_k\}$ are linearly independent.
\end{enumerate}
(i) is trivial. We show (ii). We can write
$\hat{y}_s^{(k)}$ explicitly by using the two graphs $(V_{k}, E_k)$ and
$(V_0, E_0)$ in the following way.
Let $R_k(\bsigma_s)$ be
\begin{align*}
R_k(\bsigma_s) = \{\bsigma_t \in R_k \mid \bsigma_t \mbox{ is a descendant of }
\bsigma_s \mbox{ in } (V_0, E_0), \mbox{ including $\bsigma_s$ itself} \}.
\end{align*}
Then we can write $\hat{y}_s^{(k)} = \sum_{\bsigma_t \in R_k(\bsigma_s)} y_t^{(k)}$.
This equation follows from the following two facts.
\begin{itemize}
\item The family $\{ \desc_k(\bsigma_s) \mid \bsigma_s \in R_k \}$
is pairwise disjoint.
\item $\odesc_k(\bsigma_s) = \bigsqcup_{\bsigma_t \in R_k(\bsigma_s)} \desc_k(\bsigma_t)$
\end{itemize}
The first one comes from Fact~\ref{fact:graph-cc}. Next we show
$\odesc_k(\bsigma_s) \supset \bigsqcup_{\bsigma_t \in R_k(\bsigma_s)} \desc_k(\bsigma_t)$.
Pick any $\bsigma_u \in \desc_k(\bsigma_t)$ with $\bsigma_t \in R_k(\bsigma_s)$.
Then there are a path $\bsigma_u \to \cdots \to \bsigma_t$ in $(V_k,E_k)$ and
a path $\bsigma_t \to \cdots \to \bsigma_s$ in $(V_0, E_0)$. Since $(V_k, E_k)$ is
a subgraph of $(V_0, E_0)$, there is a path from $\bsigma_u$ to $\bsigma_s$
in $(V_0, E_0)$ through $\bsigma_t$, and this means that $\bsigma_u \in \odesc_k(\bsigma_s)$. To show the reverse inclusion, we pick any $\bsigma_u \in \odesc_k(\bsigma_s)$.
Since $\bsigma_u \in V_k$, there is $\bsigma_t \in R_k$ such that
$\bsigma_u \in \desc_k(\bsigma_t)$. There are a path
$\bsigma_u \to \cdots \to \bsigma_s \in (V_0, E_0)$ and
$\bsigma_u \to \cdots \to \bsigma_t \in (V_k, E_k)$. Since $(V_k, E_k)$ is a subgraph
of $(V_0, E_0)$ and there is a unique path from a node to a root node in a tree,
the node $\bsigma_t$ always lies on the path
$\bsigma_u \to \cdots \to \bsigma_s$ in $(V_0, E_0)$, and this means that
$\bsigma_t \in R_k(\bsigma_s)$. This proves
$\odesc_k(\bsigma_s) = \bigsqcup_{\bsigma_t \in R_k(\bsigma_s)} \desc_k(\bsigma_t)$.
Finally, we show (iii). Order $R_k$ as $\{\bsigma_{s_1}, \ldots, \bsigma_{s_m}\}$ with
$s_1 < \cdots < s_m $.
Assume that
\begin{align}
\sum_{j=1}^m\lambda_j \hat{y}_{s_j}^{(k)} = 0 \label{eq:cohomindep_assumption}
\end{align}
where $\lambda_j \in \Zint_2$ and we show $\lambda_j = 0$ for all $j$.
Now we consider the equation
$\sum_{j=1}^m\lambda_j \hat{y}_{s_j}^{(k)}(\bsigma_{s_m}) = 0$
by applying \eqref{eq:cohomindep_assumption} to $\bsigma_{s_m}$.
Obviously, $\hat{y}_{s_m}^{(k)}(\bsigma_{s_m}) = 1$ since
$\bsigma_{s_m} \in \odesc_k(\bsigma_{s_m})$, and
$\hat{y}_{s_j}^{(k)}(\bsigma_{s_m}) = 0$ for any $1 \leq j < m$ since
$\bsigma_{s_m} \not \in \odesc_k(\bsigma_{s_j})$ by Fact~\ref{fact:tree_order}.
Therefore we have $\lambda_m = 0$. Repeating the argument, we obtain
$\lambda_{m-1} = \cdots = \lambda_{1} = 0$, and (iii) is shown.
\end{proof}
The following lemma
is easy to show from the definition of the map.
\begin{lem}\label{lem:include-cohom}
The map $\bar{\varphi}_k^\vee$ satisfies the following:
\begin{align*}
\bar{\varphi}_k^\vee(\hat{y}_s^{(k)}) = \hat{y}_s^{(k+1)}.
\end{align*}
\end{lem}
We also show the following lemma.
\begin{lem}\label{lem:pcohom-birthdeath}
If $(\bsigma_s \xrightarrow{k} \bsigma_t) \in E_0$, the following hold:
\begin{enumerate}[(i)]
\item $\hat{y}_s^{(u)} \not \in Z^0(Y_{u})$ for $u \leq k$
\item $\hat{y}_s^{(u)} \in Z^0(Y_{u})$ for $k+1 \leq u \leq s$
\item $\hat{y}_s^{(u)} \not = 0$ for $u \leq s$
\item $\hat{y}_s^{(u)} = 0$ for $u \geq s+1$
\end{enumerate}
\end{lem}
\begin{proof}
Since $\hat{y}_s^{(u)}$ is an element of the basis of $Z^0(Y_u)$ given in Lemma~\ref{lem:pcohom-basis}
for $k+1 \leq u \leq s$, we have (ii).
From Fact~\ref{fact:tree_order}, we have
$\desc_0(\bsigma_s) \subset \{\bsigma_1, \ldots, \bsigma_s\}$ and so
$\odesc_{s+1}(\bsigma_s) = \emptyset$, therefore (iv) is true.
Since $\bsigma_s \in \odesc_u(\bsigma_s)$ for any $u \leq s$ from the definition
of $\odesc_u(\bsigma)$, we have (iii).
From the theory of 0th cohomology,
$\hat{y}_s^{(u)} \in Z^0(Y_{u})$ if and only if
$\odesc_{u}(\bsigma_s)$ is a union of connected components of $Y_u$.
However,
from Fact~\ref{fact:subtree},
\begin{align*}
\odesc_{u}(\bsigma_s) = \desc_{u}(\bsigma_s) \text{ for } u \leq k,
\end{align*}
and from Fact~\ref{fact:graph-cc} this set is a proper subset of $\desc_{u}(\bsigma_v)$,
where $\bsigma_v$ is the root of the tree which has $\bsigma_s$ as a node. Therefore
we have (i).
\end{proof}
The following lemma is required for the treatment of reduced persistent cohomology.
\begin{lem}\label{lem:cohom-reduced}\
\begin{enumerate}[(i)]
\item $(V_0, E_0)$ is a single tree.
\item The root of the single tree is $\bsigma_\infty$.
\item $\bsigma_\infty$ is a root of a tree in $(V_k, E_k)$ for any $k$.
\item $\hat{y}_{\infty}^{(k)} = \sum_{\bsigma_u:\textrm{0-simplex}, u> k} \bsigma_u^*$
\item $\tilde{H}^0(Y_k) = H^0(Y_{k}) /\left<\hat{y}_{\infty}^{(k)}\right>$
\item $\{[y_s^{(k)}]_{\left<\hat{y}_\infty^{(k)}\right>}\mid s\not = \infty,
\bsigma_s \in R_k\}$ and
$\{[\hat{y}_s^{(k)}]_{\left<\hat{y}_\infty^{(k)}\right>}\mid s\not = \infty,
\bsigma_s \in R_k\}$
are two bases of $\tilde{H}^0(Y_k)$
\end{enumerate}
\end{lem}
\begin{proof}
(i) comes from the connectivity of $Y$ in Condition~\ref{cond:cohom} and
Fact~\ref{fact:graph-cc}. (ii) and (iii) come from Fact~\ref{fact:tree_order}.
(iv) comes from the definition of $\hat{y}_{\infty}^{(k)}$ and (ii).
(v) comes from (iv) and the definition of reduced cohomology
\eqref{eq:cohomology0}.
Finally, we conclude (vi) from (i)--(v).
\end{proof}
Lemmas~\ref{lem:pcohom-basis}, \ref{lem:include-cohom}, \ref{lem:pcohom-birthdeath},
and \ref{lem:cohom-reduced}
lead to Theorem \ref{thm:tree0}.
\subsection{Merge-tree algorithm for $(n-1)$-th persistent homology}
\begin{proof}[Proof of Theorem~\ref{thm:vochd-alg}(i)]
Theorem~\ref{thm:vochd-alg}(i) is a direct consequence of
Theorem~\ref{thm:tree0} and
$\tilde{H}_{\ell-1}(\X) \simeq \tilde{H}^{n-\ell}(\bar{\X})$
by applying Algorithm~\ref{alg:tree-0-compute} to
$\X^+: X_0 \subset X_1 \subset \cdots\subset X_K $ in $S^n$
and its dual decomposition
$\bar{\X}^+:\bar{X}_0 \supset \cdots \supset \bar{X}_K = \{ \bsigma_\infty\}$.
To apply Theorem~\ref{thm:tree0}, we need to check that
$\bar{X}_0 = \{\bsigma_1, \ldots, \bsigma_K, \bsigma_\infty\}$ is connected,
and this is true since the dual decomposition is also a decomposition of $S^n$.
\end{proof}
\begin{proof}[Proofs of Theorem~\ref{thm:vochd-unique} and Theorem~\ref{thm:vochd-alg}(ii)]
We show that $x_b^{(d)} = \sum_{\bsigma_t \in \desc_b(\bsigma_d)} \sigma_t$
is a persistent volume for a birth-death pair $(b, d)$. \eqref{eq:vc-1} is shown by
$\desc_b(\bsigma_d) \subset \{\bsigma_{b+1}, \ldots, \bsigma_d\}$ from Fact~\ref{fact:tree_order} and $\bsigma_d \in \desc_b(\bsigma_d)$ from the definition of $\desc_b(\bsigma_d)$.
\eqref{eq:vc-2} and \eqref{eq:vc-3} are shown from
Lemma~\ref{lem:pcohom-birthdeath}(i) and (ii), and \eqref{eq:ccboundary}.
To prove the optimality of $x_b^{(d)}$, we show the following claim.
\begin{claim}
If $x$ is a persistent volume of $(b, d)$, $x_b^{(d)} \subset x$ holds.
\end{claim}
The claim immediately leads to Theorem~\ref{thm:vochd-unique}
and Theorem~\ref{thm:vochd-alg}(ii).
From Theorem~\ref{thm:good_voc}, $[\partial x]_b$ is well-defined and
nonzero. By the isomorphism for Alexander duality and
Lemma~\ref{lem:cohom-reduced}(vi),
there is $R \subset R_b$
such that
\begin{align*}
[\partial x]_b &= \Theta(\sum_{\bsigma_s \in R}
[y_s^{(b)}]_{\left<\hat{y}_\infty^{(b)}\right>}), \\
\bsigma_\infty & \not \in R.
\end{align*}
From the relation \eqref{eq:ccboundary} and the definition $x_s^{(b)} = \sum_{\bsigma_t \in \desc_b(\bsigma_s)} \sigma_t$,
\begin{align*}
[\partial x_s^{(b)}] = \Theta(
[y_s^{(b)}]_{\left<\hat{y}_\infty^{(b)}\right>})
\end{align*}
for any $s$ with $\bsigma_s \in R_b$. Therefore
\begin{align*}
[\partial x]_b = \sum_{\bsigma_s \in R} [\partial x_s^{(b)}]_b,
\end{align*}
and hence
\begin{align*}
\partial x + \sum_{\bsigma_s \in R} \partial x_s^{(b)} \in B_{n-1}(X_{b}),
\end{align*}
so there exists $w \in C_{n}(X_b)$ such that
\begin{align*}
\partial(x + \sum_{\bsigma_s \in R} x_s^{(b)} + w) = 0
\end{align*}
holds. Since $X_b$ is a simplicial complex embedded in $\R^n$, $Z_n(X_b) = 0$ and
\begin{align*}
x + \sum_{\bsigma_s \in R} x_s^{(b)} + w = 0.
\end{align*}
Since $x, x_s^{(b)} \in \left<\sigma_k: n\text{-simplex} \mid b < k \leq d \right>$ and
$w \in C_n(X_b) = \left<\sigma_k: n\text{-simplex} \mid k \leq b \right>$,
we have $w = 0$ and
\begin{align*}
x = \sum_{\bsigma_s \in R} x_s^{(b)}.
\end{align*}
From \eqref{eq:vc-1}, $x$ always contains the term $\sigma_d$, and so
$\bsigma_d \in R$. Since the supports of the $x_s^{(b)}$ are pairwise disjoint,
$x_b^{(d)} = x_d^{(b)} \subset x$, which finishes the proof of the claim.
\end{proof}
Theorem~\ref{thm:vochd-tree} and Theorem~\ref{thm:vochd-alg}(iii) immediately
follow from the definition of $x_b^{(d)}$ and the properties of the tree structure shown in
Section~\ref{subsec:treemerge-0-pcohom}.
\section*{Acknowledgements}
Dr. Nakamura provided
the data of the atomic configuration of amorphous silica used in the example.
Dr. Escolar provided the computation software for optimal cycles
on persistent homology. I thank them.
This work is partially supported by
JSPS KAKENHI Grant Number JP 16K17638,
JST CREST Mathematics15656429, and
Structural Materials for Innovation, Strategic Innovation Promotion Program
D72.
\bibliographystyle{unsrt}
\bibliography{voc}
\end{document} | {"config": "arxiv", "file": "1712.05103/main.tex"} |
TITLE: "Problems worthy of attack prove their worth by fighting back.”
QUESTION [4 upvotes]: That quote has been attributed to Piet Hein,
inventor of the Soma cube,
which is how I know of him.
Q. Is the attribution correct?
I wonder because the quote has a nice ring in English that it might not have in Danish,
his native language.
REPLY [1 votes]: Tæv uforknyt løs på problemerne, men vær forberedt på, at de tæver
igen. (Roughly: "Beat away at the problems undaunted, but be prepared that they beat back.")
It's a Piet Hein quote alright. It doesn't have quite the same ring to it in Danish.
In this section we shall state and prove
some results on the metric properties of a
smooth Fibonacci map. First we shall quickly state
the main tool needed for these estimates.
\subsection{The cross-ratio tool and the Koebe Principle}
Let $j\subset t$ be intervals and let $l,r$ be the components
of $t\setminus j$. Then
the cross-ratio of this pair of intervals is defined
as
$$C(t,j):=\frac{|t|}{|l|}\frac{|j|}{|r|}.$$
Let $f$ be a smooth function mapping
$t,l,j,r$ onto $T,L,J,R$ diffeomorphically.
Define
\[
B(f,t,j)=\frac{|T|\, |J|}{|t|\, |j|}\,\frac{|l|\, |r|}{|L|\, |R|}\
=\frac{C(T,J)}{C(t,j)}
.
\]
It is well known that if the {\it Schwarzian derivative} of $f$, i.e.,
$Sf=f'''/f'-3(f''/f')^2/2$, is negative then
$B(f,t,j)\ge 1$. It is easy to check that our map
$f(z)=z^\ell+c_1$ satisfies $Sf(x)<0$ for $x\in \rz$.
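Although it plays no role in the proofs, the reader may check the inequality
$B(f,t,j)\ge 1$ numerically for maps with negative Schwarzian derivative.
The following Python fragment (an illustration only; the sampling range is
arbitrary) tests it for $f(x)=x^\ell+c_1$ with $\ell=4$ on intervals where
$f$ is monotone; the additive constant $c_1$ cancels in $B$, so we simply
take $f(x)=x^4$:
\begin{verbatim}
import random

def B(f, t, j):                  # t = (a, d) contains j = (b, c)
    (a, d), (b, c) = t, j
    T, J = f(d) - f(a), f(c) - f(b)
    L, R = f(b) - f(a), f(d) - f(c)
    return (T * J * (b - a) * (d - c)) / ((d - a) * (c - b) * L * R)

f = lambda x: x**4
for _ in range(10000):
    a, b, c, d = sorted(random.uniform(0.1, 2.0) for _ in range(4))
    assert B(f, (a, d), (b, c)) >= 1.0 - 1e-9  # tolerance for rounding
\end{verbatim}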
We say that a set $t\subset \rz^k$
contains a {\it $\tau$-scaled} neighbourhood of a disc
$j\subset \rz^k$ with midpoint $x$ and radius $r$
if $t$ contains the ball around $x$ with radius $(1+\tau)r$.
\bigskip
\begin{prop} [Real Koebe Principle]
Let $Sf<0$. Then for any intervals $j\subset t$ and any $n$
for which $f^n|t$ is a diffeomorphism one has
the following.
If $f^n(t)$ contains a $\tau$-scaled
neighbourhood of $f^n(j)$
then
\beq
\label{koebee}
\frac{|Df^n(x)|}{|Df^n(y)|}\le
\left[\frac{1+\tau}{\tau}\right]^2
\eeq
for each $x,y\in j$.
Moreover, there exists a universal function $K(\tau)>0$
which does not depend on $f$, $n$ and $t$
such that
$$|l|,|r|\ge K(\tau)\cdot |j|.$$
\end{prop}
\subsection{The bounds}
Bounds on the relative position of the points $u_n$
and $d_n=c_{S_n}$ are essential in this paper.
They are given in the following theorem.
(All the results in this section
also hold if $f$ is a $C^2$ Fibonacci map
using the disjointness statements as in
\cite{BKNS}.)
\bigskip
\begin{theo}[The real bounds]
\label{realbounds}
There exists $\ell_0\ge 4$ such that
if $f$ is a real unimodal Fibonacci map with a critical
point of order $\ell\ge \ell_0$ with $Sf<0$, then there exist
universal constants
$0<\lambda<\mu\in (0,1)$
such that the ratio between two consecutive terms
$$|d_{n+1}^f-c_1|<|u_n^f-c_1|< |z_{n-1}^f-c_1|<|d_n^f-c_1|$$
is between $\lambda$ and $\mu$ for all $n$ sufficiently large.
In fact, all the distances in the bottom
part of Figure~\ref{mappp} are of the same order. From this
it follows that the distances near $c$ as stated in the
caption of this figure.
Moreover,
$$\frac{|d_{n-2}^f-c_1|}{|d_n^f-c_1|}\ge 3.85 $$
and therefore
$$\frac{|d_{n-4}^f-c_1|}{|d_n^f-c_1|}\ge 14 $$
for all $n$ sufficiently large.
\end{theo}
\pr The last two inequalities can be found in \cite{fibo}
and also in Lemma 3.3 in \cite{BKNS}.
In Theorem 3.1 of \cite{BKNS} it is shown that
$$\frac{|d^f_n-c_1|}{|u_n^f-c_1|},
\frac{|d^f_n-c_1|}{|d_{n+1}^f-c_1|}\text{ and }
\frac{|u^f_n-c_1|}{|u_{n+1}^f-c_1|}
$$
are bounded and bounded away from one.
Hence there exist uniform constants $C_1,C_2$ such that
$$\frac{C_1}{\ell}
\le
\frac{|d_n-u_n|}{|u_n-c|},
\frac{|d_n-c|-|d_{n+1}-c|}{|d_n-c|},
\frac{|u_n-c|-|u_{n+1}-c|}{|u_n-c|}
\le \frac{C_2}{\ell}$$
for all $n$ large.
From this, by considering the
map drawn in Figure~\ref{mappp} and by
the Koebe Principle one obtains that all distances
are comparable in size. For example,
these inequalities imply that
$[d_{n-2},c]$ is a uniformly scaled neighbourhood of $[u_{n-2},d_{n+2}]$
and by Koebe it follows that
$[z_{n-1}^f,z_n^f]$ is also a scaled neighbourhood of
$[u_n^f,d_{n+1}^f]$. Hence
$$\frac{|z_{n-1}^f-c_1|}{|u_n^f-c_1|}\text{ and }
\frac{|d_{n+1}^f-c_1|}{|z_n^f-c_1|}$$
are both bounded away from one. Continuing in this way
the theorem follows.
\qed
\bigskip
In fact, we should remark that the last theorem holds for $\ell_0 =4$.
We shall not need this however, and
since the necessary real bounds are only proved in
\cite{BKNS} for $\ell_0$ sufficiently large we only claim the
existence of such an integer $\ell_0$.
We should point out that the previous theorem is false
if $\ell=2$. In that case, $|u_n^f-c_1|/|u_{n+1}^f-c_1|$
goes exponentially fast to infinity, see
\cite{LM} and \cite{fibo}.
\begin{figure}[htp]
\vskip 0.7cm
\hbox to \hsize{\hss\unitlength=5mm
\beginpic(20,4)(-20,0) \let\ts\textstyle
\put(4,4){\line(-1,0){30}}
\put(4,3.8){\line(0,1){0.4}} \put(4.2,4.8){$d_{n-2}$}
\put(2.7,3.8){\line(0,1){0.4}} \put(2.5,4.8){$u_{n-2}$}
\put(1.3,3.8){\line(0,1){0.4}} \put(1,4.8){$u_{n-1}$}
\put(-0.2,3.8){\line(0,1){0.4}} \put(-0.4,4.8){$\hat u_{n}$}
\put(-2.2,3.8){\line(0,1){0.4}} \put(-2.8,4.8){$d_{n+2}$}
\put(-5,3.8){\line(0,1){0.4}}
\put(-5.05,3.8){\line(0,1){0.4}}
\put(-5.3,4.8){$c$}
\put(-7,3.8){\line(0,1){0.4}} \put(-7.2,4.8){$d_{n+4}$}
\put(-10,3.8){\line(0,1){0.4}} \put(-10.2,4.8){$u_{n}$}
\put(-12.5,3.8){\line(0,1){0.4}} \put(-12.7,4.8){$y_{n}$}
\put(-15,3.8){\line(0,1){0.4}} \put(-15.3,4.8){$z_{n-1}$}
\put(-18,3.8){\line(0,1){0.4}} \put(-18.2,4.8){$d_{n}$}
\put(-20.4,3.8){\line(0,1){0.4}} \put(-20.6,4.8){$z_{n-2}$}
\put(-22.5,3.8){\line(0,1){0.4}} \put(-22.7,4.8){$\hat u_{n-1}$}
\put(-24,3.8){\line(0,1){0.4}} \put(-24.3,4.8){$\hat u_{n-2}$}
\put(-26,3.8){\line(0,1){0.4}} \put(-26.3,4.8){$d_{n-4}$}
\put(6,0.2){\line(-1,0){32}}
\put(6,0){\line(0,1){0.4}} \put(5.8,-0.8){$u_{n-1}^f$}
\put(4,0){\line(0,1){0.4}} \put(3.8,-0.8){$z_{n-1}^f$}
\put(2.7,0){\line(0,1){0.4}} \put(2.5,-0.8){$u_{n}^f$}
\put(1.3,0){\line(0,1){0.4}} \put(1,-0.8){$v_n^f$}
\put(-0.2,0){\line(0,1){0.4}} \put(-0.7,-0.8){$x_{n+1}^f$}
\put(-2.2,0){\line(0,1){0.4}} \put(-2.8,-0.8){$d_{n+1}^f$}
\put(-5,0){\line(0,1){0.4}} \put(-5.2,-0.8){$z_{n}^f$}
\put(-7,0){\line(0,1){0.4}} \put(-7.2,-0.8){$y_{n+1}^f$}
\put(-10,0){\line(0,1){0.4}} \put(-10.2,-0.8){$u_{n+1}^f$}
\put(-12.5,0){\line(0,1){0.4}} \put(-12.7,-0.8){$d_{n+2}^f$}
\put(-15,0){\line(0,1){0.4}} \put(-15.2,-0.8){$z_{n+1}^f$}
\put(-18,0){\line(0,1){0.4}} \put(-18.2,-0.8){$c_1$}
\put(-20.4,0){\line(0,1){0.4}} \put(-20.6,-0.8){$t_{n+1}^f$}
\put(-22.5,0){\line(0,1){0.4}} \put(-22.7,-0.8){$w_{n}^f$}
\put(-24,0){\line(0,1){0.4}} \put(-24.2,-0.8){$r_{n}^f$}
\put(-26,0){\line(0,1){0.4}} \put(-26.2,-0.8){$t_{n}^f$}
\put(-18,1.5){\vector(0,1){1.7}}
\put(-16.5,2){$f^{S_{n}-1}$ }
\endpic\hss}
\label{mappp}
\vskip 3mm
\caption[ ]{{\small
In the top figure the actual scaling
is completely different for large $\ell$: $|d_{n+2}-c|/|d_{n+4}-c|$ is of order
$1-C\frac{1}{\ell}$ whereas the mutual distance of all points
in the top figure on one component of $\rz\setminus \{c\}$
is of order $(C/\ell)|d_{n+2}-c|$.
All the distances between the marked points in the bottom figure
(which shows the situation near $c_1$) are of the same order.}}
\end{figure}
Let $T_n=(z_{n-1}^f,t_{n-1}^f)$ be the maximal interval
containing $c_1$ on which $f^{S_n-1}$ is a diffeomorphism
and let $w_n^f\in T_n$ be so that $f^{S_n}(w_n^f)=u_{n-1}^f$.
Then we have the following estimate,
see also Figure~\ref{nsf_qcr2}. This estimate will be needed in
Section~\ref{sectqcr}.
\begin{prop} [Bounds near $c_1$]
\label{43ineqprop}
There exists $\ell_0\ge 4$ such that
if $f$ is a real unimodal Fibonacci map with a critical
point of order $\ell\ge \ell_0$ and $Sf<0$, then
$$\frac{|u_{n-1}^f-c_1|}{|w_n^f-c_1|}\ge \frac{4}{3}$$
for all $n$ sufficiently large.
\end{prop}
\pr
To prove this proposition we use the following lemma.
\begin{lemma}
Let $J'\subset J\subset T$ be intervals
on which $f$ is a diffeomorphism and assume that $Sf<0$.
Then
\beq
\label{growcrr}
B(f,T,J)\ge B(f,T,J').
\eeq
Furthermore, if $f(x)=x^\ell$,
$T=[0,\gamma]$ and $J=[\alpha,\beta]\subset T$
then
$$B(f,T,J)\ge \ell (1-\frac{\alpha}{\gamma}).$$
\end{lemma}
\pr
We may assume that one boundary of $J'$ coincides with one boundary of
$J$ (by applying the lemma twice in this situation
we get the lemma also for general intervals $J$).
Let $L'$ and $R'$ be the components of $T\setminus J'$ which
are labeled so that $R'$ and $R$ both lie on the right hand side of $J$
and $J'$. In order to be definite, assume that the left endpoints of
$J'$ and $J$ coincide. This means that $L'=L$.
It follows that (\ref{growcrr}) is equivalent
to
$$\frac{|f(J)||f(R')|}{|f(J')||f(R)|} \ge
\frac{|J||R'|}{|J'||R|}.$$
If we define $\hat T=J\cup R$, $\hat L=J'$, $\hat J=J\setminus J'$
and $\hat R=R'$ then this last inequality becomes
$$
\frac{|f(\hat L\cup \hat J)||f(\hat J \cup \hat R)|}
{|f(\hat L)||f(\hat R)|}
\ge
\frac{|\hat L\cup \hat J||\hat J \cup \hat R|}
{|\hat L||\hat R|}
$$
which is equivalent to the usual cross-ratio expansion:
$$\frac{|f(\hat J)||f(\hat T)|}
{|f(\hat L)||f(\hat R)|}
\ge
\frac{|\hat J||\hat T|}
{|\hat L||\hat R|}.$$ This completes the proof of the first
part of the lemma.
It follows from the first part that we may assume that
$\beta=\alpha$.
Since $f(x)=x^\ell$,
$$B(f,(0,\gamma),\{\alpha\})=
\frac{\gamma^\ell}{\gamma}\cdot \ell\alpha^{\ell-1}
\cdot \frac{\alpha}{\alpha^\ell}\cdot
\frac{\gamma-\alpha}{\gamma^\ell-\alpha^\ell}=
\ell(1-\frac{\alpha}{\gamma})\cdot
\frac{\gamma^\ell}{\gamma^\ell-\alpha^\ell}
\ge \ell(1-\frac{\alpha}{\gamma}).$$
This completes the proof of this lemma.
\qed
\noindent
{\em Proof of Proposition \ref{43ineqprop}:}
Now we can prove the previous proposition.
\beqas
&&
B\left(f^{S_n}, (t_n^f,z_n^f),(c_1,w_n^f)\right)\\
&&\quad =
B\left(f^{S_n-1},(t_n^f,z_n^f),(c_1,w_n^f)\right)
\cdot B\left(f,(d_{n-4},c),(d_n,\hat u_{n-1})\right)
\\
&&\quad \ge 1\cdot \ell (1-(\frac{|d_n^f-c_1|}{|d_{n-4}^f-c_1|})^{1/\ell})
\ge
\ell ( 1- (\frac{1}{14})^{1/\ell})
\ge 4 ( 1- (\frac{1}{14})^{1/4})> 1.9
\eeqas
where we have used the previous lemma, the inequality
from Theorem~\ref{realbounds} and $\ell\ge 4$.
Now $f^{S_n}(t_n^f)=d_{n-4}^f$, $f^{S_n}(z_n^f)=c_1$,
$f^{S_n}(c_1)=d_n^f$, and $f^{S_n}(w_n^f)=u_{n-1}^f$.
Rewriting this last inequality and using the order structure
of the points on the real line
gives
\beqas
\frac{|u_{n-1}^f-c_1|}{|w_n^f-c_1|}
&\ge & 1.9 \cdot
\frac{|d_{n-4}^f-u_{n-1}^f|}{|d_{n-4}^f-c_1|}
\cdot
\frac{|u_{n-1}^f-c_1|}{|u_{n-1}^f-d_n^f|}
\cdot
\frac{|d_n^f-c_1|}{|z_n^f-c_1|}
\cdot
\frac{|t_n^f-z_n^f|}{|t_n^f-w_n^f|}
\\
&\ge & 1.9 \cdot
\frac{|d_{n-4}^f-d_{n-2}^f|}{|d_{n-4}^f-c_1|}
\cdot
1 \cdot 1 \cdot 1
\ge 1.9 \cdot \left(1-\frac{1}{3.85}\right)\ge \frac{4}{3}.
\eeqas
\qed
The next bounds require that we already know the map
satisfies some renormalization properties,
and is used in Section~\ref{nested} to prove
that certain discs really lie nested.
\medskip
\begin{prop} [Improved bounds near $c_1$
if renormalization holds]
\label{boundifreno}
If $\ell\ge \ell_0$, $f$ is as above and
\beq
\lim_{n\to \infty } \,\,\, \frac{|d_n-c|/|d_{n-2}-c|}
{|d_{n-2}-c|/|d_{n-4}-c|}\to 1,
\label{renolimits}
\eeq
then we have the following property.
If $z_{n-1}^f<l_n^f<c_1<s_n^f<t_n^f$ are such that
$$|d_n-c|<|f^{S_n-1}(l_n^f)-c|=|f^{S_n-1}(s_n^f)-c|$$
then
$$\liminf_{n\to \infty}\frac{|l_n^f-c_1|}{|s_n^f-c_1|}\ge 1.$$
Moreover, (\ref{renolimits}) implies that
if we take $l_n^f=u_n^f$ and
$r_n^f\in (c_1,t_n^f)\subset T_n$ so that
$f^{S_n}(r_n^f)=\hat u_{n-2}$, then
$|f^{S_n}(r_n^f)-c|=|u_{n-2}-c|=|f^{S_n-1}(u_n^f)-c|$ and
$$\liminf_{n\to \infty} \frac{|u_n^f-c_1|}{|r_n^f-c_1|}>1.$$
\end{prop}
\pr
Consider $f^{S_n-1}$ on $t=(l_n^f,t_n^f)$
and let $j=(c_1,s_n^f)$, $l=(l_n^f,c_1)$
and $r=(s_n^f,t_n^f)$.
\begin{figure}[htp]
\hbox to \hsize{\hss\unitlength=6mm
\beginpic(20,8)(-20,1) \let\ts\textstyle
\put(-7,7){\line(-1,0){13}}
\put(-10,6.8){\line(0,1){0.4}} \put(-10.2,6.3){\small $z_{n-1}^f$}
\put(-13,6.8){\line(0,1){0.4}} \put(-13.2,6.3){\small $l_n^f$}
\put(-15,6.8){\line(0,1){0.4}} \put(-15.2,6.3){\small $c_1$}
\put(-17,6.8){\line(0,1){0.4}} \put(-17.2,6.3){\small $s_n^f$}
\put(-19,6.8){\line(0,1){0.4}} \put(-19.2,6.3){\small $t_n^f$}
\put(-19.05,6.8){\line(0,1){0.4}}
\put(-17.05,6.8){\line(0,1){0.4}}
\put(-15.05,6.8){\line(0,1){0.4}}
\put(-13.05,6.8){\line(0,1){0.4}}
\put(-13.9,7.3){\small $l$} \put(-16,7.3){\small $j$} \put(-18,7.3){\small $r$}
\put(0,3){\line(-1,0){17}}
\put(-0.2,2.8){\line(0,1){0.4}} \put(-0.4,2){\small $d_{n-2}$}
\put(-3,2.8){\line(0,1){0.4}} \put(-3.8,2){\small $f^{S_n-1}(l_n^f)$}
\put(-6,2.8){\line(0,1){0.4}} \put(-6.2,2){\small $d_n$}
\put(-10,2.8){\line(0,1){0.4}} \put(-10.6,2){\small $f^{S_n-1}(s_n^f)$}
\put(-13,2.8){\line(0,1){0.4}} \put(-13.2,2){\small $d_{n-4}$}
\put(-2.958,2.8){\line(0,1){0.4}}
\put(-5.958,2.8){\line(0,1){0.4}}
\put(-9.958,2.8){\line(0,1){0.4}}
\put(-12.958,2.8){\line(0,1){0.4}}
\put(-3.8,3.3){\small $L$} \put(-8,3.3){\small $J$} \put(-11.5,3.3){\small $R$}
\put(-11,6){\vector(1,-1){2}} \put(-8,5){\small $f^{S_n-1}$}
\endpic\hss}
\caption[ ]{{\small The proof of Proposition~\ref{boundifreno}}}
\end{figure}
\bigskip
Write $a=|f^{S_n-1}(l_n^f)-c|=|f^{S_n-1}(s_n^f)-c|$.
Then $|T|=|d_{n-4}-c|+a$, $|L|=a+|d_n-c|$, $|J|=a-|d_n-c|$
and $|R|=|d_{n-4}-c|-a$.
Using the cross-ratio inequality gives
\beqas
\frac{|l_n^f-c_1|}{|s_n^f-c_1|}
&=&
\frac{|l|}{|j|}
\ge
\frac{|L|}{|T|}\frac{|R|}{|J|}
=
\left(\frac{|d_n-c|+a}{|d_{n-4}-c|+a}\right)
\left(\frac{|d_{n-4}-c|-a}{a-|d_n-c|}\right)\\
&\ge&
\left(\frac{|d_n-c|+|d_{n-2}-c|}{|d_{n-4}-c|+|d_{n-2}-c|}\right)
\left(\frac{|d_{n-4}-c|-|d_{n-2}-c|}{|d_{n-2}-c|-|d_n-c|}\right)
\to 1\text{ as }n\to \infty .
\eeqas
Here we have used that the fourth expression is decreasing
in $a\in (0,|d_{n-2}-c|)$ and in the last limit that
(\ref{renolimits}) holds.
To prove the last assertion of the proposition,
note that because of Theorem~\ref{realbounds},
$\limsup_{n\to \infty}\frac{|u_{n-2}-c|}{|d_{n-2}-c|}<1$.
Hence in the second inequality above one has in fact a gain
by a factor which is uniformly strictly larger than one.
\qed
\endinput
\begin{prop}
We have the following estimates.
Consider the $f^{S_n}-1$ near the critical value $c_1\in (z_{n-1}^f,t^f)$.
Let $\eta^f\in(z_{n}^f,c_1)$, $u^f\in(z_{n-1}^f,c_1)$
and $\bar u^f\in (c_1,t^f)$ such that $f^{S_n}(u^f)=f^{S_n}(\bar u^f)$,
i.e. $f^{S_n-1}(u^f)=\widehat{f^{S_n-1}(\bar u^f)}$.
Then
\beqa
Df^{S_n}(\eta^f)&\le&\O(1)
\ell\frac{|\c{n-4}-\c{n}|}{|\c{n-4}-f^{S_n-1}(\eta^f)|}
\frac{|\c{n}-f^{S_n-1}(\eta^f)|}{|f^{S_n-1}(\eta^f)-c|}
\frac{|f^{S_n}(\eta^f)-c_1|}{|\eta^f-c_1|}
\\
Df^{S_n}(c_1)&\le&\O(1)\ell\frac{|\c{n}-f^{S_n-1}(\eta^f)|}{|\c{n}-c|}
\frac{|\c{n}-c|}{|f^{S_n-1}(\eta^f)-c|}
\frac{|\c{n}^f-c_1|}{|\eta^f-c_1|}
\\
Df^{S_n}(c_1)&\ge&\O(1)\ell\frac{|\c{n-4}-\c{n}|}{|\c{n-4}-c|}
\frac{|\c{n}^f-c_1|}{|z_{n}^f-c_1|}
\\
\frac{|\bar u^f -c_1|}{|u^f-c_1|}&\le&
\frac{|f^{S_n-1}(\bar u^f)-\c{n}|}{|f^{S_n-1}(u^f)-\c{n}|}
\frac{|f^{S_n-1}(u^f)-\c{n-4}|}{|f^{S_n-1}(\bar u^f)-\c{n-4}|}
\eeqa
\end{prop} | {"config": "arxiv", "file": "math9402215/ns_est.tex"} |
TITLE: Inequality deduced from relatively prime numbers.
QUESTION [3 upvotes]: If $a_n \text{ and }b_n$ are relatively prime for all $n$ and
$$\frac{a_n}{b_n}=\frac{1}{n}+\frac{1}{n(n+1)}+\frac{1}{n(n+1)(n+2)}+\cdots$$
Deduce that
$$b_n\geq b_{n+1}$$
CURRENT THOUGHTS
I can show that
$$\frac{a_{n+1}}{b_{n+1}}=\frac{na_n-b_n}{b_n}$$
and making $b_n$ the subject
$$b_n=(\frac{na_n}{a_{n+1}+b_{n+1}})b_{n+1}$$
so it would suffice to show that
$$na_n\geq a_{n+1}+b_{n+1}$$
which would appear to be true in general as $n$ gets large. But other than this, I am unsure how to proceed?
REPLY [1 votes]: I believe this statement naturally follows from the fact that $a_n$ and $b_n$ are defined to be coprime for all $n$.
i.e.
If
$$\frac{a_{n+1}}{b_{n+1}}=\frac{na_n-b_n}{b_n}$$
then, since $\frac{a_{n+1}}{b_{n+1}}$ is the lowest-terms form of $\frac{na_n-b_n}{b_n}$, the denominator $b_{n+1}$ must divide $b_n$, and hence $b_n \geq b_{n+1}$.
Hope this helps!
If I am not mistaken, the next steps in this proof will be to conclude that
$$\frac{a_n}{b_n}>\frac{a_{n+1}}{b_{n+1}}\Rightarrow a_{n+1}<a_n$$
And we obtain a contradiction by
$$a_1>a_2>a_3>\cdots >0$$
Since it is not possible to have an infinite decreasing sequence of positive integers?
Very nice! | {"set_name": "stack_exchange", "score": 3, "question_id": 3048194} |
TITLE: Ratio of rolled numbers
QUESTION [2 upvotes]: Let's roll a die four times and let $X_{k}$ $\left(k=1,2,3,4\right)$ denote the rolled number during the $k$th roll. How can we calculate
$$\mathbb{E}\left(\frac{X_{k}}{\min\left\{ X_{1},X_{2},X_{3},X_{4}\right\} }\right)=?$$
I know the CDF of the denominator can be calculated as $F_{Y}\left(i\right)=\left(F_{X_{k}}\left(i\right)\right)^{4}$, where $Y\overset{\circ}{=}\min\left\{ X_{1},X_{2},X_{3},X_{4}\right\}$, so
$$\mathbb{P}\left(Y=i\right)=\left(\frac{i}{6}\right)^{4}-\left(\frac{i-1}{6}\right)^{4},\;\;\;i=1,2,\ldots,6.$$
(Here I define the CDF as a right continuous function.) I wanted to calculate $\mathbb{E}\left(\frac{X_{k}}{Y}\right)$ with the law of total expectation:
$$\mathbb{E}\left(\frac{X_{k}}{Y}\right)=\mathbb{E}\left(\mathbb{E}\left(\frac{X_{k}}{Y}\mid Y\right)\right)=\mathbb{E}\left(\frac{1}{Y}\mathbb{E}\left(X_{k}\mid Y\right)\right).$$
From this expression:
$$\mathbb{E}\left(X_{k}\mid Y=i\right)=\frac{1}{7-i}\sum_{j=i}^{6}j,\;\;\;i=1,2,\ldots,6.$$
So using everything above:
$$\mathbb{E}\left(\frac{X_{k}}{Y}\right)=\mathbb{E}\left(\frac{1}{Y}\mathbb{E}\left(X_{k}\mid Y\right)\right)=\sum_{i=1}^{6}\left[\left(\frac{1}{i}\cdot\frac{1}{7-i}\sum_{j=i}^{6}j\right)\left(\left(\frac{i}{6}\right)^{4}-\left(\frac{i-1}{6}\right)^{4}\right)\right],$$
which is approximately $1.008$. But it doesn't feel good. Anyway, I ran MC simulations where $\mathbb{E}\left(\frac{X_{k}}{Y}\right)\widetilde{=}2.35$. Where do I go wrong with my calculation?
REPLY [1 votes]: Your expression for $\ \mathbb{E}\big(X_k\,|\,Y=y\big)\ $ is incorrect. You appear to have assumed that given $\ Y=y\ $, $\ X_k\ $ is equally likely to take any of the values from $\ y\ $ to $\ 6\ $. Unfortunately, that isn't the case (except for $\ y=6\ $, when
$\ X_k\ $ could only have the value $6$). If $\ y<6\ $, then given $\ Y=y\ $, $\ X_k\ $ is equally likely to take any of the values from $\ y\color{red}{+1}\ $ to $6$, but more likely than that to have the value $\ y\ $. In fact, for $\ x>y\ $,
$$
\frac{P\big(X_k=x\,\big|\,Y=y\big)}{P\big(X_k=y\,\big|\,Y=y\big)}=1-\left(\frac{6-y}{7-y}\right)^3\ .
$$
The equations $\ X_k=Y=y\ $ hold if and only if $\ X_k=y\ $ and $\ \min_\limits{j\ne k}(X_j)\ge y\ $. Since $\ X_k\ $ and $\ \min_\limits{j\ne k}(X_j)\ $ are independent, this has probability
\begin{align}
P\big(X_k=y,Y=y\big)&=P\left(X_k= y,\min_\limits{j\ne k}(X_j)\ge y\right)\\
&=P\big(X_k=y\big)P\left(\min_\limits{j\ne k}(X_j)\ge y\right)\\
&=\frac{1}{6}\left(\frac{7-y}{6}\right)^3
\end{align}
Likewise, if $\ x>y\ $, then the equations $\ X_k=x, Y=y\ $ hold if and only if $\ X_k=x\ $ and $\ \min_\limits{j\ne k}(X_j)= y\ $. Therefore,
\begin{align}
P\big(X_k=x,Y=y\big)&=P\left(X_k= x,\min_\limits{j\ne k}(X_j)= y\right)\\
&=P\big(X_k=x\big)P\left(\min_\limits{j\ne k}(X_j)= y\right)\\
&=\frac{1}{6}\left(\left(\frac{7-y}{6}\right)^3-\left(\frac{6-y}{6}\right)^3\right)\ ,
\end{align}
and the joint probability mass function of $\ X_k\ $ and $\ Y\ $ is given by
\begin{align}
P\big(X_i=x,Y=y\big)=\cases{0&if $\ x<y$\\
\frac{1}{6}\left(\frac{7-y}{6}\right)^3&if $\ x=y$\\
\frac{1}{6}\left(\left(\frac{7-y}{6}\right)^3-\left(\frac{6-y}{6}\right)^3\right)&if $\ x>y$}
\end{align}
This gives
\begin{align}
\mathbb{E}\left(\frac{X_k}{Y}\right)&=\sum_{y=1}^6\sum_{x=1}^6\left(\frac{x}{y}\right)P\big(X_i=x,\,Y=y\big)\\
&=\sum_{y=1}^6\left(\frac{1}{6}\left(\frac{7-y}{6}\right)^3+\sum_{x=y+1}^6\frac{x}{6y}\left(\left(\frac{7-y}{6}\right)^3-\left(\frac{6-y}{6}\right)^3\right)\right)\\
&=\frac{20371}{8640}\\
&\approx2.35775\ ,
\end{align}
close to the value you obtained from your simulation. This coincides with the value obtained by the different method used in my earlier less complete answer retained below.
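As a sanity check, the exact value is easy to confirm in Python by brute force over all $\ 6^4\ $ equally likely outcomes (the first roll plays the role of $\ X_k\ $):

    from fractions import Fraction
    from itertools import product

    total = sum(Fraction(r[0], min(r)) for r in product(range(1, 7), repeat=4))
    print(total / 6**4)   # 20371/8640, approximately 2.35775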
As it happens, in this case the law of total expectation isn't much help in evaluating $\ \mathbb{E}\left(\frac{X_k}{Y}\right)\ $. While you can do it that way, it requires you to calculate the joint probability mass function $\ P\big(X_k=x,Y=y\big)\ $ first, divide it by $\ P(Y=y)=$$\,\left(\frac{7-y}{6}\right)^4-\left(\frac{6-y}{6}\right)^4\ $ to obtain the conditional probability $\ P\big(X_k=x\,\big|\,Y=y\big)\ $, and then sum $\ xP\big(X_k=x\,\big|\,Y=y\big)\ $ over $\ x\ $ to obtain the conditional expectation $\ \mathbb{E}\big(X_k\,\big|\,Y=y\big)\ $. Now to get $\ \mathbb{E}\left(\frac{1}{Y}\mathbb{E}\big(X_{k}\,\big|\, Y\big)\right)\ $, you have to divide $\ \mathbb{E}\big(X_k\,\big|\,Y=y\big)\ $ by $\ y\ $ and multiply it by the factor $\ P(Y=y\,)\ $ for each $\ y\ $ and sum over $\ y\ $. Since the factor $\ P(Y=y\,)\ $ is the same one you'd originally divided
$\ P\big(X_k=x,Y=y\big)\ $ by, it's simpler to avoid this unnecessary division and multiplication by working directly with the joint probability mass function.
For what it's worth, the correct expression for $\ \mathbb{E}\big(X_k\,|\,Y=y\big)\ $ is
$$
F(y)^{-1}\left(\frac{y}{6}\left(\frac{7-y}{6}\right)^3+\sum_{x=y+1}^6\frac{x}{6}\left(\left(\frac{7-y}{6}\right)^3-\left(\frac{6-y}{6}\right)^3\right)\right)
$$
where $\ F(y)=P(Y=y)=$$\,\left(\frac{7-y}{6}\right)^4-\left(\frac{6-y}{6}\right)^4\ $.
Earlier less complete answer
If $\ Z_i=\frac{X_i}{\min_\limits{j=1,2,3,4}(X_j)}\ $, then $\ Z_i\ $ are identically distributed discrete random variables which can only assume one of the values $\ 1$,$\,\frac{6}{5}$,$\,\frac{5}{4}$,$\,\frac{4}{3}$,$\,\frac{3}{2}$,$\,\frac{5}{3}$,$\,2$,$\,\frac{5}{2}$,$\,3$,$\,4$,$\,5$, or $\ 6$. For $\ z=\frac{6}{5}$,$\,\frac{5}{4}$,$\,\frac{4}{3}$,$\,\frac{5}{3}$,$\,\frac{5}{2}$,$\,4$,$\,5$, or $\ 6 \ $, $\ Z_i\ $ can only assume the value $\ z\ $ if $\ X_i\ $ assumes the value of the numerator of the fraction and $\ \min_\limits{j\ne i}(X_j)\ $ assumes the value of the denominator:
\begin{align}
P\left(Z_i=\frac{p}{q}
\right)&=P\big(X_i=p,\min_\limits{j\ne i}(X_j)=q\big)\\
&=\frac{1}{6}\left(\left(\frac{7-q}{6}\right)^3-\left(\frac{6-q}{6}\right)^3\right)
\end{align}
For the other possible values of $\ Z_i\ $, we have:
\begin{align}
P\big(Z_i=1\big)&=\sum_{k=1}^6P\left(X_i=k,\,k\le\min_\limits{j\ne i}\big(X_j\big)\right)\\
&=\frac{1}{6}\sum_{k=1}^6\left(\frac{7-k}{6}\right)^3\\
&=\frac{1}{6}\sum_{i=1}^6\left(\frac{i}{6}\right)^3\\
P\left(Z_i=\frac{3}{2}\right)&=P\big(\big\{X_i=6,\min_\limits{j\ne i}\big(X_j\big)=4\big\}\cup\big\{X_i=3,\min_\limits{j\ne i}\big(X_j\big)=2\big\}\big)\\
&=\frac{1}{6}\left(\left(\frac{1}{2}\right)^3-\left(\frac{1}{3}\right)^3\right)+\frac{1}{6}\left(\left(\frac{5}{6}\right)^3-\left(\frac{2}{3}\right)^3\right)\\
P\big(Z_i=2\big)&=P\big(\big\{X_i=6,\min_\limits{j\ne i}\big(X_j\big)=3\big\}\\
&\hspace{3em}\cup\big\{X_i=4,\min_\limits{j\ne i}\big(X_j\big)=2\big\}\\
&\hspace{6em}\cup\big\{X_i=2,\min_\limits{j\ne i}\big(X_j\big)=1\big\}\big)\\
&=\frac{1}{6}\left(\left(\frac{2}{3}\right)^3-\left(\frac{1}{2}\right)^3\right)+\frac{1}{6}\left(\left(\frac{5}{6}\right)^3-\left(\frac{2}{3}\right)^3\right)+\frac{1}{6}\left(1-\left(\frac{5}{6}\right)^3\right)\\
&=\frac{1}{6}\left(1-\left(\frac{1}{2}\right)^3\right)\\
P\left(Z_i=3\right)&=P\big(\big\{X_i=6,\min_\limits{j\ne i}\big(X_j\big)=2\big\}\cup\big\{X_i=3,\min_\limits{j\ne i}\big(X_j\big)=1\big\}\big)\\
&=\frac{1}{6}\left(\left(\frac{5}{6}\right)^3-\left(\frac{2}{3}\right)^3\right)+\frac{1}{6}\left(1-\left(\frac{5}{6}\right)^3\right)\\
&=\frac{1}{6}\left(1-\left(\frac{2}{3}\right)^3\right)
\end{align}
Here are these values aranged in a table
\begin{array}{c|c|c|}
z&P(Z_i=z)&\text{value}\\
\hline
1&\frac{1}{6}\sum_\limits{i=1}^6\left(\frac{i}{6}\right)^3&\frac{49}{144}\\
\hline
\frac{6}{5}&\frac{1}{6}\left(\left(\frac{1}{3}\right)^3-\left(\frac{1}{6}\right)^3\right)&\frac{7}{1296}\\
\hline
\frac{5}{4}&\frac{1}{6}\left(\left(\frac{1}{2}\right)^3-\left(\frac{1}{3}\right)^3\right)&\frac{19}{1296}\\
\hline
\frac{4}{3}&\frac{1}{6}\left(\left(\frac{2}{3}\right)^3-\left(\frac{1}{2}\right)^3\right)&\frac{37}{1296}\\
\hline
\frac{3}{2}&\frac{1}{6}\left(\left(\frac{1}{2}\right)^3+\left(\frac{5}{6}\right)^3-\left(\frac{1}{3}\right)^3-\left(\frac{2}{3}\right)^3\right)&\frac{5}{81}\\
\hline
\frac{5}{3}&\frac{1}{6}\left(\left(\frac{2}{3}\right)^3-\left(\frac{1}{2}\right)^3\right)&\frac{37}{1296}\\
\hline
2&\frac{1}{6}\left(1-\left(\frac{1}{2}\right)^3\right)&\frac{7}{48}\\
\hline
\frac{5}{2}&
\frac{1}{6}\left(\left(\frac{5}{6}\right)^3-\left(\frac{2}{3}\right)^3\right)&\frac{61}{1296}\\
\hline
3&\frac{1}{6}\left(1-\left(\frac{2}{3}\right)^3\right)&\frac{19}{162}\\
\hline
4&\frac{1}{6}\left(1-\left(\frac{5}{6}\right)^3\right)&\frac{91}{1296}\\
\hline
5&\frac{1}{6}\left(1-\left(\frac{5}{6}\right)^3\right)&\frac{91}{1296}\\
\hline
6&\frac{1}{6}\left(1-\left(\frac{5}{6}\right)^3\right)&\frac{91}{1296}\\
\hline
\end{array}
Computing $\ \mathbb{E}\big(Z_i\big) $ from the figures in this table gives
\begin{align}
\mathbb{E}\big(Z_i\big)&=\frac{20371}{8640}\\
&\approx 2.35775\ ,
\end{align}
close to the value you obtained from your simulations. | {"set_name": "stack_exchange", "score": 2, "question_id": 4549916} |
\begin{document}
\title{Uniform bounds of Piltz divisor problem over number fields}
\author{Wataru Takeda}
\address{Department of Mathematics, Nagoya University, Chikusa-ku, Nagoya 464-8602,
Japan}
\email{d18002r@math.nagoya-u.ac.jp}
\subjclass[2010]{11N45 (primary),11R42, 11H06,11P21 (secondary)}
\keywords{ideal counting function, exponential sum, Piltz divisor problem}
\begin{abstract}
We consider the upper bound of Piltz divisor problem over number fields. Piltz divisor problem is known as a generalization of the Dirichlet divisor problem. We deal with this problem over number fields and improve the error term of this function for many cases. Our proof uses the estimate of exponential sums. We also show uniform results for ideal counting function and relatively $r$-prime lattice points as one of applications.
\end{abstract}
\maketitle
\section{Introduction}
The behavior of arithmetic functions has long been studied, and it is one of the most important research topics in analytic number theory.
But many arithmetic functions $f(n)$ fluctuate as $n$ increases, and it becomes difficult to deal with them. Thus many authors study the partial sums $\sum_{n\le x}f(n)$ to obtain some information about the arithmetic functions $f(n)$.
In this paper we consider the Piltz divisor function $I_K^m(x)$ over number fields.
Let $K$ be a number field with extension degree $[K:\mathbf Q]=n$ and let $\mathcal{O}_K$ be its ring of integers. Let $D_K$ be the absolute value of the discriminant of $K$.
Then the Piltz divisor function $I_K^m(x)$ counts the number of $m$-tuples of ideals $(\mathfrak{a}_1, \mathfrak{a}_2,\ldots,\mathfrak{a}_m)$ such that the product of their ideal norms satisfies $\mathfrak{Na}_1\cdots\mathfrak{Na}_m\le x$.
It is known that \begin{equation} \label{res}I_K^m(x)\sim \underset{s=1}{Res}\ \left(\zeta_K(s)^m\frac {{x^s}}s\right).\end{equation}
We let $\Delta_K^m(x)$ denote the error term of $I_K^m(x)$, that is, $I_K^m(x)- \underset{s=1}{Res}\ \left(\zeta_K(s)^m \frac {{x^s}}s\right)$.
In the case $m=1$ this function is the ordinary ideal counting function over $K$. For simplicity we write $I_K(x)$ and $\Delta_K(x)$ for $I_K^1(x)$ and $\Delta_K^1(x)$, respectively.
There have been many results about $I_K(x)$ since the 1900's. In the case $K=\mathbf Q$, integer ideals of $\mathbf Z$ and positive integers are in one-to-one correspondence, so $I_{\mathbf Q}(x)=[x]$, where $[\cdot]$ is the Gauss symbol.
For the general case, the best estimate of $\Delta_K(x)$ hitherto is given in the following theorem:
\begin{theorem}
\label{idealhi}
The following estimates hold. For all $\varepsilon>0$
\begin{center}
\begin{tabular}{cll}
$n=[K:\mathbf Q]$&$\Delta_K(x)$&\rule[-2mm]{0mm}{6mm}\\
\hline
$2$&$O\left(x^{\frac{131}{416}}\left(\log x\right)^{\frac{18627}{8320}}\right)$&Huxley. \cite{Hu00}\rule[-2mm]{0mm}{6mm}\\
$3$&$O\left(x^{\frac{43}{96}+\varepsilon}\right)$&M\"uller. \cite{mu88}\rule[-2mm]{0mm}{6mm}\\
$4$&$O\left(x^{\frac{41}{72}+\varepsilon}\right)$&Bordell\`es. \cite{bo15}\rule[-2mm]{0mm}{6mm}\\
$5\le n\le10$&$O\left(x^{1-\frac4{2n+1}+\varepsilon}\right)$&Bordell\`es. \cite{bo15}\rule[-2mm]{0mm}{6mm}\\
$11\le n$&$O\left(x^{1-\frac3{n+6}+\varepsilon}\right)$&Lao. \cite{La10}\rule[-2mm]{0mm}{6mm}\\
\end{tabular}
\end{center}
\end{theorem}
There are also many results about $I_{\mathbf Q}^m$, dating back to the 1800s. In 1849 Dirichlet showed that \[I_{\mathbf Q}^2(x)=x\log x+(2\gamma-1)x+O\left(x^{\frac12}\right),\] where $\gamma$ is the Euler constant, defined by the equation\[\gamma=\lim_{n\rightarrow \infty}\left(\sum_{k=1}^n\frac1k-\log n\right).\]
The $O$-term has been improved many times; the best estimate hitherto is ${x^{\frac{517}{1648}+\varepsilon}}$ \cite{bw17}.
As mentioned above, there exist many results about other divisor problems, but it seems that there are few results about the Piltz divisor problem over number fields. In 1993, Nowak showed the following theorem:
\begin{theorem}[Nowak \cite{No93}]
\label{no}
When $n=[K:\mathbf{Q}]\ge2$, we get
\[\Delta_K^m(x)=\left\{
\begin{array}{ll}
O_K\left(x^{1-\frac2{mn}+\frac8{mn(5mn+2)}} (\log x)^{m-1-\frac{10(m-2)}{5n+2}}\right)& \text{ for } 3\le mn\le6,\\
O_K\left(x^{1-\frac2{mn}+\frac3{2m^2n^2}} (\log x)^{m-1-\frac{2(m-2)}{mn}}\right)& \text{ for } mn\ge 7.
\end{array}
\right.\]
\end{theorem}
For lower bounds, Girstmair, K\"uhleitner, M\"uller and Nowak obtained the following $\Omega$-result:
\begin{theorem}[Girstmair, K\"uhleitner, M\"uller and Nowak \cite{gk05}]
For any fixed number field $K$ with $n=[K:\mathbf Q]\ge2$
\begin{equation}\label{omega}
\Delta_K^m(x)=\Omega\left(x^{\frac12-\frac{1}{2mn}}(\log x)^{\frac12-\frac{1}{2mn}}(\log\log x)^{\kappa}(\log\log\log x)^{-\lambda}\right),
\end{equation}
where $\kappa$ and $\lambda$ are constants depending on $K$. To be more precise, let $K^{gal}$ be the Galois closure of $K/\mathbf Q$, $G=Gal\left(K^{gal}/\mathbf{Q}\right)$ its Galois group and $H=Gal\left(K^{gal}/K\right)$ the subgroup of $G$ corresponding to $K$. Then \[\kappa=\frac{mn+1}{2mn}\left(\sum_{\nu=1}^{n}\delta_\nu\nu^{\frac{2mn}{mn+1}}-1\right)\text{ and } \lambda=\frac{mn+1}{4mn}R+\frac{mn-1}{2mn},\]
where \[\delta_\nu=\frac{|\{\tau\in G~|~|\{\sigma\in G~|~\tau\in\sigma H\sigma^{-1}\}|=\nu|H|\}|}{|G|} \]
and $R$ is the number of $1\le\nu\le n$ with $\delta_\nu>0$.
\end{theorem}
We know the following conditional result:
If we assume the Lindel\"of hypothesis for the Dedekind zeta function, then for all $\varepsilon>0$, all $K$ and all $m$
\begin{equation}
\label{lindelof}\Delta_K^m(x)=O_{\varepsilon}\left(x^{\frac12+\varepsilon}D_K^{\varepsilon}\right).\end{equation}
In this paper we estimate the error term $\Delta_K^m(x)$ by using exponential sums. Other approaches are used in \cite{No93} and \cite{gk05}, so we expect new developments for the Piltz divisor problem over number fields. As a result, we improve the upper bound of $\Delta_K^m(x)$ for many $K$ and many $m$.
In Section 2, we prove some auxiliary results needed to bound the error term $\Delta_K^m(x)$.
First we review the convexity bound for the Dedekind zeta function and a generalized version of Atkinson's Lemma \cite{at41}.
Next we prove Proposition \ref{ideal}, which reduces the ideal counting problem to an exponential sums problem. This proposition plays a crucial role in our computation of $\Delta_K^m(x)$.
In Section 3, we prove the following theorem about the error term $\Delta_K^m(x)$ by using estimates of exponential sums.
\begin{theorem}
For every $\varepsilon>0$ the following estimates hold. When $mn\ge4$, then \[\Delta_K^m(x)=O_{n,m,\varepsilon}\left(x^{\frac{2mn-3}{2mn+1}+\varepsilon}D_K^{\,\frac{2m}{2mn+1}+\varepsilon}\right).\]
\end{theorem}
This theorem improves the upper bound of $\Delta_K^m(x)$ for $mn\ge4$.
In Section 4, we give some applications. First we give a uniform estimate for the ideal counting function over number fields. Second, as a corollary of the first application, we show a good uniform upper bound for the distribution of relatively $r$-prime lattice points over number fields.
In Section 5, we consider a conjecture about estimates for Piltz divisor functions over number fields. It states that for all number fields $K$ and all $m$ the best upper bound of the error term is better than the one obtained on the assumption of the Lindel\"of Hypothesis (\ref{lindelof}). For $mn\le3$ this conjecture holds, but the other cases seem to be very difficult.
\section{Auxiliary Theorem}
In this section, we prove some lemmas needed for our argument. Let $s=\sigma+it$ and $n=[K:\mathbf{Q}]$. We use the convexity bound of the Dedekind zeta function to obtain an upper bound for the error term of the Piltz divisor function $\Delta_K^m(x)$.
It is a well-known fact that the Dedekind zeta function satisfies the following functional equation:
\begin{equation}
\label{fe}
\zeta_K(1-s)=D_K^{s-\frac 12}2^{n(1-s)}\pi^{-ns}\Gamma(s)^{n}\left(\cos\frac{\pi s}2\right)^{r_1+r_2}\left(\sin\frac{\pi s}2\right)^{r_2}\zeta_K(s),
\end{equation}
where $r_1$ is the number of real embeddings of $K$ and $r_2$ is the number of pairs of
complex embeddings.
The Phragm\'en--Lindel\"of principle and (\ref{fe}) give the well-known convexity bound of the Dedekind zeta function \cite{ra59}:
For any $\varepsilon>0$ and $n=[K:\mathbf Q]$
\begin{equation}
\label{convex}
\zeta_K(\sigma+it)=\left\{\begin{array}{ll}
O_{n,\varepsilon}\left(|t|^{\frac n2-n\sigma+\varepsilon}D_K^{\,\frac 12-\sigma+\varepsilon}\right)& \text{ if }\sigma\le0,\\
O_{n,\varepsilon}\left(|t|^{\frac{n(1-\sigma)}2+\varepsilon}D_K^{\,\frac{1-\sigma}2+\varepsilon}\right)&\text{ if }0\le\sigma\le1,\\
O_{n,\varepsilon}\left(|t|^{\varepsilon}D_K^{\varepsilon}\right)&\text{ if } 1\le\sigma
\end{array}\right.
\end{equation}
as $|t|^nD_K\rightarrow\infty$, where $K$ runs through number fields with $[K:\mathbf{Q}]=n$. In previous papers, we also used this convexity bound (\ref{convex}) to estimate the distribution of ideals. In the following sections, we prove estimates for $\Delta_K^m(x)$ in a way similar to our previous papers.
Lemma \ref{gcs} describes the growth of the product of the Gamma function and the trigonometric functions appearing in the functional equation (\ref{fe}) of the Dedekind zeta function.
\begin{lemma}
\label{gcs}
Let ${\tau}\in\{\cos,\sin\}$ and let $n$ be a positive integer. Then
\begin{align*}
&\frac{\Gamma(s)^{n}}{1-s}\left(\cos\frac{\pi s}2\right)^{r_1+r_2}\left(\sin\frac{\pi s}2\right)^{r_2}\\
=&\hspace{1mm}Cn^{-ns}\Gamma\left(ns-\frac{n+1}2\right){\tau}\left(\frac{n\pi s}2\right)+O_{n}\left(|t|^{-2+n\sigma-\frac n2}\right),
\end{align*}
where $C$ is a constant and $s=\sigma+it$.
\end{lemma}
\begin{proof}
This lemma follows from Stirling's formula and standard estimates for the trigonometric functions.
\end{proof}
Next we introduce the generalized Atkinson lemma. This lemma is quite useful for calculating integrals of the Dedekind zeta function.
\begin{lemma}[Atkinson \cite{at41}]
\label{atk}
Let $y>0$, $1<A\le B$ and ${\tau}\in\{\cos, \sin\}$, and define
\[I=\frac1{2\pi i}\int_{A-iB}^{A+iB}\Gamma(s){\tau}\left(\frac{\pi s}2\right)y^{-s}\ ds.\]
If $y\le B$, then \[I={\tau}(y)+O\left(y^{-\frac12}\min\left(\left(\log \frac By\right)^{-1},B^{\frac12}\right)+y^{-A}B^{A-\frac12}+y^{-\frac12}\right).\]
If $y>B$, then \[I=O\left(y^{-A}\left(B^{A-\frac12}\min\left(\left(\log \frac yB\right)^{-1},B^{\frac12}\right)+A^{A-\frac12}\right)\right).\]
\end{lemma}
Finally we introduce the following lemma to reduce the ideal counting problem to an exponential sum problem.
\begin{lemma}[Bordell\`es \cite{bo15}]
\label{bo}
Let $1\le L\le R$ be real numbers and let ${f}$ be an arithmetical function satisfying ${f}(m)=O(m^\varepsilon)$; let ${\mbox{\boldmath $e$}} (x)=\exp(2\pi ix)$ and $F={f}*\mu$, where $*$ denotes the Dirichlet convolution. Then for $a\in\mathbf R-\{1\}$, $b,x\in\mathbf R$ and every $\varepsilon>0$ the following estimate holds.
\begin{align*}
&\sum_{m\le R}\frac{{f}(m)}{m^{a}}{\tau}\left(2\pi xm^{b}\right)\\
&=O_{n, \varepsilon}\left(\begin{array}{l}
L^{1-a}+R^{\varepsilon}\underset{L<S\le R}{\max}S^{-a}\times\\
\displaystyle \times\underset{S<S_1\le 2S}{\max}\underset{\substack{M,N\le S_1\\MN\asymp S}}{\max}\underset{\substack{M\le M_1\le 2M\\ N\le N_1\le 2N}}{\max}\left|\sum_{M<m\le M_1}F(m)\sum_{N<n\le N_1}{\mbox{\boldmath $e$}}\left(x(mn)^{b}\right)\right|
\end{array}\right).
\end{align*}
\end{lemma}
The next proposition plays a crucial role in our computation of $I_K^m(x)$. We consider the distribution of ideals of $\mathcal{O}_K$, where $K$ runs through extensions with $[K:\mathbf Q]=n$ satisfying some conditions. The details of the conditions will be specified later; they express the relation between the principal term and the error term.
\begin{proposition}
\label{ideal}
Let $F_K=d_K^m*\mu$, where $d_K^m(l)$ denotes the number of $m$-tuples of ideals $(\mathfrak{a}_1,\ldots,\mathfrak{a}_m)$ with $\mathfrak{Na}_1\cdots\mathfrak{Na}_m=l$.
For every $\varepsilon>0$ the following estimate holds.
\begin{align*}
&\Delta_K^m(x)\\
=&\hspace{1mm}O_{n,m, \varepsilon}\left(\begin{array}{l}L^{1-\alpha}+x^{\frac{mn-1}{2mn}}D_K^{\,\frac1{2n}}R^{\varepsilon}\underset{L\le S\le R}{\max}S^{-\frac{mn+1}{2mn}}\times\\
\displaystyle \times\underset{S<S_1\le 2S}{\max}\underset{\substack{M,N\le S_1\\MN\asymp S}}{\max}\underset{\substack{M\le M_1\le 2M\\ N\le N_1\le 2N}}{\max}\left|\sum_{M<l\le M_1}F_K(l)\sum_{N<k\le N_1}{\mbox{\boldmath $e$}}\left(mn\left(\frac{xlk}{D_K^m}\right)^{\frac1{mn}}\right)\right|\\
+x^{\frac{mn-2}{2mn}+\varepsilon}D_K^{\,\frac1n+\varepsilon}R^{\frac{mn-2}{2mn}+\varepsilon}+x^{\frac{mn-1}{mn}+\varepsilon}D_K^{\,\frac1n+\varepsilon}R^{-\frac1{mn}+\varepsilon}
\end{array}\right).
\end{align*}
where $K$ runs through number fields with $[K:\mathbf Q]=n$ satisfying some conditions.
\end{proposition}
\begin{proof}
Recall that $d_K^m(l)$ denotes the number of $m$-tuples of ideals $(\mathfrak{a}_1, \mathfrak{a}_2,\ldots,\mathfrak{a}_m)$ such that the product of their ideal norms satisfies $\mathfrak{Na}_1\cdots\mathfrak{Na}_m=l$.
Then one can easily check that \begin{equation}\label{zeta}
\zeta_K(s)^m=\sum_{l=1}^{\infty}\frac{d_K^m(l)}{l^s}\ \text{ for } \Re s>1
\end{equation}
and
\[I_K^m(x)=\sum_{l\le x}d_K^m(l).\]
Thus Perron's formula plays a crucial role in this proof.
We consider the integral \[\frac1{2\pi i}\int_{C}\zeta_K(s)^m\frac{x^s}s\ ds,\]
where $C$ is the contour $C_1\cup C_2\cup C_3\cup C_4$ shown in the following Figure \ref{path2}.
\setlength\unitlength{1truecm}
\begin{figure}[h]
\begin{center}
\begin{picture}(4.5,4.5)(0,0)
\small
\put(-1,2){\vector(1,0){4}}
\put(0,0){\vector(0,1){4}}
\put(-0.2,0.5){\vector(1,0){1.1}}
\put(0.9,0.5){\line(1,0){1.1}}
\put(2,0.5){\vector(0,1){2}}
\put(2,2.4){\line(0,1){1.1}}
\put(2,3.5){\vector(-1,0){1.1}}
\put(0.9,3.5){\line(-1,0){1.1}}
\put(0.2,2.1){O}
\put(3.1,2){$\Re(s)$}
\put(-0.3,4.1){$\Im(s)$}
\put(0,2){\circle*{0.1}}
\put(2,3.5){\line(-1,0){0.5}}
\put(-0.2,3.5){\vector(0,-1){1.1}}
\put(-0.2,2.5){\line(0,-1){2}}
\put(-0.1,3.5){\line(1,0){0.2}}
\put(-0.5,3.5){$iT$}
\put(-0.1,0.5){\line(1,0){0.2}}
\put(-0.8,0.3){$-iT$}
\put(2,2.9){\line(0,1){0.2}}
\put(-0.7,2.1){$-\varepsilon$}
\put(2.2,2.2){$1+\varepsilon$}
\put(1.3,3.6){$C_2$}
\put(2.1,1.7){$C_1$}
\put(1.3,0.1){$C_4$}
\put(0.3,1.7){$C_3$}
\end{picture}
\end{center}
\caption{The contour $C=C_1\cup C_2\cup C_3\cup C_4$.\label{path2}}\end{figure}
In a way similar to the well-known proof of Perron's formula, we estimate
\begin{equation}\label{perron}\frac1{2\pi i}\int_{C_1}\zeta_K(s)^m\frac{x^s}s\ ds=I_K^m(x)+O_{\varepsilon}\left(\frac{x^{1+\varepsilon}}{T}\right).\end{equation}
We can take $T$ large, so that the $O$-term on the right-hand side is sufficiently small. To estimate the left-hand side by using the estimate (\ref{convex}), we divide it into the integrals over $C_2, C_3$ and $C_4$.
First we consider the integrals over $C_2$ and $C_4$ as
\begin{align*}
&\left|\frac1{2\pi i}\int_{C_2\cup C_4}\zeta_K(s)^m\frac{x^s}s\ ds\right|\\
\le&\frac1{2\pi}\int^{1+\varepsilon}_{-\varepsilon}\left|\zeta_K\left(\sigma+iT\right)\right|^m\frac{x^{\sigma}}{T}\ d\sigma+\frac1{2\pi}\int^{1+\varepsilon}_{-\varepsilon}\left|\zeta_K\left(\sigma-iT\right)\right|^m\frac{x^{\sigma}}{T}\ d\sigma.\\
\end{align*}
It follows from the convexity bound of the Dedekind zeta function (\ref{convex}) that their sum is estimated as
\begin{align}
\label{24}
\left|\frac1{2\pi i}\int_{C_2\cup C_4}\zeta_K(s)^m\frac{x^s}s\ ds\right|&=O_{n,m,\varepsilon}\left(\int^{1+\varepsilon}_{-\varepsilon}(T^{mn}D_K^{\,m})^{\frac{1-\sigma}2+\varepsilon}\frac{x^{\sigma}}{T}\ d\sigma\right)\nonumber\\[-3mm]
&{}\\[-3mm]
&=O_{n,m,\varepsilon}\left(\frac{x^{1+\varepsilon}D_K^{\,\varepsilon}}{T^{1-\varepsilon}}+T^{\frac {mn}2-1+\varepsilon}D_K^{\,\frac m2+\varepsilon}x^{-\varepsilon}\right).\nonumber
\end{align}
By the Cauchy residue theorem, (\ref{perron}) and (\ref{24}) we obtain
\begin{equation}
\label{c3}
\Delta_K^m(x)=\int_{C_3}\zeta_K(s)^m\frac{x^s}s\ ds+O_{n,m,\varepsilon}\left(\frac{x^{1+\varepsilon}D_K^{\,\varepsilon}}{T^{1-\varepsilon}}+T^{\frac {mn}2-1+\varepsilon}D_K^{\,\frac m2+\varepsilon}x^{-\varepsilon}\right).
\end{equation}
Thus it suffices to consider the integral over $C_3$ as
\begin{align*}
\frac1{2\pi i}\int_{C_3}\zeta_K(s)^m\frac{x^s}s\ ds&=\frac 1{2\pi i}\int_{-\varepsilon-iT}^{-\varepsilon+iT}\zeta_K(s)^m\frac{x^{s}}{s}\ ds.\\
\intertext{Changing the variable $s$ to $1-s$, we have}
\frac1{2\pi i}\int_{C_3}\zeta_K(s)^m\frac{x^s}s\ ds&=\frac 1{2\pi i}\int_{1+\varepsilon-iT}^{1+\varepsilon+iT}\zeta_K(1-s)^m\frac{x^{1-s}}{1-s}\ ds.\\
\end{align*}
From this functional equation (\ref{fe}), it holds that
\begin{align*}
&\frac1{2\pi i}\int_{C_3}\zeta_K(s)^m\frac{x^s}s\ ds\\
=&\hspace{1mm}\frac 1{2\pi i}\int_{1+\varepsilon-iT}^{1+\varepsilon+iT}\left(D_K^{\,s-\frac12}2^{n(1-s)}\pi^{-ns}\Gamma(s)^{n}\left(\cos\frac{\pi s}2\right)^{r_1+r_2}\left(\sin\frac{\pi s}2\right)^{r_2}\zeta_K(s)\right)^m\frac{x^{1-s}}{1-s}\ ds.\\
\intertext{By lemma \ref{gcs} the integral over $C_3$ can be expressed as}
&\frac1{2\pi i}\int_{C_3}\zeta_K(s)^m\frac{x^s}s\ ds\\
=&\hspace{1mm}\frac {Cx}{2\pi i}\int_{1+\varepsilon-iT}^{1+\varepsilon+iT}D_K^{\,-\frac m2}\left(\frac{(2n)^{mn}\pi^{mn}x}{D_K^m}\right)^{-s}\Gamma\left(mns-\frac{mn+1}2\right){\tau}\left(\frac{mn\pi s}2\right)\zeta_K(s)\ ds\\
&+O_{n,m,\varepsilon}\left(D_K^{\,\frac m2+\varepsilon}T^{\frac {mn}2-1+\varepsilon}x^{-\varepsilon}\right).\\
\intertext{Changing the variable $mns-\frac{mn+1}2$ to $s$, we have}
&\frac1{2\pi i}\int_{C_3}\zeta_K(s)^m\frac{x^s}s\ ds\\
=&\hspace{1mm}\frac {Cx^{\frac{mn-1}{2mn}}D_K^{\,\frac1{2n}}}{2\pi i}\int_{\frac{mn-1}2+mn\varepsilon-mniT}^{\frac{mn-1}2+mn\varepsilon+mniT}\left(2mn\pi \left(\frac x{D_K^m}\right)^{\frac1{mn}}\right)^{-s}\Gamma(s){\tau}\left(\frac{\pi s}2+\frac{(mn+1)\pi}4\right)\\
&\times\zeta_K\left(\frac s{mn}+\frac{mn+1}{2mn}\right)\ ds+O_{n,m,\varepsilon}\left(D_K^{\,\frac m2+\varepsilon}T^{\frac {mn}2-1+\varepsilon}x^{-\varepsilon}\right).\\
\intertext{From (\ref{zeta}) the function $\zeta_K(s)^m$ can be expressed as a Dirichlet series. It is absolutely and uniformly convergent on compact subsets on $\Re(s)>1$. Therefore we can interchange the order of summation and integral. Thus we obtain}
&\int\left(2mn\pi \left(\frac x{D_K^m}\right)^{\frac1{mn}}\right)^{-s}\Gamma(s){\tau}\left(\frac{\pi s}2+\frac{(mn+1)\pi}4\right)\zeta_K\left(\frac s{mn}+\frac{mn+1}{2mn}\right)\ ds\\
=&\hspace{1mm}\sum_{l=1}^{\infty}\frac{d_K^m(l)}{l^{\frac{mn+1}{2mn}}}\int\left(2mn\pi \left(\frac {lx}{D_K^m}\right)^{\frac1{mn}}\right)^{-s}\Gamma(s){\tau}\left(\frac{\pi s}2+\frac{(mn+1)\pi}4\right)\ ds,\\
\intertext{where the integration is on the vertical line from $\frac{mn-1}2+mn\varepsilon-mniT$ to $\frac{mn-1}2+mn\varepsilon+mniT$. Properties of trigonometric function lead to \[{\tau}\left(\frac{\pi s}2+\frac{(mn+1)\pi}4\right)=\pm\left\{\begin{array}{ll}
{\tau}\left(\frac{\pi s}2\right)&\text{ if } mn \text{ is odd},\\
\frac1{\sqrt2}\left({\tau}\left(\frac{\pi s}2\right)\pm {\tau_1}\left(\frac{\pi s}2\right)\right)&\text{ if } mn \text{ is even,}
\end{array}
\right.\]
where $\{{\tau},{\tau_1}\}=\{\sin,\cos\}$. Hence it holds that}
&\frac1{2\pi i}\int_{C_3}\zeta_K(s)^m\frac{x^s}s\ ds\\
=&\hspace{1mm}\frac {Cx^{\frac{mn-1}{2mn}}D_K^{\,\frac1{2n}}}{2\pi i}\sum_{l=1}^{\infty}\frac{d_K^m(l)}{l^{\frac{mn+1}{2mn}}}\int_{\frac{mn-1}2+mn\varepsilon-mniT}^{\frac{mn-1}2+mn\varepsilon+mniT}\left(2mn\pi \left(\frac {lx}{D_K^m}\right)^{\frac1{mn}}\right)^{-s}\Gamma(s){\tau}\left(\frac{\pi s}2\right)\ ds\\
&+O_{n,m,\varepsilon}\left(D_K^{\,\frac m2+\varepsilon}T^{\frac {mn}2-1+\varepsilon}x^{-\varepsilon}\right).
\end{align*}
Now we apply lemma \ref{atk} to this integral with $y=2mn\pi\left(\frac{lx}{D_K^m}\right)^{\frac1{mn}},\ A=\frac{mn-1}2+mn\varepsilon,\ B=mnT \text{ and }T=2\pi\left(\frac{xR}{D_K^m}\right)^{\frac1{mn}}$, this becomes
\begin{align*}
&\frac1{2\pi i}\int_{C_3}\zeta_K(s)^m\frac{x^s}s\ ds\\
=&\hspace{1mm}\frac {Cx^{\frac{mn-1}{2mn}}D_K^{\,\frac1{2n}}}{2\pi i}\sum_{l\le R}\frac{d^m_K(l)}{l^{\frac{mn+1}{2mn}}}{\tau}\left(2mn\pi \left(\frac {lx}{D_K^m}\right)^{\frac1{mn}}\right)\\
&+O_{n,m, \varepsilon}\left(x^{\frac{mn-2}{2mn}}D_K^{\,\frac1{n}}\sum_{l\le R}\frac{d_K^m(l)}{l^{\frac{mn+2}{2mn}}}\min\left\{\left(\log \frac Rl\right)^{-1},\ \left(\frac{Rx}{D_K^m}\right)^{\frac1{2mn}}\right\}\right)\\
&+O_{n,m, \varepsilon}\left(x^{\frac{mn-2}{2mn}}D_K^{\,\frac1{n}}\sum_{l\le R}\frac{d_K^m(l)}{l^{\frac{mn+2}{2mn}}}\left(\left(\frac Rl\right)^{\frac{mn-2}{2mn}}+1\right)\right)\\
&+O_{n,m, \varepsilon}\left(x^{\frac{mn-2}{2mn}}D_K^{\,\frac1{n}}R^{\frac{mn-2}{2mn}+\varepsilon}\sum_{l> R}\frac{d_K^m(l)}{l^{1+\varepsilon}}\min\left\{\left(\log \frac lR\right)^{-1},\ \left(\frac{Rx}{D_K^m}\right)^{\frac1{2mn}}\right\}\right)\\
&+O_{n,m, \varepsilon}\left(x^{\frac{mn-2}{2mn}+\varepsilon}D_K^{\,\frac1n+\varepsilon}R^{\frac{mn-2}{2mn}+\varepsilon}\right).
\end{align*}
We evaluate the first three $O$-terms as follows.
\begin{align*}
\intertext{First we consider the first $O$-term. One can estimate $\left(\log \frac Rl\right)^{-1}=O\left(\frac R{R-l}\right)$, so we obtain}
&O_{n,m, \varepsilon}\left(x^{\frac{mn-2}{2mn}}D_K^{\,\frac1{n}}\sum_{l\le R}\frac{d_K^m(l)}{l^{\frac{mn+2}{2mn}}}\min\left\{\left(\log \frac Rl\right)^{-1},\ \left(\frac{Rx}{D_K^m}\right)^{\frac1{2mn}}\right\}\right)\\
=&\hspace{1mm}O_{n,m, \varepsilon}\left(x^{\frac{mn-2}{2mn}}D_K^{\,\frac1{n}}\sum_{l\le [R]-1}\frac{d_K^m(l)}{l^{\frac{mn+2}{2mn}}}\left(\log \frac Rl\right)^{-1}+x^{\frac{mn-2}{2mn}}D_K^{\,\frac1{n}}\sum_{[R]\le l\le R}\frac{d_K^m(l)}{l^{\frac{mn+2}{2mn}}}\left(\frac{Rx}{D_K^m}\right)^{\frac1{2mn}}\right)\\
=&\hspace{1mm}O_{n,m, \varepsilon}\left(x^{\frac{mn-2}{2mn}}D_K^{\,\frac1{n}}\sum_{l\le [R]-1}\frac{d_K^m(l)}{l^{\frac{mn+2}{2mn}}}\frac R{R-l}+x^{\frac{mn-1}{2mn}}D_K^{\,\frac{1}{2n}}R^{\frac{1}{2mn}}\sum_{[R]\le l\le R}\frac{d_K^m(l)}{l^{\frac{mn+2}{2mn}}}\right)\\
=&\hspace{1mm}O_{n,m, \varepsilon}\left(x^{\frac{mn-2}{2mn}}D_K^{\,\frac1{n}}R^{\frac{mn-2}{2mn}+\varepsilon}+x^{\frac{mn-1}{2mn}}D_K^{\,\frac{1}{2n}}R^{-\frac{mn+1}{2mn}}\right).
\end{align*}
\begin{align*}
\intertext{Next we calculate the second $O$-term.}
O_{n,m, \varepsilon}\left(x^{\frac{mn-2}{2mn}}D_K^{\,\frac1{n}}\sum_{l\le R}\frac{d_K^m(l)}{l^{\frac{mn+2}{2mn}}}\left(\left(\frac Rl\right)^{\frac{mn-2}{2mn}}+1\right)\right)=&\hspace{1mm}O_{n,m, \varepsilon}\left(x^{\frac{mn-2}{2mn}}D_K^{\,\frac1{n}}R^{\frac{mn-2}{2mn}}\sum_{l\le R}\frac{d_K^m(l)}{l}\right).\\
\intertext{Since it is well-known that $d_K^m(l)=O(l^\varepsilon)$, we get}
O_{n,m, \varepsilon}\left(x^{\frac{mn-2}{2mn}}D_K^{\,\frac1{n}}\sum_{l\le R}\frac{d_K^m(l)}{l^{\frac{mn+2}{2mn}}}\left(\left(\frac Rl\right)^{\frac{mn-2}{2mn}}+1\right)\right)=&\hspace{1mm}O_{n,m, \varepsilon}\left(x^{\frac{mn-2}{2mn}}D_K^{\,\frac1{n}}R^{\frac{mn-2}{2mn}}\int_1^R\frac{t^\varepsilon}{t}\ dt\right)\\
=&\hspace{1mm}O_{n,m, \varepsilon}\left(x^{\frac{mn-2}{2mn}}D_K^{\,\frac1{n}}R^{\frac{mn-2}{2mn}+\varepsilon}\right).
\end{align*}
\begin{align*}
\intertext{Finally we estimate the third $O$-term in a way similar to the first. One can estimate $\left(\log \frac lR\right)^{-1}=O\left(\frac R{l-R}\right)$, so we obtain}
&O_{n,m, \varepsilon}\left(x^{\frac{mn-2}{2mn}}D_K^{\,\frac1{n}}R^{\frac{mn-2}{2mn}+\varepsilon}\sum_{l> R}\frac{d_K^m(l)}{l^{1+\varepsilon}}\min\left\{\left(\log \frac lR\right)^{-1},\ \left(\frac{Rx}{D_K^m}\right)^{\frac1{2mn}}\right\}\right)\\
=&\hspace{1mm}O_{n,m, \varepsilon}\left(x^{\frac{mn-2}{2mn}}D_K^{\,\frac1{n}}R^{\frac{mn-2}{2mn}+\varepsilon}\left(\sum_{R<l\le[R]+1}\frac{d_K^m(l)}{l^{1+\varepsilon}}\left(\frac{Rx}{D_K^m}\right)^{\frac1{2mn}}+\sum_{[R]+2\le l}\frac{d_K^m(l)}{l^{1+\varepsilon}}\left(\log \frac lR\right)^{-1}\right)\right)\\
=&\hspace{1mm}O_{n,m, \varepsilon}\left(x^{\frac{mn-1}{2mn}}D_K^{\,\frac1{2n}}R^{\frac{mn-1}{2mn}+\varepsilon}\sum_{R<l\le[R]+1}\frac{d_K^m(l)}{l^{1+\varepsilon}}+x^{\frac{mn-2}{2mn}}D_K^{\,\frac1{n}}R^{\frac{mn-2}{2mn}+\varepsilon}\sum_{[R]+2\le l}\frac{d_K^m(l)}{l^{1+\varepsilon}}\frac R{l-R}\right)\\
=&\hspace{1mm}O_{n,m, \varepsilon}\left(x^{\frac{mn-1}{2mn}}D_K^{\,\frac1{2n}}R^{-\frac{mn+1}{2mn}+\varepsilon}+x^{\frac{mn-2}{2mn}}D_K^{\,\frac1{n}}R^{\frac{mn-2}{2mn}+\varepsilon}\right).
\end{align*}
From above results, we obtain
\begin{align}
\label{c32}
\frac1{2\pi i}\int_{C_3}\zeta_K(s)^m\frac{x^s}s\ ds=&\hspace{1mm}\frac {Cx^{\frac{mn-1}{2mn}}D_K^{\,\frac1{2n}}}{2\pi i}\sum_{l\le R}\frac{d_K^m(l)}{l^{\frac{mn+1}{2mn}}}{\tau}\left(2mn\pi \left(\frac {lx}{D_K^m}\right)^{\frac1{mn}}\right)\nonumber\\[-3mm]
&{}\\[-3mm]
&+O_{n,m, \varepsilon}\left(x^{\frac{mn-1}{2mn}}D_K^{\,\frac1{2n}}R^{-\frac{mn+1}{2mn}+\varepsilon}+x^{\frac{mn-2}{2mn}}D_K^{\,\frac1{n}}R^{\frac{mn-2}{2mn}+\varepsilon}\right).\nonumber
\end{align}
\begin{align*}
\intertext{From estimate (\ref{c3}) and (\ref{c32}), it is obtained that}
\Delta_K^m(x)=&\hspace{1mm}\frac {Cx^{\frac{mn-1}{2mn}}D_K^{\,\frac1{2n}}}{2\pi i}\sum_{l\le R}\frac{d_K^m(l)}{l^{\frac{mn+1}{2mn}}}{\tau}\left(2mn\pi \left(\frac {lx}{D_K^m}\right)^{\frac1{mn}}\right)\\
&+O_{n,m,\varepsilon}\left(x^{\frac{mn-2}{2mn}+\varepsilon}D_K^{\,\frac1n+\varepsilon}R^{\frac{mn-2}{2mn}+\varepsilon}+x^{\frac{mn-1}{mn}+\varepsilon}D_K^{\,\frac1n+\varepsilon}R^{-\frac1{mn}+\varepsilon}\right).\\
\end{align*}
Next we consider the above sum. Recall that $F_K=d^m_K*\mu$, where $*$ denotes the Dirichlet convolution. From Lemma \ref{bo} this becomes
\begin{align*}
&\Delta_K^m(x)\\
=&\hspace{1mm}O_{n,m, \varepsilon}\left(\begin{array}{l}L^{1-\alpha}+x^{\frac{mn-1}{2mn}}D_K^{\,\frac1{2n}}R^{\varepsilon}\underset{L\le S\le R}{\max}S^{-\frac{mn+1}{2mn}}\times\\
\displaystyle \times\underset{S<S_1\le 2S}{\max}\underset{\substack{M,N\le S_1\\MN\asymp S}}{\max}\underset{\substack{M\le M_1\le 2M\\ N\le N_1\le 2N}}{\max}\left|\sum_{M<l\le M_1}F_K(l)\sum_{N<k\le N_1}{\mbox{\boldmath $e$}}\left(mn\left(\frac{xlk}{D_K^m}\right)^{\frac1{mn}}\right)\right|\\
+x^{\frac{mn-2}{2mn}+\varepsilon}D_K^{\,\frac1n+\varepsilon}R^{\frac{mn-2}{2mn}+\varepsilon}+x^{\frac{mn-1}{mn}+\varepsilon}D_K^{\,\frac1n+\varepsilon}R^{-\frac1{mn}+\varepsilon}
\end{array}\right).
\end{align*}
This proves this proposition.
\end{proof}
Let $\mathcal S_K(x,S)$ be the inner expression in the $O$-term, that is, \[\mathcal S_K(x,S)=S^{-\frac{mn+1}{2mn}}\underset{S<S_1\le 2S}{\max}\underset{\substack{M,N\le S_1\\MN\asymp S}}{\max}\underset{\substack{M\le M_1\le 2M\\ N\le N_1\le 2N}}{\max}\left|\sum_{M<l\le M_1}F_K(l)\sum_{N<k\le N_1}{\mbox{\boldmath $e$}}\left(mn\left(\frac{xlk}{D_K^m}\right)^{\frac1{mn}}\right)\right|,\] so that the middle term of the $O$-term is $x^{\frac{mn-1}{2mn}}D_K^{\,\frac1{2n}}R^{\varepsilon}\max_{L\le S\le R}\mathcal S_K(x,S)$.
This proposition reduces the initial problem to an exponential sums problem. There are many results on estimating exponential sums. In the next section, we bound the Piltz divisor function by using results on exponential sums established by various authors.
\section{Estimate of counting function}
In the last section, we showed that the error term of the Piltz divisor function $\Delta_K^m(x)$ can be expressed in terms of an exponential sum. Let $X > 1$ be a real number, let $1\le M < M_1 \le 2M$ and $1\le N<N_1\le 2N$ be integers, let $(a_m), (b_n) \subset \mathbf C$ be sequences of complex numbers, and let $\alpha,\beta\in \mathbf R$; we define
\begin{equation}\label{expo}
\mathcal S=\sum_{M<m\le M_1}a_m\sum_{N<n\le N_1}b_n\mbox{\boldmath $e$}\left(X\left(\frac mM\right)^{\alpha}\left(\frac nN\right)^{\beta}\right).
\end{equation}
In 1998 Wu proved the following lemma.
\begin{lemma}[Wu \cite{wu98}]
\label{54}
Let $\alpha,\beta\in \mathbf R$ be such that $\alpha\beta(\alpha-1)(\beta-1)\not=0$, let $|a_m|\le 1$ and $|b_n| \le 1$, and set $\mathcal L=\log(XMN + 2)$. Then
\[\mathcal L^{-2}\mathcal S=\hspace{1mm}O\left(\begin{array}{l}
(XM^3N^4)^{\frac15}+(X^4M^{10}N^{11})^{\frac1{16}}+(XM^7N^{10})^{\frac1{11}}\\
+MN^{\frac12}+(X^{-1}M^{14}N^{23})^{\frac1{22}} + X^{-\frac12}MN
\end{array}\right).
\]
\end{lemma}
Bordell\`es proved the following lemma by using an estimate for triple exponential sums due to Robert and Sargos.
\begin{lemma}[Bordell\`es \cite{bo15}]
\label{55}
Let $\alpha,\beta\in \mathbf R$ be such that $\alpha\beta(\alpha-1)(\beta-1)\not=0$, and let $|a_m|\le 1$ and $|b_n| \le 1$. If $X=O(M)$ then
\begin{align*}
&(MN)^{-\varepsilon}\mathcal S\\
=&\hspace{1mm}O\left((XM^5N^7)^{\frac18}+N(X^{-2}M^{11})^{\frac1{12}}+(X^{-3}M^{21}N^{23})^{\frac1{24}}+M^{\frac34}N + X^{-\frac14}MN\right).
\end{align*}
\end{lemma}
The following result of Srinivasan is important for our estimation of $\Delta_K^m(x)$.
\begin{lemma}[Srinivasan \cite{sr62}]
\label{srr}
Let $N$ and $P$ be positive integers and $u_n\ge0$, $v_p>0$, $A_n$ and $B_p$ denote constants for $1\le n\le N$ and $1\le p\le P$. Then there exists $q$ with properties \[Q_1\le q\le Q_2\]
and \[\sum_{n=1}^NA_nq^{u_n}+\sum_{p=1}^PB_pq^{-v_p}=O\left(\sum_{n=1}^N\sum_{p=1}^P\sqrt[u_n+v_p]{A_n^{v_p}B_p^{u_n}}+\sum_{n=1}^NA_nQ_1^{u_n}+\sum_{p=1}^PB_pQ_2^{-v_p}\right).\]
The constant involved in $O$-symbol is less than $N+P$.
\end{lemma}
Srinivasan remarks that the inequality in Lemma \ref{srr} corresponds to the `best possible' choice of $q$ in the range $Q_1\le q\le Q_2$ \cite{sr62}. We apply Lemma \ref{srr} to improve the error term $\Delta_K^m(x)$.
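To see where the first group of terms comes from, consider the simplest case $N=P=1$, writing $A=A_1$, $B=B_1$, $u=u_1$, $v=v_1$ and assuming $A,B>0$ (this is only an illustration, not part of Srinivasan's statement): minimising $Aq^{u}+Bq^{-v}$ over $q>0$ gives the critical point $q_0=\left(\tfrac{vB}{uA}\right)^{\frac1{u+v}}$, at which
\[Aq_0^{\,u}\asymp Bq_0^{\,-v}\asymp \sqrt[u+v]{A^{v}B^{u}},\]
and the terms $A_nQ_1^{u_n}$ and $B_pQ_2^{-v_p}$ account for the possibility that this optimal $q$ falls outside the range $[Q_1,Q_2]$.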
\begin{theorem}
\label{cub}
For every $\varepsilon>0$ the following estimates hold.
When $mn\ge4$, then \[\Delta_K^m(x)=O_{n,m,\varepsilon}\left(x^{\frac{2mn-3}{2mn+1}+\varepsilon}D_K^{\,\frac{2m}{2mn+1}+\varepsilon}\right)\]
as $x$ tends to infinity.
\end{theorem}
\begin{proof}
We note that
\begin{align*}
&\left|\sum_{M<l\le M_1}F_K(l)\sum_{N<k\le N_1}{\mbox{\boldmath $e$}}\left(mn\left(\frac{xlk}{D_K^m}\right)^{\frac1{mn}}\right)\right|\\
=&\hspace{1mm}\left|\sum_{M<l\le M_1}F_K(l)\sum_{N<k\le N_1}{\mbox{\boldmath $e$}}\left(mn\left(\frac{xMN}{D_K^m}\right)^{\frac1{mn}}\left(\frac lM\right)^{\frac1{mn}}\left(\frac kN\right)^{\frac1{mn}}\right)\right|.
\end{align*}
We use the above lemmas with $X=mn\left(\frac{xMN}{D_K^m}\right)^{\frac1{mn}}>0$.
Let $0\le\alpha\le\frac13$; we consider four cases:
\begin{center}
\begin{tabular}{cl}
\\
\hline
Case 1. &$S^{\alpha}\ll N\ll S^{\frac12}$\\
Case 2. &$S^{\frac12}\ll N\ll S^{1-\alpha}$\\
Case 3. &$S^{1-\alpha}\ll N$\\
Case 4. &$N\ll S^{\alpha}$
\end{tabular}
\end{center}
When $S^{\alpha}\ll N\ll S^{\frac12}$, we apply lemma \ref{54} and this gives
\begin{align}\label{case1}
&S^{-\varepsilon}x^{\frac{mn-1}{2mn}}D_K^{\,\frac1{2n}}\mathcal S_K(x,S)\nonumber\\[-3mm]
&{}\\[-3mm]
=&\hspace{1mm}O_{n,m, \varepsilon}\left(\begin{array}{l}
x^{\frac{5mn-3}{10mn}}D_K^{\,\frac3{10n}}R^{\frac{2mn-3}{10mn}}+x^{\frac{2mn-1}{4mn}}D_K^{\,\frac1{4n}}R^{\frac{5mn-8}{32mn}}\\
+x^{\frac{11mn-9}{22mn}}D_K^{\,\frac9{22n}}R^{\frac{6mn-9}{22mn}}+x^{\frac{mn-1}{2mn}}D_K^{\,\frac1{2n}}R^{\frac{mn-1}{2mn}-\frac12\alpha}\\
+x^{\frac{11mn-12}{22mn}}D_K^{\,\frac6{11n}}R^{\frac{15mn-24}{44mn}}+x^{\frac{mn-2}{2mn}}D_K^{\,\frac1{n}}R^{\frac{mn-2}{2mn}}
\end{array}\right).\nonumber
\end{align}
When $S^{\frac12}\ll N\ll S^{1-\alpha}$ we use Lemma \ref{54} again, reversing the roles of $M$ and $N$. We obtain the same estimate as in the case $S^{\alpha}\ll N\ll S^{\frac12}$.
\noindent
In case 3, we use Lemma \ref{55}:
\begin{align}
\label{case3}
&S^{-\varepsilon}x^{\frac{mn-1}{2mn}}D_K^{\,\frac1{2n}}\mathcal S_K(x,S)\nonumber\\[-3mm]
&{}\\[-3mm]
=&\hspace{1mm}O_{n,m, \varepsilon}\left(\begin{array}{l}
x^{\frac{4mn-3}{8mn}}D_K^{\,\frac3{8n}}R^{\frac{mn-3}{8mn}+\frac14\alpha}+x^{\frac{3mn-4}{6mn}}D_K^{\,\frac2{3n}}R^{\frac{5mn-8}{12mn}+\frac1{12}\alpha}\\
+x^{\frac{4mn-5}{8mn}}D_K^{\,\frac5{8n}}R^{\frac{3mn-5}{8mn}-\frac1{12}\alpha}\\
+x^{\frac{mn-1}{2mn}}D_K^{\,\frac1{2n}}R^{\frac{mn-2}{4mn}+\frac14\alpha}+x^{\frac{2mn-3}{4mn}}D_K^{\,\frac3{4n}}R^{\frac{2mn-3}{4mn}}
\end{array}\right).\nonumber
\end{align}
If $x^{\frac1{mn(1-\alpha)-1}}D_K^{-\frac m{mn(1-\alpha)-1}}\ll S$, the condition $X=O(N)$ of Lemma \ref{55} is satisfied. Therefore it suffices to choose $L=x^{\frac1{mn(1-\alpha)-1}}D_K^{-\frac m{mn(1-\alpha)-1}}$.
In case 4, we use Lemma \ref{55} again, reversing the roles of $M$ and $N$. We obtain the same estimate as in case 3.
Combining (\ref{case1}) and (\ref{case3}) with proposition \ref{ideal}, we obtain
\begin{equation}
\label{ue}
\Delta_K^m(x)=O_{n,m, \varepsilon}\left(\begin{array}{l}
x^{\frac{5mn-3}{10mn}}D_K^{\,\frac3{10n}}R^{\frac{2mn-3}{10mn}+\varepsilon}+x^{\frac{2mn-1}{4mn}}D_K^{\,\frac1{4n}}R^{\frac{5mn-8}{32mn}+\varepsilon}\\
+x^{\frac{11mn-9}{22mn}}D_K^{\,\frac9{22n}}R^{\frac{6mn-9}{22mn}+\varepsilon}+x^{\frac{mn-1}{2mn}}D_K^{\,\frac1{2n}}R^{\frac{mn-1}{2mn}-\frac12\alpha+\varepsilon}\\
+x^{\frac{11mn-12}{22mn}}D_K^{\,\frac6{11n}}R^{\frac{15mn-24}{44mn}+\varepsilon}+x^{\frac{mn-2}{2mn}}D_K^{\,\frac1{n}}R^{\frac{mn-2}{2mn}+\varepsilon}\\
+x^{\frac{4mn-3}{8mn}}D_K^{\,\frac3{8n}}R^{\frac{mn-3}{8mn}+\frac14\alpha+\varepsilon}+x^{\frac{3mn-4}{6mn}}D_K^{\,\frac2{3n}}R^{\frac{5mn-8}{12mn}+\frac1{12}\alpha+\varepsilon}\\
+x^{\frac{4mn-5}{8mn}}D_K^{\,\frac5{8n}}R^{\frac{3mn-5}{8mn}+\frac1{12}\alpha+\varepsilon}+x^{\frac{2mn-3}{4mn}}D_K^{\,\frac3{4n}}R^{\frac{2mn-3}{4mn}+\varepsilon}\\
+x^{\frac{mn-1}{mn}+\varepsilon}D_K^{\,\frac1n+\varepsilon}R^{-\frac1{mn}+\varepsilon}+x^{\frac{1-\alpha}{mn(1-\alpha)-1}}D_K^{-\frac {m(1-\alpha)}{mn(1-\alpha)-1}}
\end{array}\right).
\end{equation}
By Lemma \ref{srr} with $x^{\frac1{mn(1-\alpha)-1}}D_K^{-\frac m{mn(1-\alpha)-1}}\le R\le xD$ there exists $R$ such that the error term of estimate (\ref{ue}) is much less than
\[\begin{array}{l}
x^{\frac{2mn}{2mn+7}+\varepsilon}D_K^{\,\frac{2m}{2mn+7}+\varepsilon}+x^{\frac{5mn+3}{5mn+24}+\varepsilon}D_K^{\,\frac{5m}{5mn+24}+\varepsilon}+x^{\frac{6mn-4}{6mn+13}+\varepsilon}D_K^{\,\frac{6m}{6mn+13}+\varepsilon}\\
+x^{\frac{(1-\alpha)mn+\alpha-1}{(1-\alpha)mn+1}+\varepsilon}D_K^{\,\frac{(1-\alpha)m}{(1-\alpha)mn+1}+\varepsilon}
+x^{\frac{15mn-17}{15mn+20}+\varepsilon}D_K^{\,\frac{3m}{3mn+4}+\varepsilon}+x^{\frac{mn-2}{mn}+\varepsilon}D_K^{\,\frac {1}{n}+\varepsilon}\\+x^{\frac{(2\alpha+1)mn-2\alpha}{(2\alpha+1)mn+5}+\varepsilon}D_K^{\,\frac{(2\alpha+1)m}{(2\alpha+1)mn+5}+\varepsilon}+x^{\frac{(\alpha+5)mn-\alpha-7}{(\alpha+5)mn+4}+\varepsilon}D_K^{\,\frac{(\alpha+5)m}{(\alpha+5)mn+4}+\varepsilon}\\
+x^{\frac{(2\alpha+9)mn-2\alpha-12}{(2\alpha+9)mn+9}+\varepsilon}D_K^{\,\frac{(2\alpha+9)m}{(2\alpha+9)mn+9}+\varepsilon}+x^{\frac{2mn-3}{2mn+1}+\varepsilon}D_K^{\,\frac{2m}{2mn+1}+\varepsilon}\\
+x^{\frac{5mn(1-\alpha)-6+3\alpha}{10mn(1-\alpha)-10}+\varepsilon}D_K^{\,\frac{m-3m\alpha}{10mn(1-\alpha)-10}+\varepsilon}+x^{\frac{16mn(1-\alpha)-19+8\alpha}{32mn(1-\alpha)-32}+\varepsilon}D_K^{\,\frac{3m-8m\alpha}{32mn(1-\alpha)-32}+\varepsilon}\\
+x^{\frac{11mn(1-\alpha)-14+9\alpha}{22mn(1-\alpha)-22}+\varepsilon}D_K^{\,\frac{3m-9m\alpha}{22mn(1-\alpha)-22}+\varepsilon}+x^{\frac{1}{2}+\varepsilon}\\
+x^{\frac{22mn(1-\alpha)-31+24\alpha}{44mn(1-\alpha)-44}+\varepsilon}D_K^{\,\frac{9m-24m\alpha}{44mn(1-\alpha)-44}+\varepsilon}+x^{\frac{mn(1-\alpha)-2+2\alpha}{2mn(1-\alpha)-2}+\varepsilon}D_K^{\,\frac{m-2m\alpha}{2mn(1-\alpha)-2}+\varepsilon}\\
+x^{\frac{4mn(1-\alpha)-6+5\alpha}{8mn(1-\alpha)-8}+\varepsilon}D_K^{\,\frac{2m-5m\alpha}{8mn(1-\alpha)-8}+\varepsilon}+x^{\frac{6mn(1-\alpha)-9+9\alpha}{12mn(1-\alpha)-12}+\varepsilon}D_K^{\,\frac{3m-9m\alpha}{12mn(1-\alpha)-12}+\varepsilon}\\
+x^{\frac{12mn(1-\alpha)-18+17\alpha}{24mn(1-\alpha)-24}+\varepsilon}D_K^{\,\frac{6m-17m\alpha}{24mn(1-\alpha)-24}+\varepsilon}+x^{\frac{2mn(1-\alpha)-3+3\alpha}{4mn(1-\alpha)-4}+\varepsilon}D_K^{\,\frac{m-3m\alpha}{4mn(1-\alpha)-4}+\varepsilon}\\
+x^{\frac{1-\alpha}{mn(1-\alpha)-1}}D_K^{-\frac {m(1-\alpha)}{mn(1-\alpha)-1}}.
\end{array}\]
When $mn\ge4$, taking $\alpha=\frac{mn+3}{7mn-5}$ we have
\[\Delta_K^m(x)=O_{n,m,\varepsilon}\left(x^{\frac{2mn-3}{2mn+1}+\varepsilon}D_K^{\,\frac{2m}{2mn+1}+\varepsilon}\right).\]
This proves the theorem.
\end{proof}
For $mn\ge4$ this theorem gives new results for the Piltz divisor problem over number fields. In particular, if we fix $K$ with $[K:\mathbf Q]=4$ then we improve the estimate for $\Delta_K(x)$ as follows:
\begin{corollary}
For any number field $K$ with $[K:\mathbf Q]=4$, \[\Delta_K(x)=O_{K,\varepsilon}\left(x^{\frac59+\varepsilon}\right).\]
\end{corollary}
This result is better than Bordell\`es' bound $O\left(x^{\frac{41}{72}+\varepsilon}\right)$, since $\frac59<\frac{41}{72}$.
\section{Application}
In this section we introduce some applications of our theorems. First we obtain a uniform estimate for the ideal counting function $I_K(x)$. From the proof of Theorem \ref{cub}, we obtain the following theorem.
\begin{theorem}
\label{cubi}
For all $\varepsilon> 0$, any fixed $0\le\beta\le\frac8{2n+5}-\varepsilon$ and any $C>0$ the following holds.
If $K$ runs through number fields with $[K:\mathbf Q]\le n$ and $D_K\le Cx^{\beta}$ then
\[\Delta_K(x)=O_{C,n,\varepsilon}\left(x^{\frac{2n-3+2\beta}{2n+1}+\varepsilon}\right).\]
\end{theorem}
The condition $D_K\le Cx^{\beta}$ comes from the relation between the principal term and the error term.
It is well known that $I_K(x)$ is very important for estimating the distribution of relatively $r$-prime lattice points. We regard an $\ell$-tuple of ideals $(\mathfrak{a}_1, \mathfrak{a}_2,\ldots,\mathfrak{a}_\ell)$ of $\mathcal{O}_K$ as a lattice point in $K^\ell$. We say that a lattice point $(\mathfrak{a}_1, \mathfrak{a}_2,\ldots,\mathfrak{a}_\ell)$ is {\it relatively $r$-prime} for a positive integer $r$ if there exists no prime ideal $\mathfrak{p}$ such that $\mathfrak{a}_1, \mathfrak{a}_2,\ldots,\mathfrak{a}_\ell\subset \mathfrak{p}^r$. Let $V^r_\ell(x,K)$ denote the number of relatively $r$-prime lattice points $(\mathfrak{a}_1, \mathfrak{a}_2,\ldots,\mathfrak{a}_\ell)$ such that each ideal norm satisfies $\mathfrak{Na}_i\le x$.
B. D. Sittinger showed that \[V^r_\ell(x,K)\sim\frac{\rho_K^\ell}{\zeta_K(r\ell)}x^\ell,\]
where $\rho_K$ is the residue of $\zeta_K$ at $s=1$ \cite{St10}.
{It is well known that
\begin{equation}
\rho_K=\frac{2^{r_1}(2\pi)^{r_2}h_KR_K}{w_K\sqrt{D_K}},\label{crho}
\end{equation}
where $h_K$ is the class number of $K$, $R_K$ is the regulator of $K$ and $w_K$ is the number of roots of unity in $\mathcal{O}^*_K$.}
We then show some results for the error term:\[E_\ell^r(x,K)=V_\ell^r(x,K)-\frac{\rho_K^\ell}{\zeta_K(r\ell)}x^\ell.\]
In \cite{Ta16} and \cite{tk17} we consider the relation between relatively $r$-prime problem and other mathematical problems.
If we assume the Lindel\"{o}f Hypothesis for $\zeta_K(s)$, then it holds that for all $\varepsilon> 0$
\begin{equation}
E_\ell^r(x,K)=\left\{
\begin{array}{ll}
O_{\varepsilon}\left(x^{\frac1r(\frac32+\varepsilon)}\right)&\text{ if } r\ell=2,\\
O_{\varepsilon}\left(x^{\ell-\frac12+\varepsilon}\right)&\text{ otherwise }
\end{array}
\right.
\end{equation}
From an easy calculation, we obtain the following corollary.
\begin{corollary}
For all $\varepsilon> 0$, any fixed $0\le\beta\le\frac8{2n+5}-\varepsilon$ and any $C>0$ the following holds.
If $K$ runs through number fields with $[K:\mathbf Q]\le n$ and $D_K\le Cx^{\beta}$, then \[E_\ell^r(x,K)=\left\{\begin{array}{ll}
O_{C, n,\varepsilon}\left(x^{\frac{4n-2}{r(2n+1)}+\frac{4}{2n+1}\beta+\varepsilon}\right)& \text{ if } r\ell=2,\\
O_{C,n,\varepsilon}\left(x^{\ell-\frac {4}{2n+1}+\frac{2n+5-(2n+1)\ell}{2(2n+1)}\beta+\varepsilon}\right)& \text{ otherwise. }
\end{array}\right.\]
\end{corollary}
For the proof of this corollary, please see the proof of Theorem 4.1 of \cite{tk17}.
\section{Conjecture}
Theorem \ref{cubi} gives good uniform upper bounds. It is proposed that for all number fields $K$ the best uniform upper bound of the error term is better than the one obtained on the assumption of the Lindel\"of Hypothesis (\ref{lindelof}).
\begin{conj}
If $K$ runs through number fields with $D_K<x$, then
\[\Delta_K^m(x)=o\left(x^{\frac12}\right).\]
\end{conj}
If $K$ runs through cubic extension fields with $D_K\le Cx^{\frac14-\varepsilon}$, then this conjecture holds by Theorem \ref{cubi}.
In view of estimate (\ref{omega}), this conjecture would give the best possible uniform upper bound for $\Delta_K^m(x)$. As we remarked above (Theorem \ref{idealhi}), this conjecture is very difficult even when $K$ is fixed and $m=1$.
TITLE: Frobenius norm and submultiplicativity
QUESTION [5 upvotes]: I read (page 8 here) that if $A$ and $B$
are rectangular matrices so that the product $AB$ is defined, then
$$(1)\quad||AB||_F^2\leq ||A||_F^2||B||_F^2$$
Does that mean that the inequality above also holds when the number of rows of $A$ is larger than the number of columns of $B$? The justification (Cauchy-Schwarz):
$$||AB||_F^2=\sum_{i=1}^n\sum_{j=1}^k(a_i^\top b_j)^2\leq \sum_{i=1}^n\sum_{j=1}^k||a_i||_2^2||b_j||^2_2=||A||_F^2||B||_F^2$$
does not require $k$ (the number of columns of $B$) to equal $n$ (the number of rows of $A$). Intuitively, you could also pad $B$ with columns of 0's, so I can believe the claim. On the other hand, in other places I only see $(1)$ claimed for matrices of the same size, and I have had a hard time finding it claimed online for the more general case (where the product $AB$ is merely defined).
REPLY [2 votes]: Indeed, the only requirement to have the inequality that you wrote for the Frobenius norm and for arbitrary matrices is that the product $AB$ is defined.
If you are looking for a reference see for example the book Numerical Linear Algebra, by Trefethen and Bau, page 23. | {"set_name": "stack_exchange", "score": 5, "question_id": 1642894} |
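For what it's worth, here is a quick numerical sanity check (my own sketch with randomly generated matrices; the shapes are arbitrary), with $A$ having more rows than $B$ has columns:

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    # A is 7x4 and B is 4x3, so AB is defined and A has more rows than B has columns
    A = rng.standard_normal((7, 4))
    B = rng.standard_normal((4, 3))
    lhs = np.linalg.norm(A @ B, "fro")
    rhs = np.linalg.norm(A, "fro") * np.linalg.norm(B, "fro")
    assert lhs <= rhs + 1e-12  # small slack for floating-point rounding
print("inequality held in all trials")
```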
TITLE: Markov chain Bernoulli
QUESTION [0 upvotes]: I am reading the book Stochastic Processes of a brilliant greek mathematician Nikolaos Skoutaris and in pages 164-165 has the following problem:
Let $X_n, n \in \mathbb{N}_0$ independent r.v following the Bernoulli distribution with parameter $p \in (0,1)$, i.e $$\mathbb{P}(X_n=1)=p ,\mathbb{P}(X_n=0)=1-p.$$
We have a process $\{W_n\}$ where $W_n =
\begin{cases}
0, & \text{if}\quad X_n=X_{n-1}=1\\
1, & \text{otherwise}. \\
\end{cases} $
Is the process a Markov Chain ?
The writer says that the chain $\{W_n\}$ has state space $\{0,1\}$ and is not Markovian because
$$\mathbb{P}(W_n=0|W_{n-1}=1,W_{n-2}=0)= \frac{\mathbb{P}(W_n=0,W_{n-1}=1,W_{n-2}=0)}{\mathbb{P}(W_{n-1}=1,W_{n-2}=0)}=0$$
I understand that he uses the independence of the r.v.s to calculate the conditional probability, but how did he find that $\mathbb{P}(W_n=0,W_{n-1}=1,W_{n-2}=0)=0$?
He continues by writing that if $W_n=W_{n-2}=0$, then $X_{n-1}=X_{n-2}=1$ and so $W_{n-1}=0$. On the other hand:
\begin{align*}
\mathbb{P}(W_n=0|W_{n-1}=1) &= \frac{\mathbb{P}(W_n=0,W_{n-1}=1)}{\mathbb{P}(W_{n-1}=1)} \\
&= \frac{\mathbb{P}(X_n=X_{n-1}=1,X_{n-2}=0)}{1- \mathbb{P}(X_{n-1}=X_{n-2}=1)} \quad\quad \text{(1)}\\
&= \frac{\mathbb{P}(X_n=1) \mathbb{P}(X_{n-1}=1)\mathbb{P}(X_{n-2}=0)}{1- \mathbb{P}(X_{n-1}=1)\mathbb{P}(X_{n-2}=1)}\\
&= \frac{p^2(1-p)}{1-p^2} \\
&= \frac{p^2}{1+p} \\
& \neq 0
\end{align*}
I have two questions :
How did he find that $\mathbb{P}(W_n=0,W_{n-1}=1,W_{n-2}=0)=0$?
Why can he rewrite the denominator as $1-\mathbb{P}(X_{n-1}=X_{n-2}=1)$ in (1)?
REPLY [1 votes]: Note that the sequence $W_{n-2}=0, W_{n-1}=1$ implies that $X_{n-3}=1, X_{n-2}=1, X_{n-1}=0$. As such, it is impossible that $W_n=0$, because that would require $X_{n}=X_{n-1}=1$. Hence, $\mathbb{P}(W_n=0, W_{n-1}=1, W_{n-2}=0)=0$.
$\mathbb{P}(W_{n-1}=1)=1-\mathbb{P}(W_{n-1}=0)$, and by definition of $W_{n-1}$, you have $\mathbb{P}(W_{n-1}=0)=\mathbb{P}(X_{n-1}=X_{n-2}=1)$. So $\mathbb{P}(W_{n-1}=1)= 1- \mathbb{P}(X_{n-1}=X_{n-2}=1)$.
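If it helps, here is a small Monte Carlo check (my own sketch): the estimated value of $\mathbb{P}(W_n=0\mid W_{n-1}=1)$ matches $p^2/(1+p)$, and the pattern $W_{n-2}=0, W_{n-1}=1, W_n=0$ never occurs along a sample path.

```python
import numpy as np

rng = np.random.default_rng(1)
p, N = 0.3, 10**6
X = (rng.random(N) < p).astype(int)   # i.i.d. Bernoulli(p)
W = 1 - (X[1:] & X[:-1])              # W[k] = 0 iff two consecutive X's equal 1

# estimate P(W_n = 0 | W_{n-1} = 1) and compare with p^2/(1+p)
prev_is_1 = W[:-1] == 1
print((W[1:][prev_is_1] == 0).mean(), "vs", p**2 / (1 + p))

# the sequence W_{n-2}=0, W_{n-1}=1, W_n=0 is impossible, so the count is 0
pattern = (W[:-2] == 0) & (W[1:-1] == 1) & (W[2:] == 0)
print("occurrences of the impossible pattern:", pattern.sum())
```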
\section{Summary \& conclusions}
\label{sec:conclusions}
Active subspace methods enable response surface approximations of a multivariate function on a low-dimensional subspace of the domain. We have analyzed a sequence of approximations that exploits the active subspace: a best approximation via conditional expectation, a Monte Carlo approximation of the best approximation, and a response surface trained with a few Monte Carlo estimates. We have used these analyses to motivate a computational procedure for detecting the directions defining the subspace and constructing a kriging surface on the subspace. We have applied this procedure to an elliptic PDE problem with a random field model for the coefficients. We compared the active subspace method with an approach based on the local sensitivity analysis and showed the superior performance of the active subspace method.
Loosely speaking, active subspace methods are appropriate for certain classes of functions that vary primarily in low-dimensional subspaces of the input. If there is no decay in the eigenvalues of $\mC$, then the methods will perform poorly; constructing such functions is not difficult. However, we have found many high-dimensional applications in practice where the eigenvalues do decay quickly, and the functions respond well to active subspace methods~\cite{Chen2011,Dow2013,Constantine11c,Sensitive12}. Most of those applications look similar to the one presented in Section \ref{sec:example}, where uncertainty in some spatially varying physical input can be represented by a series expansion, and the coefficients of the expansion are treated as random variables; such models arise frequently in UQ.
The computational method we have proposed is ripe for improvements and extensions. We have mentioned many such possibilities in Section \ref{sec:steps}, and we are particularly interested in methods for using fewer evaluations of the gradient to compute the directions defining the active subspace. We will also pursue strategies that make better use of the function evaluations acquired during the gradient sampling. | {"config": "arxiv", "file": "1304.2070/sec5-conclusion.tex"} |
\begin{document}
\maketitle
\setcounter{secnumdepth}{2}
\setcounter{tocdepth}{1}
\addtolength{\parskip}{0.5ex}
\begin{abstract}
Consider a singular curve $\Gamma$ contained in a smooth $3$-fold $X$. Assuming the general elephant conjecture, the general hypersurface section $\Gamma\subset S\subset X$ is Du Val. Under that assumption, this paper describes the construction of a divisorial extraction from $\Gamma$ by Kustin--Miller unprojection. Terminal extractions from $\Gamma\subset X$ are proved not to exist if $S$ is of type $D_{2k}, E_7$ or $E_8$ and are classified if $S$ is of type $A_1,A_2$ or $E_6$. The $A_n$ and $D_{2k+1}$ cases are considered in a further paper.
\end{abstract}
\tableofcontents
\section*{Introduction}
Much of the birational geometry of terminal $3$-folds has been classified explicitly. For example there is a classification of terminal $3$-fold singularities by Mori and Reid \cite{ypg}, a classification of exceptional $3$-fold flips by Koll\'ar and Mori \cite{km92} and, more recently, some work done by Hacking, Tevelev and Urz\'ua \cite{htu} and Brown and Reid \cite{dip} to describe type $A$ flips. In the case of divisorial contractions much is known, but there is not currently a complete classification.
The focus of this paper is the case of a singular curve $\Gamma$ contained in a smooth $3$-fold $X$. Assuming the general elephant conjecture, the general hypersurface section $\Gamma\subset S\subset X$ is Du Val. Under this assumption, \S2 gives a normal form for the equations of $\Gamma$ in $X$ along with an outline for the construction of a (specific) divisorial extraction from $\Gamma$. In \S\S3-4 cases are studied explicitly by considering the type of Du Val singularity of $S$. I prove that if the general hypersurface through $\Gamma$ is a type $D_{2k}$ or $E_7$ singularity then a terminal divisorial extraction from $\Gamma$ does not exist. If it is a type $E_6$ singularity then there is an explicit description of the curves $\Gamma$ for which a terminal divisorial extraction exists.
As well as treating the $A_1$ and $A_2$ cases, the main result in this paper is the following:
\begin{thm}
Suppose that $P\in \Gamma\subset X$ is the germ of a non-lci curve singularity $P\in\Gamma$ inside a smooth $3$-fold $P\in X$. Suppose moreover that the general hypersurface section $\Gamma\subset S\subset X$ is Du Val. Then,
\begin{enumerate}
\item if $S$ is of type $D_{2k}$ or $E_7$ then a terminal extraction from $\Gamma\subset X$ does not exist,
\item if $S$ is of type $E_6$ then a terminal extraction exists only if $\Gamma\subset S$ is a curve whose birational transform in the minimal resolution of $S$ intersects the exceptional locus with multiplicity given by either\begin{center} \begin{tikzpicture}
\node at (1,1) {$\bullet$};
\node at (2,1) {$\bullet$};
\node[label={[label distance=-0.2cm]80:\tiny{$1$}}] at (3,1) {$\bullet$};
\node at (4,1) {$\bullet$};
\node[label={[label distance=-0.2cm]90:\tiny{$2$}}] at (5,1) {$\bullet$};
\node at (3,2) {$\bullet$};
\draw
(1,1)--(2.925,1)
(3.075,1)--(4.925,1)
(3,1.075)--(3,2);
\node at (6,1.5) {or};
\node at (7,1) {$\bullet$};
\node at (8,1) {$\bullet$};
\node[label={[label distance=-0.2cm]80:\tiny{$1$}}] at (9,1) {$\bullet$};
\node[label={[label distance=-0.2cm]90:\tiny{$1$}}] at (10,1) {$\bullet$};
\node at (11,1) {$\bullet$};
\node at (9,2) {$\bullet$};
\draw
(7,1)--(8.925,1)
(9.075,1)--(9.925,1)
(10.075,1)--(10.925,1)
(9,1.075)--(9,2);
\end{tikzpicture} \end{center}
(an unlabelled node means multiplicity zero).
\end{enumerate}
\end{thm}
A more precise statement is given in Theorem \ref{excthm}.
\subsubsection{Acknowledgements}\
I would like to thank my PhD supervisor Miles Reid for suggesting this problem to me and for the help and guidance he has given me.
Part of this research was done whilst visiting Japan. I would like to thank Professor Yujiro Kawamata for inviting me and Stavros Papadakis for some helpful discussions whilst I was there.
\section{Preliminaries}
\subsection{Du Val singularities} \
The Du Val singularities are a very famous class of surface singularities. They can be defined in many different ways, a few of which are given here.
\begin{defn}
Let $P\in S$ be the germ of a surface singularity. Then $P\in S$ is a \emph{Du Val singularity} if it is given, up to isomorphism, by one of the following equivalent conditions:
\begin{enumerate}
\item a hypersurface singularity $0\in V(f)\subset \Aa^3$, where $f$ is one of the equations of Table \ref{dvtab}, given by an ADE classification.
\item a quotient singularity $0\in \CC^2/G=\text{Spec }\CC[u,v]^G$, where $G$ is a finite subgroup of $\text{SL}(2,\CC)$.
\item a rational double point, i.e.\ the minimal resolution
\[\mu \colon (E\subset \widetilde{S}) \to (P\in S)\]
has exceptional locus $E=\bigcup E_i$ a tree of $-2$-curves with intersection graph given by the corresponding ADE Dynkin diagram.
\item a canonical surface singularity. As $P\in S$ is a surface singularity this is equivalent to having a crepant resolution, i.e.\ $K_{\widetilde{S}}=\mu^*K_S$.
\item a simple hypersurface singularity, i.e.\ $0\in V(f)\subset \Aa^3$ such that there exist only finitely many ideals $I\subset\frakm$ with $f\in I^2$ (where $\frakm$ is the ideal of $0\in\Aa^3$).
\end{enumerate}
\label{dvdef}
\end{defn}
See for example \cite{duval} for details of (1)-(4) and \cite{yosh} for details of (5).
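For example, taking $G=\{\pm1\}\subset \text{SL}(2,\CC)$ in (2), the ring of invariants $\CC[u,v]^G$ is generated by $x=u^2$, $y=v^2$ and $z=uv$, which satisfy the single relation $xy=z^2$; after a linear change of coordinates this becomes the $A_1$ equation $x^2+y^2+z^2$ of Table \ref{dvtab}.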
\begin{table}[htdp]
\caption{Types of Du Val singularities}
\label{dvtab}
\begin{center}
\begin{tabular}{cccc}
Type & Group $G$ & Equation $f$ & Dynkin diagram \\ \hline
$A_n$ & cyclic $\tfrac1r(1,-1)$ & $x^2 + y^2 + z^{n+1}$ &
\begin{tikzpicture}[scale=0.8]
\node[label={[label distance=-0.2cm]90:\tiny{1}}] at (1,1) {$\bullet$};
\node[label={[label distance=-0.2cm]90:\tiny{1}}] at (2,1) {$\bullet$};
\node at (3,1) {$\cdots$};
\node[label={[label distance=-0.2cm]90:\tiny{1}}] at (4,1) {$\bullet$};
\node[label={[label distance=-0.2cm]90:\tiny{1}}] at (5,1) {$\bullet$};
\node at (4,2) { };
\node[below] at (3,1) {\tiny{($n$ nodes)}};
\draw
(1,1)--(2.5,1);
\draw
(3.5,1)--(5,1);
\end{tikzpicture} \\
$D_n$ & binary dihedral & $x^2 + y^2z + z^{n-1}$ &
\begin{tikzpicture}[scale=0.8]
\node[label={[label distance=-0.2cm]90:\tiny{1}}] at (1,1) {$\bullet$};
\node[label={[label distance=-0.2cm]90:\tiny{2}}] at (2,1) {$\bullet$};
\node at (3,1) {$\cdots$};
\node[label={[label distance=-0.2cm]90:\tiny{2}}] at (4,1) {$\bullet$};
\node[label={[label distance=-0.2cm]90:\tiny{1}}] at (5,1.5) {$\bullet$};
\node[label={[label distance=-0.2cm]90:\tiny{1}}] at (5,0.5) {$\bullet$};
\node[below] at (3,1) {\tiny{($n$ nodes)}};
\draw
(1,1)--(2.5,1);
\draw
(3.5,1)--(4,1);
\draw
(4,1)--(5,1.5);
\draw
(4,1)--(5,0.5);
\end{tikzpicture} \\
$E_6$ & binary tetrahedral & $x^2 + y^3 + z^4$ &
\begin{tikzpicture}[scale=0.8]
\node[label={[label distance=-0.2cm]90:\tiny{1}}] at (1,1) {$\bullet$};
\node[label={[label distance=-0.2cm]90:\tiny{2}}] at (2,1) {$\bullet$};
\node[label={[label distance=-0.2cm]80:\tiny{3}}] at (3,1) {$\bullet$};
\node[label={[label distance=-0.2cm]90:\tiny{2}}] at (4,1) {$\bullet$};
\node[label={[label distance=-0.2cm]90:\tiny{1}}] at (5,1) {$\bullet$};
\node[label={[label distance=-0.2cm]90:\tiny{2}}] at (3,2) {$\bullet$};
\draw
(1,1)--(5,1);
\draw
(3,1)--(3,2);
\end{tikzpicture} \\
$E_7$ & binary octahedral & $x^2 + y^3 + yz^3$ &
\begin{tikzpicture}[scale=0.8]
\node[label={[label distance=-0.2cm]90:\tiny{1}}] at (1,1) {$\bullet$};
\node[label={[label distance=-0.2cm]90:\tiny{2}}] at (2,1) {$\bullet$};
\node[label={[label distance=-0.2cm]90:\tiny{3}}] at (3,1) {$\bullet$};
\node[label={[label distance=-0.2cm]80:\tiny{4}}] at (4,1) {$\bullet$};
\node[label={[label distance=-0.2cm]90:\tiny{3}}] at (5,1) {$\bullet$};
\node[label={[label distance=-0.2cm]90:\tiny{2}}] at (6,1) {$\bullet$};
\node[label={[label distance=-0.2cm]90:\tiny{2}}] at (4,2) {$\bullet$};
\draw
(1,1)--(6,1);
\draw
(4,1)--(4,2);
\end{tikzpicture} \\
$E_8$ & binary icosahedral & $x^2 + y^3 + z^5$ &
\begin{tikzpicture}[scale=0.8]
\node[label={[label distance=-0.2cm]90:\tiny{2}}] at (1,1) {$\bullet$};
\node[label={[label distance=-0.2cm]90:\tiny{3}}] at (2,1) {$\bullet$};
\node[label={[label distance=-0.2cm]90:\tiny{4}}] at (3,1) {$\bullet$};
\node[label={[label distance=-0.2cm]90:\tiny{5}}] at (4,1) {$\bullet$};
\node[label={[label distance=-0.2cm]80:\tiny{6}}] at (5,1) {$\bullet$};
\node[label={[label distance=-0.2cm]90:\tiny{4}}] at (6,1) {$\bullet$};
\node[label={[label distance=-0.2cm]90:\tiny{2}}] at (7,1) {$\bullet$};
\node[label={[label distance=-0.2cm]90:\tiny{3}}] at (5,2) {$\bullet$};
\draw
(1,1)--(7,1);
\draw
(5,1)--(5,2);
\end{tikzpicture} \\
\end{tabular}
\end{center}
\end{table}
The numbers decorating the nodes of the Dynkin diagrams in Table \ref{dvtab} have several interpretations. For example, each node corresponds to the isomorphism class of a nontrivial irreducible representation of $G$ with dimension equal to the label. Another way these numbers arise is as the multiplicities of the $E_i$ in the \emph{fundamental cycle} $\Sigma\subset\widetilde{S}$ (that is, the unique minimal effective $1$-cycle such that $\Sigma \cdot E_i \leq 0$ for every component $E_i$ of $E$).
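For example, in the $A_n$ case the fundamental cycle is $\Sigma=E_1+\cdots+E_n$: each interior component satisfies $\Sigma\cdot E_i=-2+1+1=0$ and the two end components satisfy $\Sigma\cdot E_i=-2+1=-1$, while any smaller effective cycle would have some component with coefficient $0$ adjacent to one with positive coefficient, and hence positive intersection with that component. This agrees with the labels on the $A_n$ diagram in Table \ref{dvtab}.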
\subsection{Terminal $3$-fold singularities} \
One of the most useful lists at our disposal is Mori's list of $3$-fold terminal singularities. (See \cite{ypg} for a nice introduction.) The singular locus of a terminal variety always has codimension $\geq3$, so in the $3$-fold case terminal singularities are all isolated points $P\in X$. They are classified by their index (the least $r\in\ZZ_{>0}$ such that $rD$ is Cartier, for any Weil divisor $D$ through $P\in X$).
As shown by Reid, the index 1, or Gorenstein, singularities are \emph{compound Du Val} (cDV) singularities, i.e.\ isolated hypersurface singularities of the form
\[ \big(f(x,y,z) + tg(x,y,z,t) = 0\big) \subset \Aa^4_{x,y,z,t} \]
where $f$ is the equation of a Du Val singularity.
The other cases are the non-Gorenstein singularities. They can be described as cyclic quotients of cDV points by using the index to form a covering. For example, a singularity of type $cA/r$ denotes the quotient of a type $cA$ singularity
\[ \big( xy + f(z^r, t) = 0 \big) \subset \Aa^4_{x,y,z,t} \: / \: \tfrac1r(a,r-a,1,0) \]
where $\tfrac1r(a,r-a,1,0)$ denotes the $\ZZ/r\ZZ$ group action $(x,y,z,t) \mapsto (\epsilon^ax,\epsilon^{r-a}y,\epsilon z,t)$, for a primitive $r$th root of unity $\epsilon$. The general elephant of this singularity is given by an $r$-to-1 covering $A_{n-1}\to A_{rn-1}$. A full list can be found in \cite{km92}, p.\ 541.
\subsection{Divisorial contractions} \
\begin{defn}
A projective birational morphism $\sigma\colon Y\to X$ is called a \emph{divisorial contraction} if
\begin{enumerate}
\item $X$ and $Y$ are quasiprojective $\QQ$-factorial (analytic or) algebraic varieties,
\item there exists a unique prime divisor $E\subset Y$ such that $\Gamma=\sigma(E)$ has $\text{codim}_X\Gamma\geq2$,
\item $\sigma$ is an isomorphism outside of $E$,
\item $-K_Y$ is $\sigma$-ample and the relative Picard number is $\rho({Y/X})=1$.
\end{enumerate}
Given the curve $\Gamma\subset X$, we will also call any such $\sigma\colon Y\to X$ a \emph{divisorial extraction} of $\Gamma$. Moreover, if both $X$ and $Y$ have terminal singularities (so that this map belongs in the Mori category of terminal $3$-folds) then we call $\sigma$ a \emph{Mori contraction/extraction}.
\end{defn}
Since the question of classifying divisorial contractions is local on $X$ we assume that $\sigma$ is a \emph{divisorial neighbourhood}, i.e.\ a map of $3$-fold germs
\[ \sigma \colon (Z\subset E\subset Y) \to (P\in\Gamma\subset X) \]
where $Z=\sigma^{-1}(P)$ is a (not necessarily irreducible) reduced complete curve. In practice $X$ is the germ of an affine variety over $\CC$ and it is assumed that we can make any analytic change of variables that needs to take place. In particular, as we are primarily interested in this paper with the case where $X$ is smooth, we can implicitly assume $(P\in X)\cong (0\in \Aa^3)$.
\subsubsection{Known results.}\
For $3$-folds, divisorial contractions fall into two cases:
\begin{enumerate}
\item $P=\Gamma$ is a point,
\item $P\in \Gamma$ is a curve.
\end{enumerate}
The first case has been studied intensively and is completely classified if $P\in X$ is a non-Gorenstein singularity. This follows from the work of a number of people---Corti, Kawakita, Hayakawa and Kawamata amongst others.
In either case, Mori and Cutkosky classify Mori contractions when $Y$ is Gorenstein. In particular, Cutkosky's result for a curve $\Gamma$ is the following.
\begin{thm}[Cutkosky \cite{cut}]\label{cutthm}
Suppose $\sigma\colon (E\subset Y)\to (\Gamma\in X)$ is a Mori contraction where $Y$ has at worst Gorenstein (i.e.\ index 1) singularities and $\Gamma$ is a curve. Then
\begin{enumerate}
\item $\Gamma$ is a reduced, irreducible, local complete intersection curve in $X$,
\item $X$ is smooth along $\Gamma$,
\item $\sigma$ is isomorphic to the blowup of the ideal sheaf $\sI_{\Gamma/X}$,
\item $Y$ only has $cA$ type singularities and
\item a general hypersurface section $\Gamma\subset S$ is smooth.
\end{enumerate}
\end{thm}
Kawamata \cite{kawam} classifies the case when the point $P\in X$ is a terminal cyclic quotient singularity. In this case, there is a unique divisorial extraction given by a weighted blowup of the point $\Gamma=P$. In particular, if there exists a Mori extraction to a curve $\Gamma\subset X$, then $\Gamma$ cannot pass through any cyclic quotient point on $X$.
Tziolas \cite{tz1,tz2,tz3,tz4} classifies terminal extractions when $P\in \Gamma\subset X$ is a smooth curve passing through a cDV point.
\subsection{The general elephant conjecture} \
\label{geconj}
Reid's general elephant conjecture states that, given a terminal contraction
\[ \sigma\colon (E\subset Y) \to (\Gamma \subset X), \]
then general anticanonical sections $T_Y \in|{-K}_Y|$ and $T_X=\sigma(T_Y) \in|{-K}_X|$ should have at worst Du Val singularities. Moreover, $\sigma\colon T_Y\to T_X$ should be a partial crepant resolution at $P\in T_X$.
This is proved by Koll\'ar and Mori \cite{km92} for \emph{extremal} neighbourhoods (i.e.\ ones where the central fibre $Z$ is irreducible). In almost all the examples constructed in this paper, $Z$ is reducible.
Note that the existence of a Du Val general elephant implies that the general hypersurface section $\Gamma\subset S\subset X$ is also Du Val. The construction of the divisorial extraction $\sigma\colon Y\to X$ (i.e.\ the equations and singularities of $Y$) depends upon the general section $S$ rather than the anticanonical section $T_X$. Therefore, through out this paper assume we are in the following local situation
\[P \in \Gamma \subset S \subset X \]
where $P\in\Gamma$ is a (non-lci) curve singularity, $P\in S$ is a general Du Val section and $(P\in X)\cong (0\in\Aa^3)$ is smooth.
\subsection{Uniqueness of contractions} \
The following Proposition appears in \cite{tz1} Proposition 1.2.
\begin{prop}
\label{uniq}
Suppose that $\sigma\colon Y\to X$ is a divisorial contraction that contracts a divisor $E$ to a curve $\Gamma$, that $X$ and $Y$ are normal and that $X$ has isolated singularities. Suppose further that $\sigma$ is the blowup over the generic point of $\Gamma$ in $X$ and that $-E$ is $\sigma$-ample. Then $\sigma\colon Y\to X$ are uniquely determined and isomorphic to
\[ \Bl_\Gamma\colon\Proj_X \bigoplus_{n\geq 0} \sI^{[n]} \to X \]
where $\sI^{[n]}$ is the $n$th symbolic power of the ideal sheaf $\sI=\sI_{\Gamma/X}$, i.e.\ the blowup of the symbolic power algebra of $\sI$.
\end{prop}
\begin{proof}
Pick a relatively ample Cartier divisor class $D$ on $Y$ which must be a rational multiple of $\sO_Y(-E)$. Then
\[ Y = \Proj_X R(Y,D) \]
and, up to truncation, this is the ring $R(Y,-E)$.
Now the result follows from the claim that $\sigma_*\sO_Y(-nE)$ is the $n$th symbolic power of $\sI$. This is clear at the generic point of $\Gamma$, since we assume it is just the blowup there. Now $\sigma_*\sO_Y = \sO_X$ is normal, and $\sO_Y(-nE)\subset \sO_Y$ is the ideal of functions vanishing $n$ times on $E$ outside of $\sigma^{-1}( P)$. So $\sO_X/\sigma_*\sO_Y(-nE)$ has no associated primes other than $\Gamma$ and this proves the claim.
\end{proof}
\begin{rmk}
Suppose $\sigma\colon Y\to X$ is a terminal divisorial contraction. By Mori's result, $Y$ is the blowup over the generic point of $\Gamma$ and we are in the setting of the theorem. Therefore a terminal contraction is unique if it exists, although there may be many more canonical contractions to the same curve.
\end{rmk}
From this result, it is also easy to see that Cutkosky's result, Theorem \ref{cutthm}, holds for divisorial extractions, as well as contractions.
\begin{lem}
Suppose that $\Gamma$ is a local complete intersection curve in a $3$-fold $X$ and that $X$ is smooth along $\Gamma$. Then a Mori extraction exists iff\/ $\Gamma$ is reduced, irreducible and a general hypersurface section $\Gamma\subset S$ is smooth.
\end{lem}
\begin{proof}
By Proposition \ref{uniq}, if a Mori extraction $\sigma\colon Y\to X$ exists then $\sigma$ is isomorphic to the blowup of the ideal sheaf $\sI_{\Gamma/X}$. As $\Gamma$ is lci then, locally at a point in $P\in\Gamma\subset X$, we have that $\Gamma$ is defined by two equations $f,g$. Hence $Y$ is given by
\[ Y = \{ f\eta - g\xi = 0 \} \subset X\times \PP^1_{(\eta:\xi)} \to X \]
If both $f,g\in\frakm^2$ then at any point $Q$ along the central fibre $Z=\sigma^{-1}(P)_{\text{red}}$ the equation defining $Y$ is contained in $\frakm_Q^2$. Therefore $Y$ is singular along $Z$ and hence not terminal. So at least one of $f,g$ is the equation of a smooth hypersurface, say $f\in\frakm\setminus \frakm^2$. Now $Y$ is smooth along $Z$ except for a possible $cA$ type singularity at the point $P_\xi\in Y$, where all variables except $\xi$ vanish.
\end{proof}
\subsection{Unprojection}\label{unproj} \
In this paper, divisorial contractions are constructed by Kustin--Miller unprojection. The general philosophy of unprojection is to start working explicitly with Gorenstein rings in low codimension, successively adjoining new variables with new equations. For more details on Type I unprojection and Tom \& Jerry, see e.g.\ \cite{par,bkr,kino}.
All unprojections appearing in this paper are Gorenstein Type I unprojections. A point $Q\in Y$ on a $3$-fold is called a \emph{Type I centre} if we can factor the projection map $Q\in Y\dashrightarrow \Pi\subset Y'$ as
\begin{center}
\begin{tikzpicture}
\matrix (m) [matrix of math nodes,row sep=1em,column sep=2em,minimum width=2em]
{
& E\subset Z & \\
Q\in Y & & \Pi\subset Y' \\};
\path[->] (m-1-2) edge node [above] {$\phi$} (m-2-1);
\path[->] (m-1-2) edge node [above] {$\psi$} (m-2-3);
\end{tikzpicture}
\end{center}
where $\phi$ is a divisorial extraction from the point $Q\in Y$ with exceptional divisor $E\subset Z$, $\psi$ is a small birational anticanonical morphism mapping $E$ birationally to a divisor $\Pi\subset Y'$ such that both $Y'$ and $\Pi$ are projectively Gorenstein. The map $Y'\dashrightarrow Y$ is called the \emph{unprojection map}. The point is that under these conditions the Kustin--Miller unprojection of the divisor $\Pi\subset Y'$ (described in \cite{par} Theorem 1.5) reconstructs $Y$, so that $Y$ can be obtained by adjoining just one new variable $u$ to the graded ring defining $Y'$, with a systematic way of obtaining the equations involving $u$ (in practice it is usually easy to work them out by ad hoc methods).
Another important idea appearing in these calculations is the structure of Gorenstein rings in codimension 3. By a theorem of Buchsbaum and Eisenbud, the equations of such a ring can be written as the maximal Pfaffians of a skew-symmetric $(2k+1)\times (2k+1)$ matrix. In practice we can usually always take $5\times 5$ matrices.
Now suppose that $Y'$ is a $3$-fold in codimension 3, given by the maximal Pfaffians of a $5\times 5$ skew matrix $M$. \emph{Tom \& Jerry} are the names of two different restrictions on $M$ that are necessary for $Y'$ to contain a plane $\Pi$, defined by an ideal $I$. These are:
\begin{enumerate}
\item Tom$_i$---all entries of $M$ except the $i$th row and column belong to $I$,
\item Jer$_{ij}$---all entries of $M$ in the $i$th and $j$th rows and columns belong to $I$.
\end{enumerate}
The easiest way to understand all of this is to work through the example given in \S\ref{prokreid}, with a geometrical explanation given in Remark \ref{geom}.
\section{Curves in Du Val Singularities}
Let $\Gamma$ be a reduced and irreducible curve passing through a Du Val singularity $(P\in S)$. Consider $S$ as simultaneously being both the hypersurface singularity $0\in V(f)\subset \Aa^3$, as in Definition \ref{dvdef}(1), and the group quotient $\pi\colon\CC^2\to\CC^2/G$, as in Definition \ref{dvdef}(2). Write $S=\text{Spec }\sO_S$ where
\[ \sO_S = \sO_X/(f)=(\sO_{\CC^2})^G, \quad \sO_X=\CC[x,y,z], \quad \sO_{\CC^2}=\CC[u,v]. \]
The aim of this section is to describe the equations of $\Gamma\subset X\cong \Aa^3$ in terms of some data associated to the equation $f$ and the group $G$.
\subsection{A 1-dimensional representation of $G$} \
Consider $C := \pi^{-1} (\Gamma) \subset \CC^2$, the preimage of $\Gamma$ under the quotient map $\pi$. Then $C$ is a reduced (but possibly reducible) $G$-invariant curve giving a diagram
\begin{center}
\begin{tikzpicture}
\matrix (m) [matrix of math nodes,row sep=1em,column sep=2em,minimum width=2em]
{
C & \CC^2_{u,v} \\
\Gamma & S \\};
\path[->] (m-1-1) edge (m-2-1);
\path[->] (m-1-2) edge node [right] {$\pi$} (m-2-2);
\path[right hook->] (m-1-1) edge (m-1-2);
\path[right hook->] (m-2-1) edge (m-2-2);
\node [left=4pt] at (m-2-1) {$P\in$};
\end{tikzpicture}
\end{center}
As such, $C$ is defined by a single equation $V(\gamma)\subset\CC^2$ and this $\gamma(u,v)$ is called the \emph{orbifold equation} of $\Gamma$. As $C$ is $G$-invariant the equation $\gamma$ must be $G$-semi-invariant, so there is a 1-dimensional representation $\rho\colon G\to\CC^\times$ such that
\[ {^g\gamma}(u,v) = \rho(g) \gamma(u,v), \quad\quad \forall g\in G. \]
Moreover, $\Gamma$ is a Cartier divisor (and hence lci in $X$) if and only if $\rho$ is the trivial representation. Let us restrict attention to nontrivial $\rho$.
As can be seen from Table \ref{dvtab}, there are $n$ such representations if $S$ is type $A_n$, three if type $D_n$, two if type $E_6$, one if type $E_7$ and none if type $E_8$. These possibilities are listed later on in Table \ref{mftab}.
\subsection{A matrix factorisation of $f$} \
As is well known from the McKay correspondence, the ring $\sO_{\CC^2}$ has a canonical decomposition as a direct sum of $\sO_S$-modules
\[ \sO_{\CC^2} = \bigoplus_{\rho\in \text{Irr}(G)} M_\rho \]
where $M_\rho = V_\rho \otimes \text{Hom}(V_\rho, \sO_{\CC^2})^G$ and $\text{Irr}(G)$ is the set of irreducible $G$-representations $\rho\colon G\to\text{GL}(V_\rho)$. In particular if $\dim\rho=1$ then we see that $M_\rho$ is the unique irreducible summand of $\sO_{\CC^2}$ of $\rho$ semi-invariants
\[ M_\rho=\big\{h(u,v)\in\sO_{\CC^2} : {^gh}=\rho(g)h \big\}. \]
This is a rank 1 maximal Cohen-Macaulay $\sO_S$-module generated by two elements at $P$.
As shown by Eisenbud \cite{eis}, such a module over the ring of a hypersurface singularity has a minimal free resolution which is 2-periodic, i.e.\ there is a resolution
\[ \begin{matrix}
M_\rho & \leftarrow & \sO_S^{\oplus2} & \stackrel{\phi}{\longleftarrow} & \sO_S^{\oplus2} & \stackrel{\psi}{\longleftarrow} & \sO_S^{\oplus2} & \stackrel{\phi}{\longleftarrow} & \cdots
\end{matrix} \]
where $\phi$ and $\psi$ are matrices over $\sO_X$ satisfying
\[ \phi\psi = \psi\phi = fI_2.\]
The pair of matrices $(\phi,\psi)$ is called a \emph{matrix factorisation} of $f$. In our case $\phi$ and $\psi$ are $2\times 2$ matrices. It is easy to see that $\det\phi=\det\psi=f$ and that $\psi$ is the adjugate matrix of $\phi$. Write $I(\phi)$ for the ideal of $\sO_X$ generated by the entries of $\phi$ (or equivalently $\psi$).
Write $\epsilon_k$ (resp.\ $\omega,i$) for a primitive $k$th (resp.\ 3rd, 4th) root of unity. In Table \ref{mftab} the possible representations $\rho$ of $G$ and the first matrix $\phi$ in a matrix factorisation of $M_\rho$, for some choice of $f$, are listed. These can be found, for instance, in \cite{kst} \S5.
\begin{table}[htdp]
\caption{1-dimensional representations of $G$}
\label{mftab}
\begin{center}
\begin{tabular}{cccc}
Type & Presentation of $G$ & $\rho(r ), \big(\rho(s),\rho(t)\big)$ & $\phi$ \\ \hline \\
$\bA_{n}^j$ & $\left\langle r : r^{n+1}=e \right\rangle$ & $\epsilon_{n+1}^j$ & $\begin{pmatrix} x & y^j \\ y^{n+1-j} & z \end{pmatrix}$ \\ \\
$\bD_{n}^l$ & $\left\langle \begin{matrix} r,s,t : \\ r^{n-2}=s^2=t^2=rst \end{matrix} \right\rangle$ & $1,-1,-1$ & $\begin{pmatrix} x & y^2 + z^{n-2} \\ z & x\end{pmatrix}$ \\ \\
$\bD_{2k}^r$ & & $-1,1,-1$ & $\begin{pmatrix} x & yz + z^k \\ y & x\end{pmatrix}$ \\ \\
$\bD_{2k+1}^r$ & & $-1,i,-i$ & $\begin{pmatrix} x & yz \\ y & x+z^k \end{pmatrix}$ \\ \\
$\bE_6$ & $\left\langle \begin{matrix} r,s,t : \\ r^2=s^3=t^3=rst \end{matrix} \right\rangle$ & $1,\omega,\omega^2$ & $\begin{pmatrix} x & y^2 \\ y & x + z^2 \end{pmatrix}$ \\ \\
$\bE_7$ & $\left\langle \begin{matrix} r,s,t : \\ r^2=s^3=t^4=rst \end{matrix} \right\rangle$ & $-1,1,-1$ & $\begin{pmatrix} x & y^2+z^3 \\ y & x \end{pmatrix}$
\end{tabular}
\end{center}
\end{table}
The notation $\bD_n^l$ refers to the case when $\rho$ is the 1-dimensional representation corresponding to the leftmost node in the $D_n$ Dynkin diagram (see Table \ref{dvtab}) and $\bD_n^r$ refers to one of the rightmost pair of nodes. Of course there are are actually two choices of representation we could take for each of the cases $\bD_{2k}^r,\bD_{2k+1}^r$ and $\bE_6$. However we treat each of them as only one case since there is an obvious symmetry of $S$ switching the two types of curve. Similarly for $\bA_n^j$ we can assume that $j\leq\tfrac{n+1}2$.
\subsection{Normal forms for $\Gamma\subset X$}\
\begin{lem} Suppose that we are given $P\in\Gamma\subset S\subset X$ as in \S\ref{geconj}. Let $\rho$ and $\phi$ be the representation of $G$ and matrix factorisation of $f$ associated to $\Gamma$. Then
\begin{enumerate}
\item the equations of $\Gamma\subset X$ are given by the minors of a $2\times 3$ matrix
\[ \bigwedge^2 \begin{pmatrix} \: \phi & \begin{matrix} g \\ h \end{matrix} \end{pmatrix} = 0 \]
for some functions $g,h\in\sO_X$.
\item Suppose furthermore that $S$ is a \emph{general} Du Val section containing $\Gamma$. Then $g,h\in I(\phi)$.
\end{enumerate}
\label{Lem1}
\end{lem}
\begin{proof}
Suppose that $\rho$ is a 1-dimensional representation of $G$. Note that if $(\psi,\phi)$ is a matrix factorisation for $M_\rho$, the $\sO_S$-module of $\rho$ semi-invariants, then $(\phi,\psi)$ is a matrix factorisation for $M_{\rho'}$, where $\rho'$ is the representation $\rho'(g)=\rho(g)^{-1}$.
The resolution of the $\sO_{\CC^2}$-module $\sO_C = \sO_{\CC^2}/(\gamma)$
\[ \begin{matrix}
\sO_C & \leftarrow & \sO_{\CC^2} & \stackrel{\gamma}{\longleftarrow} & \sO_{\CC^2} & \leftarrow & 0
\end{matrix} \]
decomposes as a resolution over $\sO_S$ to give a resolution of $\sO_\Gamma$
\[ \begin{matrix}
\sO_\Gamma & \leftarrow & \sO_S & \stackrel{\gamma}{\longleftarrow} & M_{\rho'} & \leftarrow & 0.
\end{matrix} \]
Using the resolution of $M_{\rho'}$ we get
\[ \begin{matrix}
\sO_\Gamma & \leftarrow & \sO_S & \xleftarrow{(\xi_2 \: -\xi_1)} & \sO_S^{\oplus2} & \stackrel{\phi}{\longleftarrow} & \sO_S^{\oplus2} & \stackrel{\psi}{\longleftarrow} & \cdots,
\end{matrix} \]
where $\xi_1,\xi_2$ are the two equations defining $\Gamma\subset S$. Now write $\gamma=g\alpha + h\beta$ where $\alpha,\beta$ are the two generators of $M_{\rho}$. We can use the resolution of $\sO_S$ as an $\sO_X$-module to lift this to a complex over $\sO_X$ and strip off the initial exact part to get the resolution
\[ \begin{matrix}
\sO_\Gamma & \leftarrow & \sO_X & \xleftarrow{(\xi_2 \: -\xi_1 \: \eta)} & \sO_X^{\oplus3} &
\xleftarrow{ \tiny \begin{pmatrix} \phi \\ \: g \:\: h \: \end{pmatrix}} & \sO_X^{\oplus2} & \leftarrow & 0
\end{matrix} \]
(possibly modulo some unimportant minus signs). Therefore the equations of the curve $\Gamma\subset S\subset X$ are given as claimed in (1).
To prove Lemma \ref{Lem1}(2), recall the characterisation of Du Val singularities in Definition \ref{dvdef}(5) as simple singularities. Let $\eta=\det{\phi}$ and $\xi_1,\xi_2$ be the three equations of $\Gamma$. We have a $\CC^2$-family of hypersurface sections through $\Gamma$ given by
\[ H_{\lambda,\mu} = \big\{ h_{\lambda,\mu} := \eta+\lambda \xi_1 +\mu \xi_2=0 \big\}_{(\lambda,\mu)\in\CC^2} \]
and we are assuming that $\eta$ is general. As the general member $H_{\lambda,\mu}$ is Du Val there are a finite number of ideals $I\subset\frakm$ such that the general $h_{\lambda,\mu}\in I^2$. As the general section $\eta$ satisfies $\eta\in I(\phi)^2$ we have that $h_{\lambda,\mu}\in I(\phi)^2$ for general $\lambda,\mu$. Therefore $g,h\in I(\phi)$.
\end{proof}
\begin{rmk}
Whilst Lemma $\ref{Lem1}$ gives a necessary condition, $g,h\in I(\phi)$, for a general section of a curve $\Gamma$ to be of the same type as $S$, it is not normally a sufficient condition.
\end{rmk}
\subsection{The first unprojection}\
\label{1st_unproj}
Now $\Gamma$ is defined as the minors of a $2\times3$ matrix, where all the entries belong to an ideal $I(\phi)\subset\sO_X$. Cramer's rule tells us that this matrix annihilates the vector of the equations of $\Gamma$
\[ \begin{pmatrix} \: \phi & \begin{matrix} g \\ h \end{matrix} \end{pmatrix} \begin{pmatrix} \xi_2 \\ -\xi_1 \\ \eta\end{pmatrix} = 0. \]
Multiplying out these two matrices gives us two syzygies holding between the equations of $\Gamma$ and these syzygies define a variety
\[ \sigma' \colon Y'\subset X\times \PP^2_{(\eta:\xi_1:\xi_2)} \to X. \]
$Y'$ is the blowup of the (ordinary) power algebra $\bigoplus_{n\geq 0} \sI^n$ of the ideal $\sI=\sI_{\Gamma/X}$.
$Y'$ cannot be the divisorial extraction of Theorem \ref{uniq} since the fibre above $P\in X$ is not 1-dimensional. Indeed $Y'$ contains a Weil divisor $\Pi=\sigma'^{-1}(P)_{\text{red}}\cong \PP^2$, possibly with a non reduced structure, defined by the ideal $I(\phi)$. Our aim is to construct the divisorial extraction by birationally contracting $\Pi$. This is done by unprojecting $I(\phi)$ and repeating this process for any other divisors that appear in the central fibre.
\begin{lem}
Suppose there exists a Mori extraction $\sigma\colon (E\subset Y) \to (\Gamma\subset X)$. Then at least one of $g,h$ is not in $\frakm\cdot I(\phi)$.
\label{Lem2}
\end{lem}
\begin{proof}
Suppose that both $g,h \in \frakm\cdot I(\phi)$. Then the three equations of $\Gamma$ satisfy $\eta\in I(\phi)^2$ and $\xi_1,\xi_2\in\frakm\cdot I(\phi)^2$. On the variety $Y$ there is a point $Q=Q_\eta\in Y$ in the fibre above $P$ where all variables except $\eta$ vanish. Now $x,y,z,\xi_1,\xi_2$ are all linearly independent elements of the Zariski tangent space $T_QY=(\frakm_Q/\frakm_Q^2)^\vee$. This $Q\in Y$ is a Gorenstein point with $\dim T_QY\geq5$, so $Q\in Y$ cannot be cDV and is therefore not terminal.
\end{proof}
This condition gives an upper bound on the multiplicity of $\Gamma$ at $P\in X$.
\section{Divisorial Extractions from Singular Curves: Type $A$}
In the absence of any kind of structure theorem for the general $\bA_n^j$ case, I give some examples to give a flavour of the kind of behaviour that occurs. As seen in other problems, for instance Mori's study of Type $A$ flips or the Type $A$ case of Tziolas' classification \cite{tz3}, this will be a big class of examples with lots of interesting and complicated behaviour. These varieties are described as serial unprojections, existing in arbitrarily large codimension, and look very similar to Brown and Reid's Diptych varieties \cite{dip} also constructed by serial unprojection.
The general strategy is to use Lemma \ref{Lem1} to write down the equations of the curve $\Gamma\subset X$, possibly using Lemma \ref{Lem2} and some extra tricks to place further restrictions on the functions $g,h$. Then, as described in \S\ref{1st_unproj}, we can take the unprojection plane $\Pi\subset Y'$ as our starting point and repeatedly unproject until we obtain a variety $\sigma\colon Y\to X$ with a small (i.e.\ 1-dimensional) fibre above $P$. This is the unique extraction described by Theorem \ref{uniq} so checking the singularities of $Y$ will establish the existence of a terminal extraction.
\subsection{Prokhorov and Reid's example} \
\label{prokreid}
I run through the easiest case in detail as an introduction to how these calculations work. This example first appeared in \cite{pr} Theorem 3.3 and a similar example appears in Takagi \cite{tak} Proposition 7.1.
Suppose that a general section $P\in\Gamma\subset S\subset X$ is of type $A_1$ (i.e.\ the case $\bA_1^1$ in the notation of Table \ref{mftab}). By Lemma \ref{Lem1} we are considering a curve $\Gamma\subset S\subset X$ given by the equations
\[ \bigwedge^2 \begin{pmatrix}
x & y & -g(y,z) \\
y & z & h(x,y)
\end{pmatrix} = 0 \]
where the minus sign is chosen for convenience and we can use column operations to eliminate any occurrence of $x$ (resp.\ $z$) from $g$ (resp.\ $h$). Moreover $g,h\in I(\phi)=\frakm$ so we can write $g=cy+dz$ and $h=ax+by$ for some choice of functions $a,b,c,d\in\sO_X$.
By Lemma \ref{Lem2} at least one of $a,b,c,d\not\in\frakm$ else the divisorial extraction is not terminal. This implies that $\Gamma$ has multiplicity three at $P$. If we consider $S$ as the quotient $\CC^2_{u,v}/\ZZ_2$, where $x,y,z=u^2,uv,v^2$, then $\Gamma$ is given by the orbifold equation
\[ \gamma(u,v) = au^3+bu^2v+cuv^2+dv^3 \]
and the tangent directions to the branches of $\Gamma$ at $P$ correspond to the three roots of this equation.
Recall \emph{Cramer's rule} in linear algebra: that any $n\times(n+1)$ matrix annihilates the associated vector of $n\times n$ minors.
In our case this gives two syzygies between the equations of $\Gamma\subset X$
\begin{equation} \begin{pmatrix}
x & y & -(cy + dz) \\
y & z & ax + by
\end{pmatrix} \begin{pmatrix}
\xi_2 \\ -\xi_1 \\ \eta
\end{pmatrix} = 0
\tag{$*$}\label{eqns}\end{equation}
where $\eta=xz-y^2$ is the equation of $S$ and $\xi_1,\xi_2$ are the other two equations of $\Gamma$. We can write down a codimension 2 variety
\[ \sigma' \colon Y'\subset X\times\PP^2_{(\xi_1:\xi_2:\eta)} \to X\]
where $\sigma'$ is the natural map given by substituting the equations of $\Gamma$ back in for $\xi_1,\xi_2,\eta$. Outside of $P$ this map $\sigma'$ is isomorphic to the blowup of $\Gamma$, in fact $Y'$ is the blowup of the \emph{ordinary power algebra} $\bigoplus \sI^n$. However $Y'$ cannot be the unique divisorial extraction described in Theorem \ref{uniq} since the fibre over the point $P$ is not small. Indeed, $Y'$ contains the plane $\Pi:=\sigma'^{-1}(P )_{\text{red}}\cong \PP^2$.
We can unproject $\Pi$ by rewriting the equations of $Y'$ \eqref{eqns} so that they annihilate the ideal $(x,y,z)$ defining $\Pi$,
\[ \begin{pmatrix}
\xi_2 & \xi_1 + c\eta & -d\eta \\
-a\eta & \xi_2 + b\eta & \xi_1 \\
\end{pmatrix} \begin{pmatrix}
x \\ -y \\ z \end{pmatrix} = 0. \]
By using Cramer's rule again, we see that $Y'$ has some nodal singularities along $\Pi$ where $x,y,z$ and the minors of this new $2\times 3$ matrix all vanish. If the roots of $\gamma$ are distinct then this locus consists of three ordinary nodal singularities along $\Pi$. If $\gamma$ acquires a double (resp.\ triple) root then two (resp.\ three) of these nodes combine to give a slightly worse nodal singularity.
We can resolve these nodes by introducing a new variable $\zeta$ that acts as a ratio between these two vectors, i.e.\ $\zeta$ should be a degree 2 variable satisfying the three equations
\begin{align*} x\zeta &= \xi_1(\xi_1+c\eta)+d(\xi_2+b\eta)\eta, \\
y\zeta &= \xi_1\xi_2 - ad\eta^2, \\
z\zeta &= \xi_2(\xi_2+b\eta) + a(\xi_1+c\eta)\eta. \end{align*}
This all gives a codimension 3 variety $\sigma\colon Y \subset X\times \PP(1,1,1,2) \to X$ defined by five equations. As described in \S\ref{unproj}, by the Buchsbaum--Eisenbud theorem we can write these equations neatly as the maximal Pfaffians of the skew-symmetric $5\times 5$ matrix
\[ \begin{pmatrix}
\zeta & \xi_2 & \xi_1 + c\eta & -d\eta \\
& -a\eta & \xi_2 + b\eta & \xi_1 \\
& & z & y \\
& & & x
\end{pmatrix} \]
(where the diagonal of zeroes and antisymmetry are omitted for brevity).
Now we can check that $Y$ actually is the divisorial extraction from $\Gamma$. Outside of the central fibre $Y$ is still the blowup of $\Gamma$, since
\[ Y\setminus \sigma^{-1}(P)\cong Y'\setminus \sigma'^{-1}(P). \]
The plane $\Pi\subset Y'$ is contracted to the coordinate point $Q_\zeta\in Y$ where all variables except $\zeta$ vanish. ($Q_\zeta$ is called the \emph{unprojection point} of $Y$ since the map $Y\dashrightarrow Y'$ is projection from $Q_\zeta$.) The central fibre is the union of (at most) three lines, all meeting at $Q_\zeta\in Y$. Therefore $\sigma$ is small and, by Theorem \ref{uniq} on the uniqueness of contractions, this has to be the divisorial extraction from $\Gamma$.
Furthermore we can check that $Y$ is terminal. First consider an open neighbourhood of the unprojection point $(Q_\zeta\in U_\zeta):=\{\zeta=1\}$. We can eliminate the variables $x,y,z$ to see that this open set is isomorphic to the cyclic quotient singularity
\[ (Q_\zeta\in U_\zeta)\cong (0\in \CC^3_{\xi_1,\xi_2,\eta})/\tfrac12(1,1,1). \]
Now for each line $L\subseteq \sigma^{-1}(P)_{\text{red}}$ we are left to check the point $Q_L=L\cap\{\zeta=0\}$. Note that each of these points lies in the affine open set $U_\eta=\{\eta=1\}$ and recall that at least one of the coefficients $a,b,c,d$ is a unit. After a possible change of variables, we may assume $a\notin\frakm$. We can use the equations involving $a$ above to eliminate $x$ and $\xi_1$. After rewriting $\zeta=a\zeta', \xi_2=a\xi'_2$, we are left with the equation of a hypersurface
\[ \big((y - z\xi'_2)\zeta' + a{\xi'_2}^3 + b{\xi'_2}^2 + c\xi'_2 + d = 0\big) \subset \Aa^4_{y,z,\xi'_2,\zeta'} \]
which is smooth (resp.\ $cA_1,cA_2$) at $Q_L$ if $L$ is the line over a node corresponding to a unique (resp.\ double, triple) root of $\gamma$.
If we consider the case where all of $a,b,c,d\in\frakm$ then the central fibre consists of just one line $L$ and the point $Q_L\in Y$ is not terminal (the matrix defining $Y$ has rank 0 at this point, so it cannot be a hyperquotient) which agrees with Lemma \ref{Lem2}.
\begin{rmk}\label{geom}
The following construction, originally due to Hironaka, illustrates how the unprojection of $\Pi$ works geometrically.
Consider the variety $X'$ obtained by the blowup of $P\in X$ followed by the blowup of the birational transform of $\Gamma$. The exceptional locus has two components $\Pi_{X'}$ and $E_{X'}$ dominating $P$ and $\Gamma$ respectively. Assuming the tangent directions of the branches of $\Gamma$ at $P$ are distinct then $\Pi_{X'}$ is a Del Pezzo surface of degree 6. Consider the three $-1$-curves of $\Pi_{X'}$ that don't lie in the intersection $\Pi_{X'}\cap E_{X'}$. They have normal bundle $\sO_{X'}(-1,-1)$ so we can flop them. The variety $Y'$, constructed above, is the midpoint of this flop and we end up with the following diagram,
\begin{center}
\begin{tikzpicture}
\node at (0,0) {$X$};
\node at (1,1) {$X'$};
\node at (2,0) {$Y'$};
\node at (3,1) {$Z$};
\node at (4,0) {$Y$};
\path[->] (.75,.75) edge (.25,.25);
\path[->] (1.25,.75) edge (1.75,.25);
\path[->] (2.75,.75) edge (2.25,.25);
\path[->] (3.25,.75) edge (3.75,.25);
\path[dashed,->] (1.5,1) edge node[above] {\tiny{flop}} (2.75,1);
\end{tikzpicture}.
\end{center}
The plane $\Pi\subset Y$ is the image of $\Pi_{X'}$ with the three nodes given by the contracted curves. After the flop the divisor $\Pi_{X'}$ becomes a plane $\Pi_Z\cong \PP^2$ with normal bundle $\sO_Z(-2)$, so we can contract it to get $Y$ with a $\tfrac12$-quotient singularity. If we want to consider non-distinct tangent directions then this picture becomes more complicated.
\end{rmk}
\begin{rmk}
Looking back at \eqref{eqns} one may ask what happens if we unproject the ideal $(\xi_1,\xi_2,\eta)\subset \sO_{Y'}$ or, equivalently, the Jer$_{12}$ ideal $(\xi_1,\xi_2,\eta,\zeta)\subset\sO_Y$. Even though this may not appear to make sense geometrically, it is a well-defined operation in algebra. If we do then we introduce a variable $\iota$ of weight $-1$ that is nothing other than the inclusion $\iota \colon \sI_\Gamma \hookrightarrow \sO_X$. The whole picture is a big graded ring
\[ \sR := \sO_X(-1,1,1,1,2) / (\text{codim 4 ideal}) \]
and writing $\sR_+$ (resp.\ $\sR_-$) for the positively (resp.\ negatively) graded part of $\sR$ we can construct the divisorial extraction in the style of \cite{whatis}, as the Proj of a $\ZZ$-graded algebra
\begin{center}
\begin{tikzpicture}
\node at (0.5,1) {$\Proj_{\sO_X} \sR_-$};
\node at (3,0) {$X= \Spec \sO_X$};
\node at (6,1) {$Y = \Proj_{\sO_X} \sR_+$};
\draw[double, double distance = 2pt] (1.5,.8) -- (2,.3);
\path[->] (4.5,.8) edge node [right] {$\sigma$} (4,.3);
\end{tikzpicture}.
\end{center}
\end{rmk}
\begin{rmk}
The unprojection variable $\zeta$ corresponds to a generator of $\bigoplus\sI^{[n]}$ that lies in $\sI^{[2]}\setminus \sI^2$. Either by writing out one of the equations involving $\zeta$ and substituting for the values of $\xi_1,\xi_2,\eta$, or by calculating the unprojection equations of $\iota$, we can give an explicit expression for $\zeta$ as
\[ \iota\zeta = (ax+by)\xi_1 + (cy+dz)\xi_2 + (acx+ady+bcy+bdz)\eta. \]
In terms of the orbifold equation $\gamma$, the generators $\xi_1,\xi_2,\zeta$ are lifts modulo $\eta$ of the forms $u\gamma,v\gamma,\gamma^2$ defined on $S$.
\end{rmk}
\subsection{The $\bA_2^1$ case}\
Suppose that the general section $P\in\Gamma\subset S\subset X$ is of type $\bA_2^1$. By Lemma \ref{Lem1}, we are considering the curve given by the equations
\[ \bigwedge^2 \begin{pmatrix}
x & y & -(dy+ez) \\
y^2 & z & ax+by
\end{pmatrix} = 0 \]
for some choice of functions $a,b,d,e\in\sO_X$. If $a,b,d,e$ are taken generically then the general section through $\Gamma$ is of type $A_1$, so we need to introduce some more conditions on these functions.
Consider the section $H_{\lambda,\mu} = \{ h_{\lambda,\mu} := \eta+\lambda\xi_1+\mu\xi_2 =0\}$. The quadratic term of this equation is given by
\[ h^{(2)}_{\lambda,\mu} = xz + \lambda x(a_0x+b_0y) + \mu (a_0xy+b_0y^2+d_0yz+e_0z^2) \]
where $a_0$ is the constant term of $a$ and similarly for $b,d,e$. To ensure the general section is of type $A_2$ it is enough to ask that $h_{\lambda,\mu}^{(2)}$ has rank 2 for all $\lambda,\mu$. After playing around, completing the square etc., we get two cases according to whether $x\mid h_{\lambda,\mu}^{(2)}$ or $z\mid h_{\lambda,\mu}^{(2)}$:
\begin{align*}
a_0=b_0=0 &\implies h_{\lambda,\mu}^{(2)} = z(x + \mu d_0y + \mu e_0z), \\
b_0=d_0=e_0=0 &\implies h_{\lambda,\mu}^{(2)} = x(z + \lambda a_0x + \mu a_0y).
\end{align*}
\subsubsection{Case 1---Tom$_1$}\
Take the first case where $a_0=b_0=0$. Then we can rewrite $ax+by$ as $ax^2+bxy+cy^2$, so that the equations of $\Gamma$ become
\[ \bigwedge^2 \begin{pmatrix}
x & y & -\left(dy+ez\right) \\
y^2 & z & ax^2+bxy+cy^2
\end{pmatrix} = 0. \]
\emph{Claim:} The following two conditions must hold
\begin{enumerate}
\item one of $a,b,c,d\notin\frakm$,
\item one of $d,e\notin\frakm$,
\end{enumerate}
and (after possibly changing variables) we can assume that $a,e\notin\frakm$.
Statement (2) follows from Lemma \ref{Lem2}. The first is also proved in a similar way. If (1) does not hold then necessarily $e\notin\frakm$ by (2). Consider the point $Q_\eta\in Y$ where all variables but $\eta$ vanish, as in the proof of Lemma \ref{Lem2}. This is a Gorenstein point with local equation
\[ ey^2\xi_1 - x\xi_1\xi_2 + y\xi_2^2 + dy\xi_2 + e(ax^2+bxy + cy^2) = 0 \]
and if $a,b,c,d\in\frakm$ then this equation is not cDV as no terms of degree $2$ appear, so it is not terminal.
By considering the minimal resolution $\widetilde{S}\to S$, we see that any $\Gamma$ that satisfies these conditions is a curve whose birational transform $\widetilde{\Gamma}\subset\widetilde{S}$ intersects the exceptional locus with multiplicities
\begin{center} \begin{tikzpicture}
\node[label={[label distance=-0.2cm]90:\tiny{$3$}}] at (1,1) {$\bullet$};
\node[label={[label distance=-0.2cm]90:\tiny{$1$}}] at (2,1) {$\bullet$};
\draw
(1.075,1)--(1.925,1);
\end{tikzpicture}, \end{center}
i.e.\ $\widetilde{\Gamma}$ intersects $E_1=\PP^1_{(x_1:x_2)}$ with multiplicity three and $E_2=\PP^1_{(y_1:y_2)}$ with multiplicity one, according to the (nonzero) equations
\begin{align*}
\widetilde{\Gamma}\cap E_1 &\colon \quad a_0x_1^3 + b_0x_1^2x_2 + c_0x_1x_2^2 + d_0x_2^3 = 0, \\
\widetilde{\Gamma}\cap E_2 &\colon \quad d_0 y_1 + e_0 y_2 = 0.
\end{align*}
If we follow the Prokhorov--Reid example, we can write down a codimension 3 model of the blowup of $\Gamma$ as $\sigma'' \colon Y''\subset X\times\PP(1,1,1,2) \to X$ given by the the Pfaffians of the matrix
\[ \begin{pmatrix}
\zeta & \xi_2 & \xi_1 + d\eta & -e\eta \\
& -(ax+by)\eta & y(\xi_2 + c\eta) & \xi_1 \\
& & z & y \\
& & & x
\end{pmatrix} \]
The variety $Y''$ is \emph{not} the divisorial extraction since $\sigma''$ is not small. A new unprojection plane appears after the first unprojection. This plane $\Pi$ is defined by the ideal $(x,y,z,\xi_1)$ and we can see that the matrix is in Tom$_1$ format with respect to this ideal. The central fibre $\sigma''^{-1}(P)$ is given by $\Pi$ together with the line
\[L_1=(x=y=z=\xi_2=\xi_1+d\eta=0).\]
Unprojecting $\Pi$ gives a new variable $\theta$ of weight three with four additional equations
\begin{align*}
x\theta &= (\zeta + be\eta^2)(\xi_1+d\eta) + e\xi_2(\xi_2+c\eta)\eta \\
y\theta &= \xi_2\zeta - ae(\xi_1+d\eta)\eta^2 \\
z\theta &= \xi_2^2(\xi_2+c\eta) + b\xi_2(\xi_1+d\eta)\eta + a(\xi_1+d\eta)^2\eta \\
\xi_1\theta &= \zeta(\zeta + be\eta^2) + ae^2(\xi_2+c\eta)\eta^3
\end{align*}
Generically, the central fibre consists of four lines passing through the point $P_\theta$, the line $L_1$ and the three lines that appear after unprojecting $\Pi$. The open neighbourhood $(P_\theta\in U_\theta)$ is isomorphic to a $\tfrac13(1,1,2)$ singularity. As we assume $a,e\notin\frakm$, when $\eta=1$ we can use the equations to eliminate $x,z,\xi_1,\xi_2$ so that all the points $Q_L=L\cap\{\zeta=0\}$, for $L\subseteq \sigma^{-1}(P)_{\text{red}}$, are smooth.
\subsubsection{Case 2---Jer$_{45}$}\
Now consider instead the case where $b_0=d_0=e_0=0$. In direct analogy to the Tom$_1$ case the reader can check that
\begin{enumerate}
\item $\Gamma$ is a curve of type
\begin{center} \begin{tikzpicture}
\node[label={[label distance=-0.2cm]90:\tiny{$4$}}] at (1,1) {$\bullet$};
\node at (2,1) {$\bullet$};
\draw
(1.075,1)--(2,1);
\end{tikzpicture}, \end{center}
\item after making the first unprojection we get a variety $Y'$ containing a plane $\Pi$ above $P$ defined by the Jer$_{45}$ ideal $(x,y,z,\xi_2)$,
\item $Y'$ has (at most) four nodes along $\Pi$ corresponding to the roots of the orbifold equation $\gamma$,
\item after unprojecting $\Pi$ we get a variety $Y$ with small fibre over $P$, hence $Y$ is the divisorial extraction,
\item the open neighbourhood of the unprojection point $(P_\theta\in U_\theta)$ is isomorphic to the quotient singularity $\frac13(1,1,2)$,
\item $Y$ has at worst $cA$ singularities at the points $Q_L$ according to whether $\gamma$ has repeated roots.
\end{enumerate}
\subsection{An $\bA_3^2$ example} \
Suppose that the general section $P\in\Gamma\subset S\subset X$ is of type $\bA_3^2$ and that $\Gamma$ is a curve whose birational transform on a resolution of $S$ intersects the exceptional divisor with multiplicities
\begin{center} \begin{tikzpicture}
\node at (1,1) {$\bullet$};
\node[label={[label distance=-0.2cm]90:\tiny{$3$}}] at (2,1) {$\bullet$};
\node at (3,1) {$\bullet$};
\draw
(1,1)--(1.925,1)
(2.075,1)--(3,1);
\end{tikzpicture}. \end{center}
Then a terminal extraction from $\Gamma\subset X$ exists.
The calculation is very similar to the Prokhorov--Reid example, except that the first unprojection divisor $\Pi\subset Y'$ is defined by the ideal $I(\phi)=(x,y^2,z)$, so that $\Pi$ is \emph{not reduced}. After unprojecting $\Pi$ we get an index 2 model $Y\subset X\times \PP(1,1,1,2)$ for the divisorial extraction with equations
\[ \begin{pmatrix}
\zeta & \xi_2 & \xi_1 + c\eta & -d\eta \\
& -a\eta & \xi_2 + b\eta & \xi_1 \\
& & z & y^2 \\
& & & x
\end{pmatrix} \]
$\Pi$ is contracted to a singularity of type $cA_1/2$, given by the $\tfrac12$-quotient of the hypersurface singularity
\[ y^2 - \xi_1\xi_2 + ad\eta^2 = 0. \]
\section{Divisorial Extractions from Singular Curves: Types $D$ \& $E$}
The result of the calculations in this section are summed up in the following theorem.
\begin{thm} \label{excthm}
Suppose $P\in \Gamma\subset S\subset X$ as in \S\ref{geconj}.
\begin{enumerate}
\item Suppose that $\Gamma$ is of type $\bD_n^l,\bD_{2k}^r$ or $\bE_7$. Then the divisorial extraction has a codimension 3 model
\[\sigma \colon Y\subset X\times \PP(1,1,1,2) \to X \]
In particular, $Y$ has index 2 and $\bigoplus\sI^{[n]}$ is generated in degrees $\leq 2$.
Moreover, $Y$ is singular along a component line of the central fibre, so there does not exist a terminal extraction from $\Gamma$.
\item Suppose that $\Gamma$ is of type $\bE_6$. We need to consider two cases.
\begin{enumerate}
\item The restriction map $\sI_{\Gamma/X}\to\sI_{\Gamma/S}$ is surjective. Then the divisorial extraction has a codimension 4 model
\[\sigma \colon Y\subset X\times \PP(1,1,1,2,3) \to X \]
In particular, $Y$ has index 3 and $\bigoplus\sI^{[n]}$ is generated in degrees $\leq 3$.
Moreover, if $Y$ is terminal then $\Gamma$ is a curve of type
\begin{center} \begin{tikzpicture}
\node at (1,1) {$\bullet$};
\node at (2,1) {$\bullet$};
\node[label={[label distance=-0.2cm]80:\tiny{$1$}}] at (3,1) {$\bullet$};
\node at (4,1) {$\bullet$};
\node[label={[label distance=-0.2cm]90:\tiny{$2$}}] at (5,1) {$\bullet$};
\node at (3,2) {$\bullet$};
\draw
(1,1)--(2.925,1)
(3.075,1)--(4.925,1)
(3,1.075)--(3,2);
\end{tikzpicture} \end{center}
\item The restriction map $\sI_{\Gamma/X}\to\sI_{\Gamma/S}$ is not surjective. Then the divisorial extraction has a codimension 5 model
\[\sigma \colon Y\subset X\times \PP(1,1,1,2,3,4) \to X \]
In particular, $Y$ has index 4 and $\bigoplus\sI^{[n]}$ is generated in degrees $\leq 4$.
Moreover, if $Y$ is terminal then $\Gamma$ is a curve of type
\begin{center} \begin{tikzpicture}
\node at (1,1) {$\bullet$};
\node at (2,1) {$\bullet$};
\node[label={[label distance=-0.2cm]80:\tiny{$1$}}] at (3,1) {$\bullet$};
\node[label={[label distance=-0.2cm]90:\tiny{$1$}}] at (4,1) {$\bullet$};
\node at (5,1) {$\circ$};
\node at (3,2) {$\bullet$};
\draw
(1,1)--(2.925,1)
(3.075,1)--(3.925,1)
(4.075,1)--(4.925,1)
(3,1.075)--(3,2);
\end{tikzpicture} \end{center}
In this case, the central fibre $Z\subset Y$ is a union of lines meeting at a $cAx/4$ singularity. The curve marked $\circ$ is pulled out in a partial resolution of $S$.
\end{enumerate}
\end{enumerate}
\end{thm}
Before launching into the proof of this theorem note the following useful remark.
\begin{rmk} \label{msqu}
Suppose the general section $P\in\Gamma\subset S\subset X$ is of type $D$ or $E$. Then we can write the equations of $\Gamma$ as
\[ \bigwedge^2 \begin{pmatrix} \: \phi & \begin{matrix} -g(y,z) \\ h(y,z) \end{matrix} \end{pmatrix} = 0 \]
where $g,h\in \frakm^2\cap I(\phi)$. To see this consider the matrix factorisations in Table \ref{mftab}. Firstly, we can use column operations to cancel any terms involving $x$ from $g,h$. Then to prove $g,h\in\frakm^2$ consider the section $h_{\lambda,\mu}=\eta + \lambda\xi_1 + \mu\xi_2$. The quadratic term of $h_{\lambda,\mu}$ is
\[ h_{\lambda,\mu}^{(2)} = x^2 + \lambda xh^{(1)} + \mu xg^{(1)} + \lambda tg^{(1)} \quad \text{(where $t=y$ or $z$)} \]
and we require this to be a square for all $\lambda,\mu$. This happens only if $g^{(1)}=h^{(1)}=0$.
\end{rmk}
\subsection{The $\bD_n^l,\bD_{2k}^r$ and $\bE_7$ cases} \
These three calculations are essentially all the same. Since they are so similar we only do the $\bD_n^l$ case explicitly.
\subsubsection{The $\bD_n^l$ case}\
According to Lemma \ref{Lem1} and Remark \ref{msqu}, the curve $\Gamma\subset S\subset X$ is defined by the equations
\[ \bigwedge^2 \begin{pmatrix}
x & y^2+z^{n-2} & a(y^2+z^{n-2}) + byz + cz^2 \\
z & x & d(y^2+z^{n-2}) + eyz + fz^2 \\
\end{pmatrix} = 0 \]
for some functions $a,b,c,d,e,f\in\sO_X$. Unprojecting $I(\phi)$ gives a variety
\[ \sigma \colon Y\subset X\times \PP(1,1,1,2) \to X\]
with equations given by the maximal Pfaffians of the matrix
\[ \begin{pmatrix}
\zeta & \xi_2 & \xi_1 - a\eta & (by+cz)\eta \\
& -\xi_1 & -d\eta & \xi_2 + (ey+fz)\eta \\
& & z & y^2+z^{n-2} \\
& & & x
\end{pmatrix}. \]
This $\sigma$ is a small map, so that $Y$ is the divisorial extraction of $\Gamma$. Indeed, the central fibre $Z=\sigma^{-1}(P)_{\text{red}}$ consists of two components meeting at the point $P_\zeta$. These are the lines
\begin{align*}
L_1 &=(x=y=z=\xi_1=\xi_2=0)\\
L_2 &=(x=y=z=\xi_1-a\eta=\xi_2=0)
\end{align*}
Looking at the affine patch $U_\zeta := \{\zeta=1\}\subset Y$ we see that we can eliminate the variables $x,z$ and that $U_\zeta$ is a $\tfrac12$-quotient of the hypersurface singularity
\[ y^2 + z^{n-2} - \xi_2^2 - (e\xi_2+b\xi_1)y\eta - (f\xi_2+c\xi_1)z\eta = 0 \]
where $z= \xi_1^2 - (a\xi_1 + d\xi_2)\eta$.
This hypersurface is singular along the line $L_1$ since this equation is contained in the square of the ideal $(y,\xi_1,\xi_2)$. Therefore $Y$ has nonisolated singularities and cannot be terminal.
\subsection{The $\bE_6$ case} \
Suppose that $\Gamma\subset S\subset X$ is of type $\bE_6$. By Lemma \ref{Lem1} the equations of $\Gamma$ can be written in the form
\[\bigwedge^2\begin{pmatrix}
x & y^2 & -g(y,z) \\
y & x+z^2 & h(y,z) \\
\end{pmatrix}=0\]
where $g,h\in\frakm^2$ by Remark \ref{msqu}. Now consider the general section $H_{\lambda,\mu} = \eta+\lambda\xi_1+\mu\xi_2$. After making the replacement $x \mapsto x+\tfrac12(\lambda h + \mu g)$ the cubic term of $H_{\lambda,\mu}$ is given by
\[ x^2 - y^3 + \lambda yg^{(2)} \]
where $g^{(2)}$ is the 2-jet of $g$. For the general $H_{\lambda,\mu}$ to be of type $E_6$, we require $y(y^2 - \lambda g^{(2)})$ to be a perfect cube for all values of $\lambda$. This happens only if $g^{(2)}$ is a multiple of $y^2$. Therefore we can take $g$ and $h$ to be
\[ g(y,z) = a(y,z)y^2 + b(z)yz^2 + c(z)z^3, \quad h(y,z) = d(y)y^2 + e(y)yz + f(y,z)z^2, \]
for some choice of functions $a,b,c,d,e,f\in\sO_X$. Moreover, $f\not\in\frakm$ else the extraction is not terminal by Lemma \ref{Lem2}.
By specialising these coefficients the curve we are considering varies. After writing down the minimal resolution $\widetilde{S}\to S$ explicitly, one can check that the birational transform of $\Gamma$ is a curve intersecting the exceptional locus of $\widetilde{S}$ with the following multiplicities:
\begin{center}
\begin{tabular}{cm{5cm}cm{5cm}}
Generic & \begin{tikzpicture}
\node at (1,1) {$\bullet$};
\node at (2,1) {$\bullet$};
\node[label={[label distance=-0.2cm]80:\tiny{$1$}}] at (3,1) {$\bullet$};
\node[label={[label distance=-0.2cm]90:\tiny{$1$}}] at (4,1) {$\bullet$};
\node at (5,1) {$\bullet$};
\node at (3,2) {$\bullet$};
\draw
(1,1)--(2.925,1)
(3.075,1)--(3.925,1)
(4.075,1)--(4.925,1)
(3,1.075)--(3,2);
\end{tikzpicture} & $c\in\frakm$ & \begin{tikzpicture}
\node at (1,1) {$\bullet$};
\node at (2,1) {$\bullet$};
\node[label={[label distance=-0.2cm]80:\tiny{$1$}}] at (3,1) {$\bullet$};
\node at (4,1) {$\bullet$};
\node[label={[label distance=-0.2cm]90:\tiny{$2$}}] at (5,1) {$\bullet$};
\node at (3,2) {$\bullet$};
\draw
(1,1)--(2.925,1)
(3.075,1)--(4.925,1)
(3,1.075)--(3,2);
\end{tikzpicture} \\
$a\in\frakm$ & \begin{tikzpicture}
\node at (1,1) {$\bullet$};
\node at (2,1) {$\bullet$};
\node at (3,1) {$\bullet$};
\node[label={[label distance=-0.2cm]90:\tiny{$1$}}] at (4,1) {$\bullet$};
\node at (5,1) {$\bullet$};
\node[label={[label distance=-0.2cm]90:\tiny{$2$}}] at (3,2) {$\bullet$};
\draw
(1,1)--(3.925,1)
(4.075,1)--(4.925,1)
(3,1)--(3,1.925);
\end{tikzpicture} & $a,c\in\frakm$ & \begin{tikzpicture}
\node at (1,1) {$\bullet$};
\node at (2,1) {$\bullet$};
\node at (3,1) {$\bullet$};
\node at (4,1) {$\bullet$};
\node[label={[label distance=-0.2cm]90:\tiny{$2$}}] at (5,1) {$\bullet$};
\node[label={[label distance=-0.2cm]90:\tiny{$2$}}] at (3,2) {$\bullet$};
\draw
(1,1)--(4.925,1)
(3,1)--(3,1.925);
\end{tikzpicture} \\
& & $a+f,c\in\frakm$ & \begin{tikzpicture}
\node at (1,1) {$\bullet$};
\node at (2,1) {$\bullet$};
\node at (3,1) {$\bullet$};
\node[label={[label distance=-0.2cm]90:\tiny{$2$}}] at (4,1) {$\bullet$};
\node[label={[label distance=-0.2cm]90:\tiny{$1$}}] at (5,1) {$\bullet$};
\node at (3,2) {$\bullet$};
\draw
(1,1)--(3.925,1)
(4.075,1)--(4.925,1)
(3,1)--(3,2);
\end{tikzpicture}. \\
\end{tabular}
\end{center}
The first unprojection $\sigma'\colon Y' \to X$ is defined by the maximal Pfaffians of the matrix
\begin{equation}\begin{pmatrix}
\zeta & \xi_2 & y(\xi_1+a\eta) & -(by+cz)\eta \\
& \xi_1 & \xi_2+(dy+ez)\eta & \xi_1-f\eta \\
& & z^2 & y \\
& & & x
\end{pmatrix} \label{e6eqns}\tag{$\dagger$}\end{equation}
and this $Y'$ contains a new unprojection divisor defined by an ideal $I$ in Tom$_2$ format. If the coefficient $c$ is assumed to be chosen generally then $I=(x,y,z,\xi_2)$. However, if we make the specialisation $c\in\frakm$, we can take $I$ to be the smaller ideal $(x,y,z^2,\xi_2)$. Unprojecting these two ideals gives very different varieties.
\subsubsection{The special $\bE_6$ case: $c\in\frakm$}\
Since it is easier, consider first the case when $c\in\frakm$, i.e.\ we let $c(z)=c'(z)z$. Unprojecting $(x,y,z^2,\xi_2)$ gives a codimension 4 model,
\[ \sigma \colon Y \subset X\times \PP(1,1,1,2,3) \to X \]
defined by the five Pfaffians above \eqref{e6eqns}, plus four additional equations:
\begin{align*}
x\theta &= (\xi_1+a\eta)(\xi_1-f\eta)^2 + b(\xi_1-f\eta)(\xi_2+(dy+ez)\eta)\eta +c'(\xi_2+(dy+ez)\eta)^2\eta, \\
y\theta &= \zeta(\xi_1-f\eta) + c'\xi_1(\xi_2+(dy+ez)\eta)\eta, \\
z^2\theta &= (\zeta-b\xi_1\eta)(\xi_2+(dy+ez)\eta) - \xi_1(\xi_1+a\eta)(\xi_1-f\eta), \\
\xi_2\theta &= \zeta(\zeta - b\xi_1\eta) + c'\xi_1^2(\xi_1+a\eta)\eta.
\end{align*}
The central fibre $Z$ is a union of three lines meeting at the unprojection point $P_\theta$, so that $Y$ is the divisorial extraction of $\Gamma$. These three lines are given by $x,y,z,\xi_2=0$ and
\begin{center}
\begin{tikzpicture}
\node at (0,1.2) {$L_1$};
\node at (0,0.6) {$L_2$};
\node at (0,0) {$L_3$};
\draw [decorate,decoration={brace,amplitude=3pt},xshift=-4pt,yshift=0pt]
(0.5,1.2) -- (0.5,0.6);
\node at (5,0.9) {$\xi_1-f\eta=\zeta^2 - bf\zeta\eta^2 + c'f^2(a+f)\eta^4=0$};
\node at (5,0) {$\xi_1+a\eta=\zeta=0$};
\end{tikzpicture}
\end{center}
In the open neighbourhood $P_\theta\in U_\theta$ we can eliminate $x,y,\xi_2$ by the equations involving $\theta$ above. We are left with a $\tfrac13$-quotient of the hypersurface singularity
\[ H : z^2 = (\zeta-b\xi_1\eta)(\xi_2 + (dy+ez)\eta) - \xi_1(\xi_1+a\eta)(\xi_1-f\eta) \]
If $H$ is not isolated then $Y$ will have nonisolated singularities and there will be no terminal extraction from $\Gamma$. This happens if either $a\in\frakm$ or $a+f\in\frakm$. If $a\in\frakm$ then $H$ becomes singular along $L_3$. If $a+f\in\frakm$ then one of $L_1,L_2$ satisfies $\zeta - bf\eta^2=0$ and $H$ becomes singular along this line.
Now we can assume that $a,a+f,f\not\in\frakm$, and consider the (general) hyperplane section $\eta=0$, to see that $P_\theta\in U_\theta$ is the $cD_4/3$ point
\[ \big( z^2 - \zeta^3 + \xi_1^3 + \eta(\cdots) = 0 \big) \: / \: \tfrac13(0,2,1,1;0). \]
\subsubsection{The general $\bE_6$ case: $c\not\in\frakm$}\
Now consider the more general case where $c$ is invertible. The difference between this and the last case is the existence of a form $\theta'$, vanishing three times on $\Gamma\subset S$, which fails to lift to $X$.
We need to make two unprojections in order to construct the divisorial extraction $Y$. The first unprojection divisor defined by the Tom$_2$ ideal $(x,y,z,\xi_2)$ as described above. Then a new divisor appears defined by the ideal $\big(x,y,z,\xi_2,\xi_1(\xi_1+a\eta)\big)$. We add two new variables $\theta,\kappa$ of degrees 3,4 (resp.) to our ring and we end up with a variety in codimension 5
\[ \sigma \colon Y\subset X\times \PP(1,1,1,2,3,4) \to X. \]
The equations of $Y$ are given by the five equations \eqref{e6eqns} and nine new unprojection equations: four involving $\theta$ and five involving $\kappa$. The important equation is
\[ \xi_1(\xi_1+a\eta)\kappa = \zeta(\zeta - b\xi_1\eta)^2 - \theta(\theta - cd\xi_1\eta^2) + e\theta(\zeta-b\xi_1\eta)\eta + d\zeta(\xi_1-f\eta)(\zeta-b\xi_1\eta)\eta. \]
The open set of the unprojection point $P_\kappa\in U_\kappa$ is a hyperquotient point
\[ \xi_1^2 + \theta^2 - \zeta^3 + \eta(\cdots) = 0 \big) \: / \: \tfrac14(1,2,3,1;2), \]
which is the equation of a $cAx/4$ singularity. Moreover, one can check that this singularity is not isolated if $a\in\frakm$. Therefore, if $Y$ is terminal then $a\not\in\frakm$ and $\Gamma$ is as described in Theorem \ref{excthm}.
The central fibre of this extraction consists of (one or) two rational curves. One of these curves is pulled out in a partial resolution of $S$.
\subsection{The $\bD_{2k+1}^r$ case}\
This is certainly the most complicated of the exceptional cases and I intend to treat it fully in another paper, however some calculations predict that it should be similar to the $\bE_6$ case in the following sense.
In general the restriction map $\sI_{\Gamma/X}\to \sI_{\Gamma/S}$ is not surjective, although after specialising some coefficients there is a good case where it does become surjective.
If the map is not surjective then the divisorial extraction $\sigma\colon Y\to X$ pulls one or more curves out of $S$, so that pulling back to $S_Y\subset Y$ gives a partial crepant resolution $\sigma\colon S_Y\to S$.
In the good case we can unproject just three divisors, given by the chain of ideals
\[ (x,y,z^k), \quad (x,y,z,\xi_2), \quad (x,y,z,\xi_2,\xi_1^2) \]
to get a codimension 5 model $Y\subset X\times \PP(1,1,1,2,3,4)$ of index 4. The restriction to $\sigma\colon S_Y\to S$ is an isomorphism and, if it is isolated, the last unprojection point $Q\in S_Y\subset Y$ is a singularity of type $cAx/4$. | {"config": "arxiv", "file": "1403.7614.tex"} |
\section{Numerical reconstruction}
In this section we show numerical reconstructions of $p$ and $q$ from
boundary flux data measurements following the algorithm described in the proof
of Theorem~\ref{thm:uniqueness}.
In keeping with a practical situation,
measurements are truncated in time to a finite interval --
in this case $[0,T]$ is used with $T=1$.
We remark that this is actually a long time period: the traditional scaling
of the parabolic equation to unit coefficients absorbs the diffusion
coefficient $d$ into the time variable, so our value of $T$
represents the product of the actual final time of measurement and the value
of $d$.
In fact, $d$ is itself the ratio of the conductivity to the specific heat.
Values of $d$ of course vary widely with the material, but metals, for example,
have a range of around $10^{-4}$ to $10^{-5}$ meters$^2$/second.
\subsection{Iterative scheme}
For $(\cos{\theta_\ell},\sin{\theta_\ell})\in \partial\Omega$, from
Corollary \ref{data_formula} and the convergence result
Lemma \ref{convergence}, we have the following flux representation
using termwise differentiation,
\begin{equation}\label{eqn:sol_rep_recon}
\frac{\partial u}{\n}(1,\theta_\ell,t)=-\sum_{n=1}^\infty
a_n(z_\ell)\lambda_n p_n \int_0^t e^{-\lambda_n(t-\tau)}q(\tau)\ d\tau
\end{equation}
where we have again used polar coordinates.
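For example, if the source is constant in time, $q\equiv c$, then the time
integral evaluates in closed form,
\[ \int_0^t e^{-\lambda_n(t-\tau)}c\ d\tau = \frac{c\,(1-e^{-\lambda_n t})}{\lambda_n}, \]
so each mode contributes $-a_n(z_\ell)\,p_n\,c\,(1-e^{-\lambda_n t})$ to the flux.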
Since the unknown function $p$ is represented by its Fourier
coefficients $\{p_n\}$, we seek to reconstruct $(p,q)$ in the
space $\S_N\times L^2[0,T]$, where
\begin{equation*}
\S_N=\s\{\varphi_n(x):n=1,\cdots,N\}.
\end{equation*}
We define the forward operator $F$ as
\vskip-25pt
\begin{equation*}
\qquad\qquad\qquad\qquad
F(p,q)=
\begin{bmatrix}
\frac{\partial u}{\n}(1,\theta_1,t)\\\\
\frac{\partial u}{\n}(1,\theta_2,t)
\end{bmatrix}
\end{equation*}
and build an iteration scheme to solve
\vskip-25pt
\begin{equation*}
\qquad\qquad\qquad\qquad
F(p,q)= g^\delta(t):=\begin{bmatrix}
g_1^\delta(t)\\\\
g_2^\delta(t)
\end{bmatrix}.
\end{equation*}
Here $g^\delta$ is the perturbed measurement satisfying
$\|(g^\delta-g)/g\|_{C[0,T]}\le \delta$.
Clearly, if either $p(x)$ or $q(t)$ is fixed, the operator
$F$ is linear in the remaining unknown.
Consequently, we can construct the sequential iteration scheme
using Tikhonov regularization as
\begin{equation}\label{iteration}
\begin{aligned}
p_{j+1}:=&\argmin_{p\in \S_N} \|F[q_j]p-g^\delta\|_{L^2[0,T]}^2+\beta_p\|p\|_{L^2(\Omega)}^2,\\
q_{j+1}:=&\argmin_{q\in L^2[0,T]}\|F[p_j]q-g^\delta\|_{L^2[0,T]}^2+\beta_q\|\nabla q\|_{L^1[0,T]}.
\end{aligned}
\end{equation}
For $\{q_j\}$ we choose total variation regularization
\cite{MuellerSiltanen:2012} to ensure that each $q_j$ retains the
edge-preserving property needed to fit the exact
solution $q(t)$, which is a step function. Here $\beta_p, \beta_q$ are the
regularization parameters.
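To make the structure of the scheme \eqref{iteration} concrete, the following
is a minimal numerical sketch of the alternating sweep (a Tikhonov step for
$p$, a lagged-diffusivity/IRLS step for the total variation term in $q$).
The eigenvalues, the boundary values $a_n(z_\ell)$ and the quadrature used
below are illustrative stand-ins, not the discretization used in our
experiments; the IRLS solver likewise merely stands in for any standard TV
minimization routine.
\begin{verbatim}
import numpy as np

N, T, dt = 5, 1.0, 0.01
t = np.arange(dt, T + dt/2, dt); M = len(t)
lam = np.array([5.8, 14.7, 26.4, 30.5, 49.2])    # stand-in eigenvalues
rng = np.random.default_rng(0)
a1, a2 = rng.normal(size=N), rng.normal(size=N)  # stand-ins for a_n(z_1), a_n(z_2)

def A_matrix(q):
    """Forward map p -> flux for fixed q; rows = (point, time), cols = n."""
    conv = np.empty((M, N))
    for n in range(N):                           # Duhamel integral by quadrature
        conv[:, n] = dt * np.convolve(np.exp(-lam[n] * t), q)[:M]
    return np.vstack([-a1 * lam * conv, -a2 * lam * conv])

def B_matrix(p, a):
    """Forward map q -> flux for fixed p (causal Volterra matrix)."""
    dtau = np.clip(np.subtract.outer(t, t), 0.0, None)
    E = sum(a[n] * lam[n] * p[n] * np.tril(np.exp(-lam[n] * dtau))
            for n in range(N))
    return -dt * E

def tv_solve(B, g, beta, iters=30, eps=1e-6):
    """Lagged-diffusivity (IRLS) sketch for min ||Bq - g||^2 + beta ||q'||_1."""
    D = np.diff(np.eye(B.shape[1]), axis=0)      # forward differences
    q = np.zeros(B.shape[1])
    for _ in range(iters):
        w = 1.0 / np.sqrt((D @ q)**2 + eps)      # reweighting ~ 1/|q'|
        q = np.linalg.solve(B.T @ B + beta * D.T @ (w[:, None] * D), B.T @ g)
    return q

# synthetic data mimicking experiment (e1), with 1% relative noise
p_true = np.array([5, 2, 1, 0, 0]) / np.sqrt(30.0)
q_true = 1.0*(t < 1/3) + 2.0*((t >= 1/3) & (t < 2/3)) + 1.5*(t >= 2/3)
g = A_matrix(q_true) @ p_true
g = g + 0.01 * np.abs(g) * rng.standard_normal(g.shape)

p, q = np.ones(N) / np.sqrt(N), np.ones(M)       # rough initial guesses
for j in range(10):
    A = A_matrix(q)                              # p-step: Tikhonov
    p = np.linalg.solve(A.T @ A + 1e-2 * np.eye(N), A.T @ g)
    B = np.vstack([B_matrix(p, a1), B_matrix(p, a2)])
    q = tv_solve(B, g, beta=8e-4)                # q-step: total variation
    q, p = np.linalg.norm(p) * q, p / np.linalg.norm(p)  # fix the p*q scaling
\end{verbatim}
The rescaling in the last line uses the norm of $p_j$ before normalization,
matching the normalization convention used in the experiments below.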
\subsection{Regularization strategies}
Any use of the representation \eqref{eqn:sol_rep_recon} from a numerical
standpoint must by necessity truncate the series to a finite sum.
One might be tempted to use ``as many eigenfunctions as possible''
but there are clearly limits imposed by the data measurement process.
Two of these will be discussed in this section.
We will measure the flux at the points $\theta_\ell$ at a series of time steps.
If these steps are $\delta t$ apart, then the exponential term
$e^{-\lambda_n t}$ with $n=N$, the maximum eigenvalue index used,
is a limiting factor: if the multiplier $e^{-\lambda_N \delta t}$ is
too small relative to the effects caused by any assumed noise in the data,
then we must either reduce $\delta t$ or decrease $N$.
In short, high frequency information can only be obtained from information
arising from very short time measurements.
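One crude way to quantify this: if the relative noise level is $\delta$,
then a mode is only usable when $e^{-\lambda_N\,\delta t}\gtrsim\delta$,
that is, when
\[ \lambda_N \lesssim \frac{\log(1/\delta)}{\delta t}. \]
For instance, with $\delta=10^{-2}$ and $\delta t=10^{-2}$ this caps
$\lambda_N$ at roughly $460$.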
We also noted that the selection of measurement points $\{\theta_\ell\}$
should be made to avoid zeros of eigenfunctions on the boundary
as otherwise the information coming from these eigenfunctions is unusable.
From the above paragraph it is clear that only a relatively small number $N$
of these eigenfunctions are usable in any event, so we are in fact far from
restricted, in any probabilistic sense, in selecting the difference between
measurement points, even assuming these are all rational numbers when divided
by $\pi$. We can take $\theta =0$ to be the origin of the system without any
loss of generality, so that
$\varphi_n(r,\theta)=\omega_n J_m(\sqrt{\lambda_n}r)\{\cos m\theta,\sin m\theta\}$.
If two points at angles $\theta_1$ and $\theta_2$ are taken then
the difference between them is the critical factor; we need to ensure that
$k(\theta_1-\theta_2) \neq j\pi$ for any integers $j,k$.
Of course the points whose angular difference is a rational number times $\pi$
form a dense set so at face value this might seem a mathematical,
but certainly not a practical, condition.
However, from the above argument,
we cannot use but a relatively small number of eigenfunctions
and so the set of points $(\theta_1,\theta_2)$ with
$\theta_1-\theta_2\ne(j/k)\pi$
for sufficiently small $k$ might have distinct intervals of sufficient
length for this criterion to be quite practical.
To see this,
consider the rational points generated modulo $\pi$ with denominator
less than the prime value $29$, that is, we are looking for rational
numbers in lowest terms $a/b$ with $b<29$ and checking for zeros of
$\sin(a\pi/b)$ for a given $b$.
Clearly taking $b=4$ gives a zero at $\theta=\pi/4$ and we must check
those combinations $a/b$ that would provide a zero close to but less
than $1/4$.
We need only check primes $b$ in the range $2<b<29$ and the fraction closest
to $1/4$ occurs at $a/b=4/17$ which is approximately $0.235$.
Thus the interval that is zero free under this range of $b$ has length
$0.015\pi$ radians or approximately $2.7$ degrees of arc length.
Similar intervals occur at several points throughout the circle.
The gaps in such a situation with $b<29$ are shown in Figure~\ref{gaps}.
\begin{figure}[hp!]
\center
\begin{subfigure}
\centering
\includegraphics[trim = .9cm .5cm .5cm 4cm, clip=true,height=3.0cm,width=12cm]
{gaps.jpg}
\end{subfigure}
\caption{\small Gaps between angles.}
\label{gaps}
\end{figure}
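The computation behind Figure~\ref{gaps} is easy to reproduce. The following
few lines of Python (restricting, as above, to prime denominators $b<29$)
recover the fraction $4/17$ and the zero-free gap of approximately $2.65$
degrees of arc below $\pi/4$.
\begin{verbatim}
from fractions import Fraction

primes = [2, 3, 5, 7, 11, 13, 17, 19, 23]            # prime b < 29
cands = {Fraction(a, b) for b in primes for a in range(1, b)}
below = max(f for f in cands if f < Fraction(1, 4))  # largest zero below 1/4
print(below)                                         # 4/17
print(float(Fraction(1, 4) - below) * 180)           # gap ~ 2.65 degrees
\end{verbatim}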
Now the question is: if we restrict the eigenvalue index $k$ to be less
than $29$, what range of the $m$ index do we obtain, and what is the lowest
eigenvalue that exceeds this $k$-range?
Since the $m$-index grows faster than $k$ for a given eigenvalue index,
we obtain several thousand eigenvalues,
the largest being approximately $3.5\times 10^4$.
Only with exceedingly small initial time steps could such an eigenvalue
and its attendant eigenfunction be utilized in the computations.
If we restrict $k<17$ then the zero-free interval becomes $(\pi/4,4\pi/13)$
with length approximately $10.4$ degrees and
the largest eigenvalue obtained is about $1.5\times 10^4$.
If we decrease further to $k\leq 10$ we get
an angle range of $15.8$ degrees in which to work.
Thus in short, the ill-conditioning of the problem is substantially
due to other factors and not to impossible restrictions on the choice of
observation points $\{\theta_\ell\}$.
\subsection{Numerical experiments}
First we consider the experiment $(e1)$,
\begin{equation*}
\begin{aligned}
(e1):\quad &T=1,
\ \theta_1=0,\ \theta_2=\frac{13}{32}\pi,\\
&p(r,\theta)= \frac{5}{\sqrt{30}}\omega_1J_{m(1)}(\sqrt{\lambda_1}r)\cos{(m(1)\theta)}+
\frac{2}{\sqrt{30}}\omega_2J_{m(2)}(\sqrt{\lambda_2}r)\cos{(m(2)\theta)}\\
&\qquad\quad\quad+\frac{1}{\sqrt{30}}\omega_2J_{m(2)}(\sqrt{\lambda_2}r)\sin{(m(2)\theta)},\\
&q(t)=\chi_{_{[0,1/3)}}+2\chi_{_{[1/3,2/3)}}+1.5\chi_{_{[2/3,1]}}.
\end{aligned}
\end{equation*}
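Note that, since the eigenfunctions are orthonormal in $L^2(\Omega)$, this
exact solution satisfies $\|p\|_{L^2(\Omega)}^2=(5^2+2^2+1^2)/30=1$, so $p$
already has the unit normalization imposed below.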
We use noise-polluted flux measurements on the boundary points at noise
levels ranging from 1\% to 5\% and choose the time measurement step $\delta t$
to be $0.01$.
In order to avoid the loss of accuracy caused by the scaling ambiguity
in the product of $p$ and $q$, we normalize the exact solution $p(x)$, namely,
we let $\|p\|_{L^2(\Omega)}=1$. To enforce this setting in the programming
of iteration \eqref{iteration}, after each iterative step we first rescale
$q_j=\|p_j\|_{L^2(\Omega)}q_j$ and then set $p_j=p_j/\|p_j\|_{L^2(\Omega)}$.
Also, the initial guesses $p_0$ and $q_0$ are set as
\begin{equation*}
\begin{aligned}
p_0(x)&\equiv 1,\ x\in \Omega,\\
q_0&:=\argmin_{q\in L^2[0,T]}\|F[p_0]q-g^\delta\|_{L^2[0,T]}^2+\beta_q\|\triangledown q\|_{L^1[0,T]}.
\end{aligned}
\end{equation*}
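In code, the rescaling step takes the following form (a schematic sketch;
\texttt{solve\_p}, \texttt{solve\_q} and \texttt{norm} stand for the
regularized minimization steps of \eqref{iteration} and an $L^2(\Omega)$
quadrature, and are not spelled out here):
\begin{verbatim}
def rescale(p, q, norm):
    c = norm(p)                  # approximates ||p||_{L^2(Omega)}
    return p / c, c * q          # the product p*q is unchanged

p = p0                           # p0 = 1 on Omega, as above
q = solve_q(p, g_delta, beta_q)  # regularized least-squares step for q
for j in range(10):
    p = solve_p(q, g_delta, beta_p)
    q = solve_q(p, g_delta, beta_q)
    p, q = rescale(p, q, norm)   # enforce ||p|| = 1 after each sweep
\end{verbatim}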
Depending on the noise level $\delta,$ the values of the regularization
parameters $\beta_p$, $\beta_q$ are
picked empirically and here the values used are
$\beta_p=1\times10^{-2}$, $\beta_q=8\times10^{-4}$.
After $j=10$ iterations, the approximations $p_j,q_j$ are recorded and
displayed by Figure \ref{e1_pq_1}.
This indicates effective numerical convergence of the scheme.
The errors of the approximations
under different noise levels are displayed in the following table.
\begin{center}
\begin{tabular}{|c| c| c|c| }
\hline
& $\delta=1\%$ & $\delta=3\%$ &$\delta=5\%$\\
\hline $\|p-p_j\|_{L^2(\Omega)}$ & $1.34e-1$ & $1.76e-1$& $1.87e-1$\\
\hline $\|q-q_j\|_{L^2[0,T]}$ & $8.08e-2$ & $8.25e-2$& $9.76e-2$\\
\hline
\end{tabular}
\label{error}
\end{center}
The satisfactory reconstructions shown by the table confirm that
the iterative scheme \eqref{iteration} is a feasible approach to
solve this nonlinear inverse problem numerically.
\setlength{\linewidth}{6.2true in}
\bigskip
\input q_boxes
\newbox\figuretwo
\setbox\figuretwo=\vbox{\hsize=0.7\linewidth\parindent=0pt
\noindent\includegraphics[trim={7cm 7cm 5cm
0cm},clip=true,scale=0.17]{e1_p_1.jpg}
}
\begin{figure}[ht]
\centering
\hbox to\hsize{\copy\figuretwo\hss\qquad\raise3em\copy\figureone}
\caption{\small Experiment $(e1)$, $p$ (left), $p_j$ (center) and
$q,\,q_j$ (right). Noise $\delta=1\%$. }
\label{e1_pq_1}
\end{figure}
Next, we seek recovery of a more general $p(x)$:
\begin{equation*}
\begin{aligned}
(e2):\quad
&p(r,\theta)=\chi_{{}_{r\le 0.5+0.2\cos{2\theta}}} ,\\
&q(t)=\chi_{_{[0,1/3)}}+2\chi_{_{[1/3,2/3)}}+1.5\chi_{_{[2/3,1]}},\\
(e3):\quad
&p(r,\theta)=\chi_{{}_{r\le 0.25+0.1\cos{2\theta}}} ,\\
&q(t)=\chi_{_{[0,1/3)}}+2\chi_{_{[1/3,2/3)}}+1.5\chi_{_{[2/3,1]}}.
\end{aligned}
\end{equation*}
In experiment $(e2)$, a discontinuous, star-like supported exact solution
$p(x)$ is considered, where the radius function is
$r(\theta)=0.5+0.2\cos{2\theta}$. We can see that this $p$ does not satisfy
Assumption \ref{assumption}, so the iteration \eqref{iteration} may not
be appropriate here; in fact, we use the
Levenberg--Marquardt algorithm to recover the radius function $r(\theta)$,
see \cite{RundellZhang:2017JCP} for details.
The numerical results are presented in Figures \ref{e2}, \ref{e2_4point}
and \ref{e2_4point_k200}, in which the blue dotted line
and the red dashed line indicate the boundaries of $\text{supp}(p)$ and
$\text{supp}(p_j)$, respectively, and the black bullets are the locations
of the observation points.
Figures~\ref{e2_4point} and \ref{e2_4point_k200} show that with sufficient data, for example, more
measurement points and finer mesh on time $t$, precise reconstructions
can be obtained even though Assumption~\ref{assumption} is violated.
These results indicate that, if we do not pursue the global uniqueness stated by
Theorem~\ref{thm:uniqueness},
which requires Assumption~\ref{assumption}, the conditions on
$p$ and $q$ may be weakened in numerical computations.
This inspires future work on such inverse source problems in order to
provide a rigorous mathematical justification for allowing such inclusions.
\begin{figure}[ht]
\center
\begin{subfigure}
\centering
\includegraphics[trim = .5cm 2cm .5cm 2cm, clip=true,height=5.5cm,width=6cm]
{e2_p.jpg}
\includegraphics[trim = .5cm 2cm .5cm 2cm, clip=true,height=5.5cm,width=6.5cm]
{e2_q.jpg}
\end{subfigure}
\caption{\small Experiment $(e2)$, $p$ (left) and $q$ (right),
$\delta=1\%$.}
\label{e2}
\end{figure}
\begin{figure}[ht]
\center
\begin{subfigure}
\centering
\includegraphics[trim = .5cm 2cm .5cm 2cm, clip=true,height=5.5cm,width=6cm]
{e2_p_4point.jpg}
\includegraphics[trim = .5cm 2cm .5cm 2cm, clip=true,height=5.5cm,width=6.5cm]
{e2_q_4point.jpg}
\end{subfigure}
\caption{\small Experiment $(e2)$ with $4$ measurement points, $p$ (left) and $q$ (right),
$\delta=1\%$.}
\label{e2_4point}
\end{figure}
\begin{figure}[ht]
\center
\begin{subfigure}
\centering
\includegraphics[trim = .5cm 2cm .5cm 2cm, clip=true,height=5.5cm,width=6cm]
{e2_p_4point_k200.jpg}
\includegraphics[trim = .5cm 2cm .5cm 2cm, clip=true,height=5.5cm,width=6.5cm]
{e2_q_4point_k200.jpg}
\end{subfigure}
\caption{\small Experiment $(e2)$ with $4$ measurement points and
$\delta t=5\times10^{-3}$, $p$ (left) and $q$ (right),
$\delta=1\%$.}
\label{e2_4point_k200}
\end{figure}
If we use equation \eqref{eqn:direct_pde} to describe the diffusion of
pollutants, then $\text{supp}(p)$ represents the severely polluted area. With
considerations of safety and cost, observations of the flux data
should be made as far as possible from $\text{supp}(p)$.
This is the reason
why we set up experiment $(e3)$, in which $p(x)$ has a smaller support.
Due to the long distance between $\text{supp}(p)$ and the observation
points, worse results can be expected. See Figure \ref{e3}.
Hence, accurate and efficient algorithms for this inverse source problem
with a small $\text{supp}(p)$ are worthy of investigation.
Of course, in the limit where these become point sources described by
Dirac delta functions, other tools are available.
See, for example, \cite{HankeRundell:2011}.
\begin{figure}[ht]
\center
\begin{subfigure}
\centering
\includegraphics[trim = .5cm 2cm .5cm 2cm, clip=true,height=5.5cm,width=6cm]
{e3_p.jpg}
\includegraphics[trim = .5cm 2cm .5cm 2cm, clip=true,height=5.5cm,width=6.5cm]
{e3_q.jpg}
\end{subfigure}
\caption{\small Experiment $(e3)$, $p$ (left) and $q$ (right),
$\delta=1\%$.}
\label{e3}
\end{figure} | {"config": "arxiv", "file": "1908.02015/numerical.tex"} |
TITLE: Prove that $ k[x_1,\ldots,x_4]/ \langle x_1x_2,x_2x_3,x_3x_4,x_4x_1 \rangle$ is not Cohen-Macaulay.
QUESTION [2 upvotes]: Prove that $ k[x_1,\ldots,x_4]/ \langle x_1x_2,x_2x_3,x_3x_4,x_4x_1 \rangle$ is not Cohen-Macaulay.
We have $\langle x_1x_2,x_2x_3,x_3x_4,x_4x_1 \rangle=\langle x_1,x_3 \rangle \cap \langle x_2,x_4\rangle$. Therefore $\dim k[x_1,\ldots,x_4]/ \langle x_1x_2,x_2x_3,x_3x_4,x_4x_1 \rangle=2$.
How do I prove $ k[x_1,\ldots,x_4]/ \langle x_1x_2,x_2x_3,x_3x_4,x_4x_1 \rangle$ is not Cohen-Macaulay?
REPLY [3 votes]: I believe this is the idea: Look in the local ring at the origin. Mod out by $x_2-x_1$, which is a nonzerodivisor since it lies in neither minimal prime $\langle x_1,x_3\rangle$ nor $\langle x_2,x_4\rangle$. The quotient ring is now the localization of $k[x_2,x_3,x_4]/(x_2^2,x_2x_3,x_3x_4,x_2x_4)$, and every element of the maximal ideal is now a zero-divisor. So the depth at the origin is only $1$, while the dimension is $2$, and the ring is therefore not Cohen-Macaulay. | {"set_name": "stack_exchange", "score": 2, "question_id": 1413766}
TITLE: Problem about the similarity between triangles
QUESTION [0 upvotes]: Referring to the following image:
how can it be demonstrated that $AB : AC = AF : AE$ ?
The only theorem that seems useful to me is the tangent-secant theorem: $AE : DE = DE : BE$, or equally $AF : DF = DF : CF$, but I cannot conclude anything from it.
Any ideas?
REPLY [0 votes]: If you are familiar with inversion, then perform it at $A$ with $r=AD$. Then the circle maps to its tangent line at $D$, so $D\mapsto D$, $B\mapsto E$ and $C\mapsto F$. So we have $$AB\cdot AE = AD^2 = AC\cdot AF,$$ which rearranges to $AB : AC = AF : AE$. | {"set_name": "stack_exchange", "score": 0, "question_id": 2417363}
\begin{document}
\begin{center}{\bf\Large Boundary value problem with fractional p-Laplacian operator}\medskip
\bigskip
\bigskip
{C\'esar Torres}
Departamento de Matem\'aticas\\
Universidad Nacional de Trujillo\\
Av. Juan Pablo Segundo s/n, Trujillo-Per\'u
{\sl (ctl\_576@yahoo.es, ctorres@dim.uchile.cl)}
\end{center}
\medskip
\medskip
\medskip
\medskip
\medskip
\begin{abstract}
The aim of this paper is to obtain the existence of solutions for the fractional p-Laplacian Dirichlet problem with mixed derivatives
\begin{eqnarray*}
&{_{t}}D_{T}^{\alpha}\left(|{_{0}}D_{t}^{\alpha}u(t)|^{p-2}{_{0}}D_{t}^{\alpha}u(t)\right) = f(t,u(t)), \;t\in [0,T],\\
&u(0) = u(T) = 0,
\end{eqnarray*}
where $\frac{1}{p} < \alpha <1$, $1<p<\infty$ and $f:[0,T]\times \mathbb{R} \to \mathbb{R}$ is a Carath\'eodory function which satisfies some growth conditions. We obtain the existence of a nontrivial solution by using the Mountain Pass Theorem.
\noindent
{\bf Key words:} Fractional calculus, mixed fractional derivatives, boundary value problem, p-Laplacian operator, mountain pass theorem
{\bf MSC}
\end{abstract}
\date{}
\setcounter{equation}{0}
\section{ Introduction}
Recently, great attention has been focused on the study of boundary value problems (BVP) for fractional differential equations. They appear in mathematical models in different branches of science such as physics, chemistry, biology and geology, as well as in control theory, signal theory, nanoscience and so on \cite{DBZGJM, AKHSJT, IP, JSOAJTM, SSAKOM, YZ} and the references therein.
Physical models containing left and right fractional differential operators have recently attracted renewed attention from scientists, mainly due to their applications as models for physical phenomena exhibiting anomalous diffusion. Specifically, models involving a fractional differential oscillator equation, which contains a composition of left and right fractional derivatives, have been proposed for the description of the processes of emptying a silo \cite{SLTB} and of the heat flow through a bulkhead filled with granular material \cite{ES}, respectively. These studies show that the proposed models based on fractional calculus are efficient and describe the processes well.
The existence and multiplicity of solutions for BVP for nonlinear fractional differential equations is extensively studied using various tools of nonlinear analysis as fixed point theorems, degree theory and the method of upper and lower solutions \cite{MBJNRR, MBACDS}. Very recently, it should be noted that critical point theory and variational methods have also turned out to be very effective tools in determining the existence of solutions of BVP for fractional differential equations. The idea behind them is trying to find solutions of a given boundary value problem by looking for critical points of a suitable energy functional defined on an appropriate function space. In the last 30 years, the critical point theory has become a wonderful tool in studying the existence of solutions to differential equations with variational structures, we refer the reader to the books due to Mawhin and Willem \cite{JMMW}, Rabinowitz \cite{PR}, Schechter \cite{MS} and papers \cite{VEJR, FJYZ0, FJYZ, CT, CT1, CT2, WXJXZL, YZ}.
The p-Laplacian operator was considered in several recent works. It arises in the modelling of different physical and natural phenomena: non-Newtonian mechanics, nonlinear elasticity and glaciology, combustion theory, population biology, nonlinear flow laws, and systems of Monge-Kantorovich partial differential equations. There exists a very large number of papers devoted to the existence of solutions of the p-Laplacian operator, in which the authors used bifurcation, variational methods, sub-super solutions and degree theory in order to prove the existence of solutions of this nonlinear operator; for details see \cite{GDPJJM}.
Motivated by these previous works, we consider the solvability of the Dirichlet problem with mixed fractional derivatives
\begin{eqnarray}\label{I01}
&{_{t}}D_{T}^{\alpha}\left(|{_{0}}D_{t}^{\alpha}u(t)|^{p-2}{_{0}}D_{t}^{\alpha}u(t)\right) = f(t,u(t)), \;t\in [0,T],\nonumber\\
&u(0) = u(T) = 0,
\end{eqnarray}
where $1<p<\infty$, $\frac{1}{p}< \alpha < 1$ and we assume that $f: [0,T]\times \mathbb{R} \to \mathbb{R}$ is a Carath\'eodory function satisfying:
\begin{itemize}
\item[\fbox{$f_1$}] There exist $C>0$ and $1<q<\infty$ such that
$$
|f(t,\xi)| \leq C(1 + |\xi|^{q-1})\;\;\mbox{for a.e.}\;t\in [0,T]\;\mbox{and all}\;\xi \in \mathbb{R}
$$
\item[\fbox{$f_2$}] There exist $\mu >p$ and $r>0$ such that for a.e. $t\in [0,T]$ and all $\xi\in \mathbb{R}$ with $|\xi| \geq r$,
$$
0< \mu F(t,\xi) \leq \xi f(t,\xi),
$$
where $F(t,\xi) = \int_{0}^{\xi}f(t,\sigma)d\sigma$.
\item[\fbox{$f_3$}] $\lim_{\xi \to 0} \frac{f(t,\xi)}{|\xi|^{p-1}} = 0$ uniformly for a.e. $t\in [0,T]$.
\end{itemize}
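A model nonlinearity satisfying $(f_1)-(f_3)$ (stated here only for illustration, and easily checked) is the pure power
$$
f(t,\xi) = |\xi|^{q-2}\xi, \qquad p<q<\infty,
$$
for which $F(t,\xi) = \frac{1}{q}|\xi|^{q}$: indeed $(f_1)$ holds with $C=1$, $(f_2)$ holds with $\mu = q$ and any $r>0$ since $\xi f(t,\xi) = |\xi|^{q} = qF(t,\xi)$, and $(f_3)$ follows from $|f(t,\xi)|/|\xi|^{p-1} = |\xi|^{q-p} \to 0$ as $\xi \to 0$.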
We say that $u\in E_{0}^{\alpha ,p}$ is a weak solution of problem (\ref{I01}), if
$$
\int_{0}^{T} |{_{0}}D_{t}^{\alpha}u(t)|^{p-2}{_{0}}D_{t}^{\alpha}u(t) {_{0}}D_{t}^{\alpha} \varphi (t)dt = \int_{0}^{T} f(t,u(t))\varphi (t)dt,
$$
for any $\varphi \in E_{0}^{\alpha,p}$, where space $E_{0}^{\alpha ,p}$ will be introduced in Section $\S$ 2.
Let $I: E_{0}^{\alpha, p} \to \mathbb{R}$ be the functional associated to (\ref{I01}), defined by
\begin{equation}\label{I02}
I(u) = \frac{1}{p}\int_{0}^{T} |{_{0}}D_{t}^{\alpha}u(t)|^{p}dt - \int_{\mathbb{R}}F(t,u(t))dt
\end{equation}
under our assumptions, $I\in C^1$ and we have
\begin{equation}\label{I03}
I'(u)v = \int_{0}^{T} |{_{0}}D_{t}^{\alpha}u(t)|^{p-2}{_{0}}D_{t}^{\alpha}u(t){_{0}}D_{t}^{\alpha}v(t)dt - \int_{0}^{T}f(t,u(t))v(t)dt.
\end{equation}
Moreover critical points of $I$ are weak solutions of problem (\ref{I01}).
Using the Mountain Pass Theorem, we get our main result.
\begin{Thm}\label{main}
Suppose that $f$ satisfies $(f_1) - (f_3)$. If $p<q< \infty$ then the problem (\ref{I01}) has a nontrivial weak solution in $E_{0}^{\alpha ,p}$.
\end{Thm}
The rest of the paper is organized as follows: In Section \S 2 we present preliminaries on
fractional calculus and we introduce the functional setting of the problem. In Section \S 3 we prove Theorem \ref{main}.
\section{Fractional Calculus}
In this section we introduce some basic definitions of fractional calculus which are used further in this paper. For the proof see \cite{AKHSJT, IP, SSAKOM}.
Let $u$ be a function defined on $[a,b]$. The left (right) Riemann-Liouville fractional integral of order $\alpha >0$ for the function $u$ is defined by
\begin{eqnarray*}
&_{a}I_{t}^{\alpha}u(t) = \frac{1}{\Gamma (\alpha)}\int_{a}^{t} (t-s)^{\alpha - 1}u(s)ds,\;t\in [a,b],\\
&_{t}I_{b}^{\alpha}u(t) = \frac{1}{\Gamma (\alpha)}\int_{t}^{b}(s-t)^{\alpha -1}u(s)ds,\;t\in [a,b],
\end{eqnarray*}
provided in both cases that the right-hand side is pointwise defined on $[a,b]$.
The left and right Riemann - Liouville fractional derivatives of order $\alpha >0$ for function $u$ denoted by $_{a}D_{t}^{\alpha}u(t)$ and $_{t}D_{b}^{\alpha}u(t)$, respectively, are defined by
\begin{eqnarray*}
&_{a}D_{t}^{\alpha}u(t) = \frac{d^{n}}{dt^{n}}{_{a}}I_{t}^{n-\alpha}u(t),\\
&_{t}D_{b}^{\alpha}u(t) = (-1)^{n}\frac{d^{n}}{dt^{n}}{ _{t}}I_{b}^{n-\alpha}u(t),
\end{eqnarray*}
where $t\in [a,b]$, $n-1 \leq \alpha < n$ and $n\in \mathbb{N}$.
The left and right Caputo fractional derivatives are defined via the above Riemann-Liouville fractional derivatives \cite{AKHSJT}. In particular, they are defined for functions belonging to the space of absolutely continuous functions. Namely, if $\alpha \in (n-1,n)$ and $u\in AC^{n}[a,b]$, then the left and right Caputo fractional derivatives of order $\alpha$ for the function $u$, denoted by $_{a}^{c}D_{t}^{\alpha}u(t)$ and $_{t}^{c}D_{b}^{\alpha}u(t)$ respectively, are defined by
\begin{eqnarray*}
&& _{a}^{c}D_{t}^{\alpha}u(t) = {_{a}}I_{t}^{n-\alpha}u^{(n)}(t) = \frac{1}{\Gamma (n-\alpha)}\int_{a}^{t} (t-s)^{n-\alpha -1}u^{n}(s)ds,\\
&&_{t}^{c}D_{b}^{\alpha}u(t) = (-1)^{n} {_{t}}I_{b}^{n-\alpha}u^{(n)}(t) = \frac{(-1)^{n}}{\Gamma (n-\alpha)}\int_{t}^{b} (s-t)^{n-\alpha-1}u^{(n)}(s)ds.
\end{eqnarray*}
The Riemann-Liouville fractional derivative and the Caputo fractional derivative are connected with each other by the following relations
\begin{Thm}\label{RL-C}
Let $n \in \mathbb{N}$ and $n-1 < \alpha < n$. If $u$ is a function defined on $[a,b]$ for which the Caputo fractional derivatives $_{a}^{c}D_{t}^{\alpha}u(t)$ and $_{t}^{c}D_{b}^{\alpha}u(t)$ of order $\alpha$ exists together with the Riemann-Liouville fractional derivatives $_{a}D_{t}^{\alpha}u(t)$ and $_{t}D_{b}^{\alpha}u(t)$, then
\begin{eqnarray*}
_{a}^{c}D_{t}^{\alpha}u(t) & = & _{a}D_{t}^{\alpha}u(t) - \sum_{k=0}^{n-1} \frac{u^{(k)}(a)}{\Gamma (k-\alpha + 1)} (t-a)^{k-\alpha}, \quad t\in [a,b], \\
_{t}^{c}D_{b}^{\alpha}u(t) & = & _{t}D_{b}^{\alpha}u(t) - \sum_{k=0}^{n-1} \frac{u^{(k)}(b)}{\Gamma (k-\alpha + 1)} (b-t)^{k-\alpha},\quad t\in [a,b].
\end{eqnarray*}
In particular, when $0<\alpha < 1$, we have
\begin{equation}\label{RL-C01}
_{a}^{c}D_{t}^{\alpha}u(t) = {_{a}}D_{t}^{\alpha}u(t) - \frac{u(a)}{\Gamma (1-\alpha)} (t-a)^{-\alpha}, \quad t\in [a,b]
\end{equation}
and
\begin{equation}\label{RL-C02}
_{t}^{c}D_{b}^{\alpha}u(t) = {_{t}}D_{b}^{\alpha}u(t) - \frac{u(b)}{\Gamma (1-\alpha)}(b-t)^{-\alpha},\quad t\in [a,b].
\end{equation}
\end{Thm}
Now we consider some properties of the Riemann-Liouville fractional integral and derivative operators.
\begin{itemize}
\item[(1)]
\begin{eqnarray*}
&&_{a}I_{t}^{\alpha}(_{a}I_{t}^{\beta}u(t)) = {_{a}}I_{t}^{\alpha + \beta}u(t)\;\;\mbox{and}\\
&&_{t}I_{b}^{\alpha}(_{t}I_{b}^{\beta}u(t)) = { _{t}}I_{b}^{\alpha + \beta}u(t)\;\;\forall \alpha, \beta >0,
\end{eqnarray*}
\item[(2)] {\bf Left inverse.} Let $u \in L^{1}[a,b]$ and $\alpha >0$,
\begin{eqnarray*}
&&_{a}D_{t}^{\alpha}(_{a}I_{t}^{\alpha}u(t)) = u(t),\;\mbox{a.e.}\;t\in[a,b]\;\;\mbox{and}\\
&&_{t}D_{b}^{\alpha}(_{t}I_{b}^{\alpha}u(t)) = u(t),\;\mbox{a.e.}\;t\in[a,b].
\end{eqnarray*}
\item[(3)] For $n-1\leq \alpha < n$, if the left and right Riemann-Liouville fractional derivatives $_{a}D_{t}^{\alpha}u(t)$ and $_{t}D_{b}^{\alpha}u(t)$ of the function $u$ are integrable on $[a,b]$, then
\begin{eqnarray*}
_{a}I_{t}^{\alpha}(_{a}D_{t}^{\alpha}u(t)) & = & u(t) - \sum_{k = 1}^{n} [_{a}I_{t}^{k-\alpha}u(t)]_{t=a} \frac{(t-a)^{\alpha -k}}{\Gamma (\alpha - k + 1)},\\
_{t}I_{b}^{\alpha}(_{t}D_{b}^{\alpha}u(t)) & = & u(t) - \sum_{k=1}^{n}[_{t}I_{b}^{k-\alpha}u(t)]_{t=b}\frac{(-1)^{n-k}(b-t)^{\alpha - k}}{\Gamma (\alpha - k +1)},
\end{eqnarray*}
for $t\in [a,b]$.
\item[(4)] {\bf Integration by parts}
\begin{equation}\label{FCeq1}
\int_{a}^{b}[_{a}I_{t}^{\alpha}u(t)]v(t)dt = \int_{a}^{b}u(t)_{t}I_{b}^{\alpha}v(t)dt,\;\alpha >0,
\end{equation}
provided that $u\in L^{p}[a,b]$, $v\in L^{q}[a,b]$ and
$$
p\geq 1,\;q\geq 1\;\;\mbox{and}\;\;\frac{1}{p}+\frac{1}{q} < 1+\alpha \;\;\mbox{or}\;\; p \neq 1,\;q\neq 1\;\;\mbox{and}\;\;\frac{1}{p} + \frac{1}{q} = 1+\alpha.
$$
\begin{equation}\label{FCeq2}
\int_{a}^{b} [_{a}D_{t}^{\alpha}u(t)]v(t)dt = \int_{a}^{b}u(t)_{t}D_{b}^{\alpha}v(t)dt,\;\;0<\alpha \leq 1,
\end{equation}
provided the boundary conditions
\begin{eqnarray*}
&u(a) = u(b) = 0,\;u'\in L^{\infty}[a,b],\;v\in L^{1}[a,b]\;\;\mbox{or}\\
&v(a) = v(b) = 0,\;v' \in L^{\infty}[a,b], \;u \in L^{1}[a,b],
\end{eqnarray*}
are fulfilled.
\end{itemize}
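The semigroup property (1) admits a quick numerical sanity check on monomials, using the classical formula $_{0}I_{t}^{\alpha}t^{\nu} = \frac{\Gamma(\nu+1)}{\Gamma(\nu+\alpha+1)}\,t^{\nu+\alpha}$ (a sketch; the test values below are arbitrary):
\begin{verbatim}
from math import gamma, isclose

def I_coef(alpha, nu):
    # 0I_t^alpha maps t^nu to I_coef(alpha, nu) * t^(nu + alpha)
    return gamma(nu + 1) / gamma(nu + alpha + 1)

for alpha, beta, nu in [(0.3, 0.4, 1.0), (0.7, 0.9, 2.5)]:
    lhs = I_coef(alpha, nu) * I_coef(beta, nu + alpha)  # 0I^beta(0I^alpha t^nu)
    rhs = I_coef(alpha + beta, nu)                      # 0I^(alpha+beta) t^nu
    assert isclose(lhs, rhs)
\end{verbatim}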
\subsection{Fractional Derivative Space}
In order to establish a variational structure for BVP (\ref{I01}), it is necessary to construct appropriate function spaces. For this setting we take some results from \cite{FJYZ0, FJYZ, YZ}.
Let us recall that for any fixed $t\in [0,T]$ and $1\leq p <\infty$,
\begin{eqnarray*}
\|u\|_{L^{p}[0,t]} = \left( \int_{0}^{t} |u(s)|^{p}ds \right)^{1/p},\;
\|u\|_{L^{p}} = \left( \int_{0}^{T} |u(s)|^{p}ds \right)^{1/p}\;\;\mbox{and}\;\;
\|u\|_{\infty} = \max_{t\in [0,T]}|u(t)|.
\end{eqnarray*}
\begin{Def}\label{FC-FEdef1}
Let $0< \alpha \leq 1$ and $1<p<\infty$. The fractional derivative spaces $E_{0}^{\alpha ,p}$ is defined by
\begin{eqnarray*}
E_{0}^{\alpha , p} &= & \{u\in L^{p}[0,T]/\;\;_{0}D_{t}^{\alpha}u \in L^{p}[0,T]\;\mbox{and}\;u(0) = u(T) = 0\}\\
&= & \overline{C_{0}^{\infty}[0,T]}^{\|.\|_{\alpha ,p}}.
\end{eqnarray*}
where $\|.\|_{\alpha ,p}$ is defined by
\begin{equation}\label{FC-FEeq1}
\|u\|_{\alpha ,p}^{p} = \int_{0}^{T} |u(t)|^{p}dt + \int_{0}^{T}|_{0}D_{t}^{\alpha}u(t)|^{p}dt.
\end{equation}
\end{Def}
\begin{Remark}\label{RL-Cnta}
For any $u\in E_{0}^{\alpha, p}$, noting the fact that $u(0) = 0$, we have ${^{c}_{0}}D_{t}^{\alpha}u(t) = {_{0}}D_{t}^{\alpha}u(t)$, $t\in [0,T]$, according to (\ref{RL-C01}).
\end{Remark}
\begin{Prop}\label{FC-FEprop1}
\cite{FJYZ0} Let $0< \alpha \leq 1$ and $1 < p <\infty$. The fractional derivative space $E_{0}^{\alpha , p}$ is a reflexive and separable Banach space.
\end{Prop}
We recall some properties of the fractional space $E_{0}^{\alpha ,p}$.
\begin{Lem}\label{FC-FElem1}
\cite{FJYZ0} Let $0< \alpha \leq 1$ and $1\leq p < \infty$. For any $u\in L^{p}[0,T]$ we have
\begin{equation}\label{FC-FEeq2}
\|_{0}I_{\xi}^{\alpha}u\|_{L^{p}[0,t]}\leq \frac{t^{\alpha}}{\Gamma(\alpha + 1)} \|u\|_{L^{p}[0,t]},\;\mbox{for}\;\xi\in [0,t],\;t\in[0,T].
\end{equation}
\end{Lem}
\begin{Prop}\label{FC-FEprop3}
\cite{FJYZ} Let $0< \alpha \leq 1$ and $1 < p < \infty$. For all $u\in E_{0}^{\alpha ,p}$, we have
\begin{equation}\label{FC-FEeq3}
\|u\|_{L^{p}} \leq \frac{T^{\alpha}}{\Gamma (\alpha +1)} \|_{0}D_{t}^{\alpha}u\|_{L^{p}}.
\end{equation}
If $\alpha > 1/p$ and $\frac{1}{p} + \frac{1}{q} = 1$, then
\begin{equation}\label{FC-FEeq4}
\|u\|_{\infty} \leq \frac{T^{\alpha -1/p}}{\Gamma (\alpha)((\alpha - 1)q +1)^{1/q}}\|_{0}D_{t}^{\alpha}u\|_{L^{p}}.
\end{equation}
\end{Prop}
\begin{Remark}\label{embb}
Let $1/p< \alpha \leq 1$. If $u\in E_{0}^{\alpha, p}$, then $u\in L^{q}[0,T]$ for every $q\in [p, +\infty]$. In fact
\begin{eqnarray*}
\int_{0}^{T} |u(t)|^{q}dt &=& \int_{0}^{T} |u(t)|^{q-p}|u(t)|^{p}dt\\
& \leq & \|u\|_{\infty}^{q-p} \|u\|_{L^{p}}^{p}.
\end{eqnarray*}
In particular the embedding $E_{0}^{\alpha ,p} \hookrightarrow L^{q}[0,T]$ is continuous for all $q\in [p, +\infty]$.
\end{Remark}
\noindent
According to (\ref{FC-FEeq3}), we can consider in $E_{0}^{\alpha ,p}$ the following norm
\begin{equation}\label{FC-FEeq5}
\|u\|_{\alpha ,p} = \|_{0}D_{t}^{\alpha}u\|_{L^{p}},
\end{equation}
and (\ref{FC-FEeq5}) is equivalent to (\ref{FC-FEeq1}).
\begin{Prop}\label{FC-FEprop4}
\cite{FJYZ} Let $0< \alpha \leq 1$ and $1 < p < \infty$. Assume that $\alpha > \frac{1}{p}$ and $\{u_{k}\} \rightharpoonup u$ in $E_{0}^{\alpha ,p}$. Then $u_{k} \to u$ in $C[0,T]$, i.e.
$$
\|u_{k} - u\|_{\infty} \to 0,\;k\to \infty.
$$
\end{Prop}
Now, we are going to prove that $E_{0}^{\alpha , p}$ is uniformly convex; for this we will use the following tools (see \cite{RAJF} for more details).
\begin{itemize}
\item[(1)] {\bf Reverse H\"older Inequality:} Let $0<p<1$, so that $p' = \frac{p}{p-1} <0$. If $f\in L^{p}(\Omega)$ and
$$
0< \int_{\Omega}|g(x)|^{p'}dx < \infty,
$$
then
\begin{equation}\label{HI}
\int_{\Omega} |f(x)g(x)|dx \geq \left( \int_{\Omega} |f(x)|^{p}dx \right)^{1/p} \left(\int_{\Omega} |g(x)|^{p'}dx \right)^{1/p'}.
\end{equation}
\item[(2)] {\bf Reverse Minkowski inequality:} Let $0<p<1$. If $u,v\in L^{p}(\Omega)$, then
\begin{equation}\label{MI}
\||u| + |v|\|_{L^p} \geq \|u\|_{L^p} + \|v\|_{L^p}
\end{equation}
\item[(3)] Let $z,w\in \mathbb{C}$. If $1< p \leq 2$ and $p'=\frac{p}{p-1}$, then
\begin{equation}\label{p1}
\left|\frac{z+w}{2} \right|^{p'} + \left| \frac{z-w}{2} \right|^{p'} \leq \left(\frac{1}{2}|z|^p + \frac{1}{2}|w|^p \right)^{1/(p-1)}.
\end{equation}
If $2\leq p < \infty$, then
\begin{equation}\label{p2}
\left| \frac{z+w}{2} \right|^{p} + \left| \frac{z-w}{2} \right|^{p} \leq \frac{1}{2}|z|^p + \frac{1}{2}|w|^p
\end{equation}
\end{itemize}
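Inequalities (\ref{p1}) and (\ref{p2}) are the classical Clarkson-type inequalities; the case (\ref{p2}) can be spot-checked numerically as follows (a sketch with arbitrary random samples):
\begin{verbatim}
import random

random.seed(0)
rc = lambda: complex(random.uniform(-5, 5), random.uniform(-5, 5))
for _ in range(10000):
    z, w, p = rc(), rc(), random.uniform(2, 6)
    lhs = abs((z + w) / 2) ** p + abs((z - w) / 2) ** p
    rhs = (abs(z) ** p + abs(w) ** p) / 2
    assert lhs <= rhs + 1e-9      # inequality (p2)
\end{verbatim}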
\begin{Lem}\label{lemu}
$(E_{0}^{\alpha ,p}, \|.\|_{\alpha , p})$ is uniformly convex.
\end{Lem}
\noindent
{\bf Proof.} Let $u,v \in E_{0}^{\alpha ,p}$ satisfy $\|u\|_{\alpha ,p} = \|v\|_{\alpha ,p} = 1$ and $\|u-v\|_{\alpha ,p} \geq \epsilon$, where $\epsilon \in (0,2)$.
\noindent
{\bf Case $p\geq 2$.} By (\ref{p2}), we have
\begin{eqnarray}\label{U01}
\left\| \frac{u+v}{2} \right\|_{\alpha ,p}^{p} + \left\| \frac{u-v}{2} \right\|_{\alpha ,p}^{p}\!\!\!\!\!& = &\!\!\! \int_{0}^{T} \left| \frac{{_{0}}D_{t}^{\alpha}u(t) + {_{0}}D_{t}^{\alpha}v(t)}{2} \right|^pdt + \int_{0}^{T} \left| \frac{{_{0}}D_{t}^{\alpha}u(t) - {_{0}}D_{t}^{\alpha}v(t)}{2} \right|^{p}dt\nonumber\\
&\leq& \frac{1}{2} \int_{0}^{T} |{_{0}}D_{t}^{\alpha}u(t)|^pdt + \frac{1}{2}\int_{0}^{T} |{_{0}}D_{t}^{\alpha}v(t)|^{p}dt\nonumber\\
& = & \frac{1}{2}\|u\|_{\alpha ,p}^{p} + \frac{1}{2}\|v\|_{\alpha ,p}^{p} = 1.
\end{eqnarray}
It follows from (\ref{U01}) that
$$
\left\| \frac{u+v}{2} \right\|_{\alpha ,p}^{p} \leq 1 - \frac{\epsilon^p}{2^p}.
$$
Taking $\delta = \delta (\epsilon)$ such that $1-(\epsilon/2)^p = (1-\delta)^{p}$, we obtain that
$$
\left\| \frac{u+v}{2} \right\|_{\alpha ,p} \leq (1-\delta).
$$
\noindent
{\bf Case $1<p<2$.} First, note that
$$
\|u\|_{\alpha ,p}^{p'} = \left( \int_{0}^{T} \left( |{_{0}}D_{t}^{\alpha}u(t)|^{p'} \right)^{p-1}dt \right)^{\frac{1}{p-1}},
$$
where $p' = \frac{p}{p-1}$. Using the reverse Minkowski inequality (\ref{MI}) and the inequality (\ref{p1}), we get
\begin{eqnarray}\label{U02}
&&\left\| \frac{u+v}{2} \right\|_{\alpha ,p}^{p'} + \left\| \frac{u-v}{2} \right\|_{\alpha ,p}^{p'} \nonumber \\
&& = \left[ \int_{0}^{T} \left( \left| \frac{{_{0}}D_{t}^{\alpha}u(t) + {_{0}}D_{t}^{\alpha}v(t) }{2} \right|^{p'} \right)^{p-1} dt\right]^{\frac{1}{p-1}} + \left[ \int_{0}^{T} \left( \left| \frac{{_{0}}D_{t}^{\alpha}u(t) - {_{0}}D_{t}^{\alpha}v(t) }{2} \right|^{p'} \right)^{p-1}dt \right]^{\frac{1}{p-1}}\nonumber\\
&&\leq \left[ \int_{0}^{T} \left( \left| \frac{{_{0}}D_{t}^{\alpha}u(t) + {_{0}}D_{t}^{\alpha}v(t)}{2} \right|^{p'} + \left| \frac{{_{0}}D_{t}^{\alpha}u(t) - {_{0}}D_{t}^{\alpha}v(t)}{2} \right|^{p'} \right)^{p-1}dt \right]^{\frac{1}{p-1}}\nonumber\\
&& \leq \left[ \int_{0}^{T} \left( \frac{|{_{0}}D_{t}^{\alpha}u(t)|^p}{2} + \frac{|{_{0}}D_{t}^{\alpha}v(t)|^p}{2}\right)dt \right]^{p' -1}\nonumber\\
&& = \left(\frac{1}{2}\|u\|_{\alpha ,p}^{p} + \frac{1}{2}\|v\|_{\alpha ,p}^{p} \right)^{p' -1} = 1.
\end{eqnarray}
By (\ref{U02}), we have
$$
\left\| \frac{u+v}{2} \right\|_{\alpha ,p}^{p'} \leq 1 - \frac{\epsilon^{p'}}{2^{p'}}.
$$
Taking $\delta = \delta(\epsilon)$ such that $1-(\epsilon /2)^{p'} = (1-\delta)^{p'}$, we get the desired claim. $\Box$
\section{Proof of Theorem \ref{main}}
Throughout this section we assume $p<q$ and $\frac{1}{p} < \alpha \leq 1$. For $u\in E_{0}^{\alpha ,p}$ we define
$$
J(u) = \frac{1}{p} \int_{0}^{T} |{_{0}}D_{t}^{\alpha}u(t)|^{p}dt,\;\;H(u) = \int_{0}^{T}F(t,u(t))dt,
$$
and
$$
I(u) = J(u) - H(u).
$$
Obviously, the energy functional $I: E_{0}^{\alpha,p} \to \mathbb{R}$ associated with problem (\ref{I01}) is well defined.
\begin{Lem}\label{MR1lem}
If $f$ satisfies assumption $(f_1)$, then the functional $H\in C^1(E_{0}^{\alpha,p}, \mathbb{R})$ and
$$
\langle H'(u), v \rangle = \int_{0}^{T} f(t,u(t))v(t)dt\;\;\mbox{for all}\;\;u,v\in E_{0}^{\alpha,p}.
$$
\end{Lem}
\noindent
{\bf Proof.}
\begin{itemize}
\item[(i)] $H$ is G\^ateaux-differentiable in $E_{0}^{\alpha ,p}$.
Let $u,v\in E_{0}^{\alpha ,p}$. For each $t\in [0,T]$ and $0< |\sigma| <1$, by the mean value theorem there exists $0< \delta <1$ such that
\begin{eqnarray*}
\frac{1}{\sigma} (F(t,u+\sigma v) - F(t,u)) & = & \frac{1}{\sigma} \int_{0}^{u+\sigma v} f(t,s)ds - \frac{1}{\sigma} \int_{0}^{u}f(t,s)ds\\
& = & \frac{1}{\sigma} \int_{u}^{u + \sigma v} f(t,s)ds = f(t, u+\delta \sigma v)v.
\end{eqnarray*}
By ($f_1$) and Young's inequality, we get
\begin{eqnarray*}
|f(t,u+\delta \sigma v) v| &\leq& C(|v| + | u + \delta \sigma v|^{q-1}|v| )\\
&\leq& C(2|v|^q + |u+\delta \sigma v|^q + 1) \\
&\leq& C2^q (|v|^q + |u|^q + 1).
\end{eqnarray*}
Since $q>1$, we have $u,v \in L^{q}[0,T]$ (see Remark \ref{embb}). Moreover, the Lebesgue Dominated Convergence Theorem implies
\begin{eqnarray*}
\lim_{\sigma \to 0} \frac{1}{\sigma} (H(u+\sigma v) - H(u)) & = & \lim_{\sigma \to 0} \int_{0}^{T} f(t, u + \delta \sigma v)vdt\\
& = & \int_{0}^{T} \lim_{\sigma \to 0}f(t, u + \delta \sigma v)vdt = \int_{0}^{T}f(t,u)vdt.
\end{eqnarray*}
\item[(ii)] Continuity of G\^ateaux-derivative.
Let $\{u_n\}, u\in E_{0}^{\alpha ,p}$ such that $u_n \to u$ strongly in $E_{0}^{\alpha ,p}$ as $n\to \infty$. Without loss of generality, we assume that $u_n(t) \to u(t)$ a.e. in $[0,T]$. By ($f_1$), for any $I \subset [0,T]$,
\begin{eqnarray}\label{MR01}
\int_{I} |f(t,u_n)|^{q'}dt &\leq& C^{q'}\int_{I} (1 + |u_n|^{q-1})^{q'}dt\nonumber\\
&\leq& C^{q'}2^{q'}\int_{I} (1 + |u_n|^q)dt\nonumber\\
&\leq & \overline{C}[\mu(I) + \|u_n\|_{\infty}^q \mu(I)],
\end{eqnarray}
where $\mu$ denotes the Lebesgue measure of $I$. It follows from (\ref{MR01}) that the sequence $\{|f(t,u_n) - f(t,u)|^{q'}\}$ is uniformly bounded and equi-integrable in $L^1[0,T]$. The Vitali Convergence Theorem implies
$$
\lim_{n\to \infty} \int_{0}^{T} |f(t,u_n) - f(t,u)|^{q'}dt = 0.
$$
Thus, by H\"older inequality and Remark \ref{embb}, we obtain
\begin{eqnarray*}
\|H'(u_n) - H'(u)\|_{(E_{0}^{\alpha ,p})^{*}} & = & \sup_{v\in E_{0}^{\alpha,p}, \|v\|_{\alpha,p}=1} \left| \int_{0}^{T} (f(t,u_n) - f(t,u))vdt \right|\\
&\leq & \|f(t,u_n) - f(t,u)\|_{L^{q'}} \|v\|_{L^q}\\
&\leq& K\|f(t,u_n) - f(t,u)\|_{L^{q'}}\\
&\to & 0,
\end{eqnarray*}
as $n\to \infty$. Hence, we complete the proof of the Lemma. $\Box$
\end{itemize}
\begin{Lem}\label{MR2lem}
The functional $J \in C^{1}(E_{0}^{\alpha ,p}, \mathbb{R})$ and
$$
\langle J'(u), v \rangle = \int_{0}^{T} |{_{0}}D_{t}^{\alpha}u(t)|^{p-2}{_{0}}D_{t}^{\alpha}u(t){_{0}}D_{t}^{\alpha}v(t)dt,
$$
for all $u,v\in E_{0}^{\alpha ,p}$. Moreover, for each $u\in E_{0}^{\alpha ,p}$, $J'(u) \in (E_{0}^{\alpha,p})^{*}$, where $(E_{0}^{\alpha ,p})^{*}$ denotes the dual of $E_{0}^{\alpha,p}$.
\end{Lem}
\noindent
{\bf Proof.}
First, it is easy to see that
\begin{equation}\label{MR02}
\langle J'(u), v \rangle = \int_{0}^{T} |{_{0}}D_{t}^{\alpha}u(t)|^{p-2}{_{0}}D_{t}^{\alpha}u(t) {_{0}}D_{t}^{\alpha}v(t)dt,
\end{equation}
for all $u,v \in E_{0}^{\alpha ,p}$. It follows from (\ref{MR02}) that for each $u\in E_{0}^{\alpha ,p}$, $J'(u) \in (E_{0}^{\alpha ,p})^{*}$.
Next, we prove that $J \in C^1(E_{0}^{\alpha ,p}, \mathbb{R})$. For the proof we need the following inequalities (see \cite{GDPJJM}):
\begin{itemize}
\item[(i)] If $p\in [2,\infty)$ then it holds
\begin{equation}\label{MR03}
\left| |z|^{p-2}z - |y|^{p-2}y \right| \leq \beta|z-y|(|z| + |y|)^{p-2}\;\;\mbox{for all}\;y,z\in \mathbb{R},
\end{equation}
with $\beta$ independent of $y$ and $z$;
\item[(ii)] If $p\in (1,2]$ then it holds:
\begin{equation}\label{MR04}
\left| |z|^{p-2}z - |y|^{p-2}y \right| \leq \beta |z-y|^{p-1}\;\;\mbox{for all}\;y,z\in \mathbb{R},
\end{equation}
with $\beta$ independent of $y$ and $z$.
\end{itemize}
We define $g:E_{0}^{\alpha ,p} \to L^{p'}[0,T]$ by
$$
g(u) = |{_{0}}D_{t}^{\alpha}u|^{p-2}{_{0}}D_{t}^{\alpha}u,
$$
for $u\in E_{0}^{\alpha ,p}$. Let us prove that $g$ is continuous.
\noindent
{\bf Case $p \in (2,\infty)$.} For $u,v \in E_{0}^{\alpha,p}$, by (\ref{MR03}) and H\"older inequality we have:
\begin{eqnarray}\label{MR05}
\int_{0}^{T} |g(u) - g(v)|^{p'} dt &=& \int_{0}^{T} \left| |{_{0}}D_{t}^{\alpha}u|^{p-2}{_{0}}D_{t}^{\alpha}u - |{_{0}}D_{t}^{\alpha}v|^{p-2}{_{0}}D_{t}^{\alpha}v \right|^{p'}dt\nonumber\\
&\leq & \beta \int_{0}^{T} |{_{0}}D_{t}^{\alpha}u - {_{0}}D_{t}^{\alpha}v|^{p'} \left( |{_{0}}D_{t}^{\alpha}u| + |{_{0}}D_{t}^{\alpha}v| \right)^{p'(p-2)}dt\nonumber\\
&\leq& \beta \left( \int_{0}^{T} |{_{0}}D_{t}^{\alpha}u - {_{0}}D_{t}^{\alpha}v|^{p} dt \right)^{p'/p} \left( \int_{0}^{T} \left[ |{_{0}}D_{t}^{\alpha}u| + |{_{0}}D_{t}^{\alpha}v| \right]^{p}dt \right)^{\frac{p'(p-2)}{p}}\nonumber\\
& = & \beta \|{_{0}}D_{t}^{\alpha}u - {_{0}}D_{t}^{\alpha}v\|_{L^p}^{p'} \||{_{0}}D_{t}^{\alpha}u| + |{_{0}}D_{t}^{\alpha}v|\|_{L^p}^{p'(p-2)}\nonumber\\
&\leq& \overline{C} \|u-v\|_{\alpha ,p}^{p'} \left(\|u\|_{\alpha ,p} + \|v\|_{\alpha,p} \right)^{p'(p-2)}
\end{eqnarray}
with $\overline{C}$ constant independent of $u$ and $v$.
\noindent
{\bf Case $p\in (1,2]$.} For $u,v\in E_{0}^{\alpha,p}$, by (\ref{MR04}) it follows
\begin{eqnarray}\label{MR06}
\int_{0}^{T} |g(u) - g(v)|^{p'}dt & = & \int_{0}^{T} \left| |{_{0}}D_{t}^{\alpha}u|^{p-2} {_{0}}D_{t}^{\alpha}u - |{_{0}}D_{t}^{\alpha}v|^{p-2}{_{0}}D_{t}^{\alpha}v \right|^{p'}dt\nonumber\\
&\leq& \beta^{p'} \int_{0}^{T} |{_{0}}D_{t}^{\alpha}u - {_{0}}D_{t}^{\alpha}v|^{p'(p-1)}dt\nonumber\\
&\leq& \overline{C}_1\|u-v\|_{\alpha ,p}^{p}
\end{eqnarray}
with $\overline{C}_1$ a constant independent of $u$ and $v$. From (\ref{MR05}) and (\ref{MR06}) the continuity of $g$ follows.
On the other hand, we claim that
\begin{equation}\label{MR07}
\|J'(u) - J'(v)\|_{(E_{0}^{\alpha , p})^{*}} \leq K\|g(u) - g(v)\|_{L^{p'}}
\end{equation}
with $K>0$ constant independent of $u,v \in E_{0}^{\alpha,p}$. Indeed, by the H\"older inequality we have:
\begin{eqnarray*}
\left| \langle J'(u) - J'(v), \varphi \rangle \right| &\leq& \int_{0}^{T} |g(u) - g(v)||{_{0}}D_{t}^{\alpha} \varphi|dt\\
&\leq& \left( \int_{0}^{T} |g(u) - g(v)|^{p'} \right)^{\frac{1}{p'}} \left( \int_{0}^{T} |{_{0}}D_{t}^{\alpha}\varphi|^pdt \right)^{\frac{1}{p}}\\
&\leq& K \|g(u) - g(v)\|_{L^{p'}}\|\varphi\|_{\alpha ,p}
\end{eqnarray*}
for $u,v,\varphi \in E_{0}^{\alpha ,p}$, proving (\ref{MR07}).
Now, by the continuity of $g$ and (\ref{MR07}), the conclusion of the Lemma follows in a standard way. $\Box$
Combining Lemma \ref{MR1lem} and Lemma \ref{MR2lem}, we get that $I\in C^1(E_{0}^{\alpha ,p}, \mathbb{R})$ and
$$
\langle I'(u),v \rangle = \int_{0}^{T} |{_{0}}D_{t}^{\alpha}u|^{p-2}{_{0}}D_{t}^{\alpha}u {_{0}}D_{t}^{\alpha}vdt - \int_{0}^{T} f(t,u)vdt,
$$
for all $u,v\in E_{0}^{\alpha ,p}$.
\begin{Lem}\label{PT1lem}
Suppose that $f$ satisfies $(f_{1}) - (f_{3})$. Then there exist $\rho >0$ and $\beta >0$ such that
$$
I(u) \geq \beta >0,
$$
for any $u\in E_{0}^{\alpha ,p}$ with $\|u\|_{\alpha ,p} = \rho$.
\end{Lem}
\noindent
{\bf Proof.} By assumptions $(f_1)$ and $(f_3)$, for any $\epsilon >0$, there exists $C_{\epsilon} >0$ such that for any $\xi \in \mathbb{R}$ and a.e. $t\in [0,T]$, we have
\begin{equation}\label{MR08}
|f(t,\xi)| \leq p\epsilon|\xi|^{p-1} + qC_{\epsilon} |\xi|^{q-1}.
\end{equation}
It follows from (\ref{MR08}) that
\begin{equation}\label{MR09}
|F(t,\xi)| \leq \epsilon |\xi|^{p} + C_\epsilon |\xi|^{q}.
\end{equation}
Let $u\in E_{0}^{\alpha ,p}$. By (\ref{MR09}), Proposition \ref{FC-FEprop3} and Remark \ref{embb}, we obtain
\begin{eqnarray}\label{MR10}
I(u) & = & \frac{1}{p}\int_{0}^{T}|{_{0}}D_{t}^{\alpha}u(t)|^pdt - \int_{0}^{T}F(t,u(t))dt\nonumber\\
&\geq& \frac{1}{p} \int_{0}^{T} |{_{0}}D_{t}^{\alpha}u(t)|^{p}dt - \epsilon \int_{0}^{T}|u(t)|^pdt - C_{\epsilon}\int_{0}^{T}|u(t)|^{q}dt\nonumber\\
&\geq& \frac{1}{p}\|u\|_{\alpha ,p}^{p} - \frac{\epsilon T^{\alpha}}{\Gamma (\alpha +1)}\|u\|_{\alpha ,p}^{p} - C_\epsilon \mathcal{K}\|u\|_{\alpha ,p}^{q}
\end{eqnarray}
where
$$\mathcal{K} = \frac{T^{\alpha q + 1 - \frac{q}{p}}}{(\Gamma (\alpha)[(\alpha - 1)q + 1]^{1/q})^{q-p}\Gamma (\alpha + 1)^{p}}.$$
Choosing $\epsilon = \frac{\Gamma (\alpha +1)}{2pT^{\alpha}}$, by (\ref{MR10}), we have
$$
I(u) \geq \frac{1}{2p}\|u\|_{\alpha ,p}^{p} - C\|u\|_{\alpha,p}^{q} \geq \|u\|_{\alpha,p}^{p}\left( \frac{1}{2p} - C\|u\|_{\alpha,p}^{q-p} \right),
$$
where $C$ is a constant only depending on $\alpha, p, T$. Now, let $\|u\|_{\alpha,p} = \rho >0$. Since $q>p$, we can choose $\rho$ sufficiently small such that
$$
\frac{1}{2p} - C\rho^{q-p} >0,
$$
so that
$$
I(u) \geq \rho^{p}\left(\frac{1}{2p} - C\rho^{q-p} \right) =:\beta >0.
$$
Thus, the Lemma is proved. $\Box$
\begin{Lem}\label{PT2lem}
Suppose that $f$ satisfies $(f_{1}) - (f_{3})$. Then there exists $e\in C_{0}^{\infty}[0,T]$ such that $\|e\|_{\alpha ,p} \geq \rho$ and $I(e) < \beta$, where $\rho$ and $\beta$ are given in Lemma \ref{PT1lem}.
\end{Lem}
\noindent
{\bf Proof.} From assumption ($f_2$) it follows that
\begin{equation}\label{MR11}
F(t, \xi) \geq r^{-\mu} \min\{F(t,r), F(t,-r)\}|\xi|^{\mu}
\end{equation}
for all $|\xi| >r$ and a.e. $t\in [0,T]$. Thus, by (\ref{MR11}) and $F(t,\xi) \leq \max_{|s| \leq r}F(t,s)$ for all $|\xi| \leq r$, we obtain
\begin{equation}\label{MR12}
F(t,\xi) \geq r^{-\mu} \min\{F(t,r), F(t,-r)\}|\xi|^{\mu} - \max_{|s|\leq r} F(t,s) - \min \{F(t,r), F(t,-r)\},
\end{equation}
for any $\xi \in \mathbb{R}$ and a.e. $t\in [0,T]$.
Since $C_{0}^{\infty}[0,T] \subset E_{0}^{\alpha,p}$, we can fix $u_{0} \in C_{0}^{\infty}[0,T]$ such that $\|u_0\|_{\alpha ,p} = 1$. Now, let $\sigma \geq 1$, by (\ref{MR12}), we have
\begin{eqnarray*}
I(\sigma u_0) & = & \frac{\sigma^{p}}{p}\|u_0\|_{\alpha ,p}^{p} - \int_{0}^{T}F(t,\sigma u_0(t))dt\\
&\leq& \frac{\sigma^p}{p} - r^{-\mu}\sigma^{\mu} \int_{0}^{T}\min \{F(t,r), F(t,-r)\}|u_0(t)|^{\mu}dt\\
&&+ \int_{0}^{T} \Big( \max_{|s|\leq r}F(t,s) + \min \{F(t,r), F(t,-r)\} \Big) dt.
\end{eqnarray*}
From assumptions $(f_1)$ and $(f_2)$, we get that $0< F(t,\xi) \leq C(r + r^{q})$ for $|\xi| = r$ and a.e. $t\in [0,T]$. Thus,
$$
0< \min \{F(t, r), F(t,-r)\} < C(r + r^q) \;\;\mbox{a.e.}\; t\in [0,T].
$$
Since $\mu >p$, passing to the limit as $\sigma\to \infty$, we obtain that $I(\sigma u_0) \to -\infty$. Thus, the assertion follows by taking $e = \sigma_0 u_0$ with $\sigma_0$ sufficiently large. $\Box$
\begin{Lem}\label{PT3lem}
Suppose that $f$ satisfies $(f_{1}) - (f_{3})$. Then the functional $I$ satisfies (PS) condition.
\end{Lem}
\noindent
{\bf Proof.} For any sequence $\{u_n\}\subset E_{0}^{\alpha,p}$ such that $I(u_n)$ is bounded and $I'(u_n) \to 0$ as $n\to \infty$, there exists $M>0$ such that
$$
|\langle I'(u_n), u_n \rangle| \leq M\|u_n\|_{\alpha,p}\;\;\mbox{and}\;\;|I(u_n)| \leq M.
$$
For each $n\in \mathbb{N}$, we denote
$$
\Omega_{n} = \{t\in [0,T]|\;|u_n(t)| \geq r\},\;\Omega'_n = [0,T]\setminus \Omega_n.
$$
We have
\begin{equation}\label{MR13}
\frac{1}{p}\|u_n\|_{\alpha,p}^{p} - \left( \int_{\Omega_n}F(t,u_n) + \int_{\Omega'_n} F(t,u_n) \right) \leq M.
\end{equation}
We proceed with obtaining estimates independent of $n$ for the integrals in (\ref{MR13}). Let $n\in \mathbb{N}$ be arbitrarily chosen. From assumption ($f_1$), we have
\begin{equation}\label{MR14}
|F(t,\xi)| \leq 2C(|\xi|^{q} + 1).
\end{equation}
If $t\in \Omega'_n$, then $|u_n(t)| <r$ and by (\ref{MR14}), it follows
$$
F(t,u_n) \leq 2C(|u_n|^{q} + 1)\leq 2C(r^q + 1)
$$
and hence
\begin{equation}\label{MR15}
\int_{\Omega'_n} F(t,u_n)dt \leq 2CT(r^q + 1) = K_1.
\end{equation}
If $t\in \Omega_n$, then $|u_n(t)| \geq r$ and by ($f_2$) it holds
$$
F(t,u_n) \leq \frac{1}{\mu}f(t,u_n(t))u_n(t)
$$
which gives
\begin{equation}\label{MR16}
\int_{\Omega_n} F(t,u_n)dt \leq \int_{\Omega_n} \frac{1}{\mu}f(t,u_n(t))u_n(t)dt = \frac{1}{\mu} \left( \int_{0}^{T} f(t,u_n)u_ndt - \int_{\Omega'_n} f(t,u_n)u_ndt \right)
\end{equation}
By ($f_1$), we deduce
\begin{eqnarray*}
\left| \int_{\Omega'_n}f(t,u_n)u_ndt \right| &\leq& \int_{\Omega'_n} C(|u_n| + |u_n|^q)dt\\
&\leq& CTr + CTr^q = K_2,
\end{eqnarray*}
which yields
\begin{equation}\label{MR17}
-\frac{1}{\mu}\int_{\Omega'_n}f(t,u_n)u_ndt \leq \frac{K_2}{\mu}.
\end{equation}
Finally, by (\ref{MR13}), (\ref{MR15}), (\ref{MR16}) and (\ref{MR17}) we obtain
\begin{eqnarray}\label{MR18}
\frac{1}{p}\|u_n\|_{\alpha,p}^{p} - \frac{1}{\mu}\int_{0}^{T}f(t,u_n)u_ndt &\leq& M + K_1 + \frac{K_2}{\mu} = K,\nonumber\\
\frac{1}{p}\|u_n\|_{\alpha,p}^{p} - \frac{1}{\mu}\langle H'(u_n), u_n\rangle &\leq& K
\end{eqnarray}
On the other hand, since $I'(u_n) \to 0$, there is $n_0$ such that $|\langle I'(u_n), u_n \rangle| \leq M \|u_n\|_{\alpha ,p}$ for $n\geq n_0$. Consequently, for all $n\geq n_0$, we have
$$
|\|u_n\|_{\alpha,p}^{p} - \langle H'(u_n), u_n\rangle| \leq M\|u_n\|_{\alpha,p}
$$
which gives
\begin{equation}\label{MR19}
-\frac{1}{\mu}\|u_n\|_{\alpha ,p}^{p} - \frac{M}{\mu} \|u_n\|_{\alpha,p} \leq -\frac{1}{\mu} \langle H'(u_n), u_n \rangle.
\end{equation}
Now, from (\ref{MR18}) and (\ref{MR19}) it results
$$
\left(\frac{1}{p} - \frac{1}{\mu} \right)\|u_n\|_{\alpha ,p}^{p} - \frac{M}{\mu}\|u_n\|_{\alpha,p} \leq K
$$
and taking into account that $\mu >p$, we conclude that $\{u_n\}$ is bounded. Since $E_{0}^{\alpha,p}$ is a reflexive Banach space, up to a subsequence, still denoted by $\{u_n\}$, we have $u_n \rightharpoonup u$ in $E_{0}^{\alpha,p}$. Then $\langle I'(u_n), u_n-u \rangle \to 0$. Thus, we obtain
\begin{eqnarray}\label{MR20}
\langle I'(u_n), u_n - u \rangle & = & \int_{0}^{T} |{_{0}}D_{t}^{\alpha}u_n|^{p-2}{_{0}}D_{t}^{\alpha}u_n({_{0}}D_{t}^{\alpha}u_n-{_{0}}D_{t}^{\alpha}u)dt - \int_{0}^{T}f(t,u_n)(u_n-u)dt\nonumber\\
&\to& 0
\end{eqnarray}
as $n\to \infty$. Moreover, by Proposition \ref{FC-FEprop4},
\begin{equation}\label{MR21}
u_n \to u\;\mbox{strongly in}\; C[0,T].
\end{equation}
From (\ref{MR21}), $\{u_n\}$ is bounded in $C[0,T]$; then by assumption ($f_1$), we have
\begin{eqnarray*}
\left| \int_{0}^{T} f(t,u_n)(u_n - u)dt \right| &\leq& \int_{0}^{T}|f(t,u_n)||u_n - u|dt\\
&\leq& C\int_{0}^{T} |u_n - u|dt + C\int_{0}^{T} |u_n|^{q-1}|u_n - u|dt\\
&\leq& CT\|u_n - u\|_{\infty} + CT\|u_n\|_{\infty}^{q-1}\|u_n - u\|_{\infty}.
\end{eqnarray*}
This combined with (\ref{MR21}) gives
$$
\lim_{n\to \infty} \int_{0}^{T}f(t,u_n)(u_n-u)dt = 0,
$$
hence one has
\begin{equation}\label{MR22}
\int_{0}^{T} |{_{0}}D_{t}^{\alpha}u_n|^{p-2}{_{0}}D_{t}^{\alpha}u_n({_{0}}D_{t}^{\alpha}u_n - {_{0}}D_{t}^{\alpha}u)dt \to 0\;\mbox{as}\;n\to \infty.
\end{equation}
Using the standard inequalities
\begin{eqnarray*}
&& (|z|^{p-2}z - |y|^{p-2}y)(z-y) \geq C_{p}|z-y|^{p}\;\;\mbox{if}\;\;p\geq 2\\
&&(|z|^{p-2}z - |y|^{p-2}y)(z-y) \geq \tilde{C}_p\frac{|z-y|^2}{(|z| + |y|)^{2-p}}\;\;\mbox{if} \;\;1<p<2.
\end{eqnarray*}
(see \cite{IPe}), from which we obtain for $p>2$
\begin{eqnarray}\label{MR23}
\int_{0}^{T} |{_{0}}D_{t}^{\alpha}u_n - {_{0}}D_{t}^{\alpha}u|^pdt &\leq& \frac{1}{C_p} \int_{0}^{T} \left[ |{_{0}}D_{t}^{\alpha}u_n|^{p-2}{_{0}}D_{t}^{\alpha}u_n - |{_{0}}D_{t}^{\alpha}u|^{p-2}{_{0}}D_{t}^{\alpha}u \right] ({_{0}}D_{t}^{\alpha}u_n - {_{0}}D_{t}^{\alpha}u)dt\nonumber\\
&&\to 0,
\end{eqnarray}
as $n\to \infty$. For $1<p<2$, by reverse H\"older inequality, we have
\begin{eqnarray}\label{MR24}
\int_{0}^{T} |{_{0}}D_{t}^{\alpha}u_n - {_{0}}D_{t}^{\alpha}u|^pdt & \leq & \tilde{C}_{p}^{-\frac{p}{2}} \left( \int_{0}^{T}(|{_{0}}D_{t}^{\alpha}u_n| + |{_{0}}D_{t}^{\alpha}u|)^pdt \right)^{\frac{2-p}{2}}\nonumber\\
&\times&\!\!\!\!\! \left(\int_{0}^{T}[|{_{0}}D_{t}^{\alpha}u_n|^{p-2}{_{0}}D_{t}^{\alpha}u_n - |{_{0}}D_{t}^{\alpha}u|^{p-2}{_{0}}D_{t}^{\alpha}u]({_{0}}D_{t}^{\alpha}u_n - {_{0}}D_{t}^{\alpha}u)dt\right)^{p/2}\nonumber\\
&\leq&\!\!\!\! \overline{C} \left(\int_{0}^{T}[|{_{0}}D_{t}^{\alpha}u_n|^{p-2}{_{0}}D_{t}^{\alpha}u_n - |{_{0}}D_{t}^{\alpha}u|^{p-2}{_{0}}D_{t}^{\alpha}u][{_{0}}D_{t}^{\alpha}u_n - {_{0}}D_{t}^{\alpha}u]dt \right)^{p/2}\nonumber\\
&\to &0,
\end{eqnarray}
as $n\to \infty$. Combining (\ref{MR23}) with (\ref{MR24}), we get that $u_n \to u$ strongly in $E_{0}^{\alpha,p}$ as $n \to \infty$. Therefore, $I$ satisfies the (PS) condition. $\Box$
\noindent
{\bf Proof of Theorem \ref{main}.}
Since Lemmas \ref{PT1lem} - \ref{PT3lem} hold, the Mountain Pass Theorem (see \cite{PR}) gives that there exists a critical point $u\in E_{0}^{\alpha ,p}$ of $I$. Moreover,
$$
I(u) \geq \beta > 0 = I(0).
$$
Thus, $u\neq 0$. $\Box$ | {"config": "arxiv", "file": "1412.6438.tex"} |
TITLE: What do I do with these equations to create a Jacobian matrix?
QUESTION [2 upvotes]: My instructions were to find equilibrium values (the picture I added only shows $E_0$; I was hoping that if I got it figured out I could do the others rather than have someone do all of them for me), which my professor said means setting the equations equal to zero and solving, forming a Jacobian matrix using all the partials, and evaluating the Jacobian at the equilibrium values to find eigenvalues. I've done none of these things in any math class, and my professor said he knew that, but I feel confident I can do the Jacobian matrices if someone would explain to me what I do with these initial equations to get them into a Jacobian. Thank you in advance, especially if you explain rather than just post an answer, because I want to learn what is happening :)
REPLY [2 votes]: The Jacobian is just the matrix of partial derivatives. You can compute it row-by-row. For example, the first equation is:
$$f_1(T,D,C) = \lambda -\mu T - \beta T C,$$
which has partial derivatives,
\begin{align}
\frac{d f_1}{dT} &= -\mu -\beta C \\
\frac{d f_1}{d D} &= 0 \\
\frac{d f_1}{d C} &= -\beta T.
\end{align}
This gives us the first row of the Jacobian matrix:
$$J(T,D,C) = \begin{bmatrix}
-\mu -\beta C & 0 & -\beta T \\
&\text{second row of Jacobian} \\
&\text{third row...}
\end{bmatrix}$$.
It is a matrix that depends on the parameters $T,D,C$. So you can plug in the specific values $(T,D,C) = (T_0, 0, 0)$ to get:
$$J(T_0,0,0) = \begin{bmatrix}
-\mu -\beta \cdot 0 & 0 & -\beta T_0 \\
&\dots \\
&\dots
\end{bmatrix}$$.
Hopefully you can do the rest of the problem, seeing how it is done for the first row.
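If you want to automate the bookkeeping, here is a sketch using SymPy (only the first equation is visible in your picture, so the `f2` and `f3` placeholders below are hypothetical and should be filled in from your own system):

```python
import sympy as sp

T, D, C = sp.symbols('T D C')
lam, mu, beta = sp.symbols('lambda mu beta')

f1 = lam - mu*T - beta*T*C
F = sp.Matrix([f1])                 # e.g. sp.Matrix([f1, f2, f3]) for the full system
J = F.jacobian([T, D, C])
print(J)                            # -> [[-C*beta - mu, 0, -T*beta]]

T0 = sp.Symbol('T_0')
print(J.subs({T: T0, D: 0, C: 0}))  # Jacobian evaluated at the equilibrium (T_0, 0, 0)
```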
Jacobians are a very interesting concept - if you have time you should consider looking more into them to understand the meaning, which can be obscured if you focus too much on computation. Here's a picture I made showing what the Jacobian matrix means geometrically for a system with two equations and two variables. | {"set_name": "stack_exchange", "score": 2, "question_id": 951917}
TITLE: Let $n$ be a positive integer such that every group of order $n$ is cyclic
QUESTION [1 upvotes]: Let $n$ be a positive integer such that every group of order $n$ is cyclic. Then prove that for all prime numbers $p$, $p^2\nmid n$.
My attempts:
Let $G$ be a group of order $n$; by hypothesis $G$ is cyclic.
If $p^2$ divides $n$ for some prime $p$ then $G$ has a subgroup of order $p^2$. Since $p^2\mid n$, there exists $n'$ such that $n=n'p^2$ and $\gcd(n',p^2)=1$. So, $G=\mathbb Z/p^2 \mathbb Z \times \mathbb Z/n'\mathbb Z$. If $\mathbb Z/p^2 \mathbb Z$ were $\mathbb Z/p \mathbb Z\times \mathbb Z/p \mathbb Z$ then we could say that $G$ is not cyclic, i.e. we could arrive at a contradiction. But in the other case I can not arrive at a contradiction.
Can anyone help me in this regard?
Thanks in advance.
REPLY [4 votes]: All this problem asks is that for such $n$ that is not square-free you devise a group of order $n$ that is not cyclic. Assume $n=p^{\ell}m$ where $\ell$ is an integer at least $2$, and $\gcd(p,m)=1$ and where $m$ may be $1$. Then the group $$\underbrace{\mathbb{Z}/p\mathbb{Z}\times \dots\times \mathbb{Z}/p\mathbb{Z}}_{\ell \text{ factors}}\times\mathbb{Z}/m\mathbb{Z}$$ [where the operation is component-wise addition] has order $p^{\ell}m=n$ and is not cyclic | {"set_name": "stack_exchange", "score": 1, "question_id": 4203278} |
TITLE: Show that $f = \sum_{k=1}^{\infty} \frac{1}{k} \mathbb{1}_{A_k}$ is a representation for a measurable function
QUESTION [3 upvotes]: I want to show that for a (Borel-)measurable function $f \in \mathcal{M}^+(\Omega,\mathfrak{S})$, $f: \Omega \rightarrow [0,\infty]$, there exists a representation of the form
$$f = \sum_{k=1}^{\infty} \frac{1}{k} \mathbb{1}_{F_k}$$
for $\mathfrak{S}$ measurable $F_k$.
I think I need to look at $F_1:=[f\geq 1]$ and for $k\geq2$:
$$F_k:=[f\geq \frac{1}{k} + \sum_{i=1}^{k-1} \frac{1}{i} \mathbb{1}_{F_i}]$$
but I have no idea if that representation even makes sense.
Any help/solutions would be greatly appreciated.
REPLY [4 votes]: OP's construction indeed works.
1. Let $f_0 = 0$, and we define $(f_k)_{k\geq 1}$ and $(F_k)_{k\geq 1}$ recursively as follows: Suppose $ f_{k-1}$ has been defined as a measurable function satisfying $f_{k-1} \leq f$ on $\Omega$. Then let
$$ F_k = \{\omega \in \Omega : f(\omega) \geq \frac{1}{k} + f_{k-1}(\omega)\}. $$
Since both $f$ and $f_k$ are measurable, $F_k$ is also a measurable set. Then let
$$f_k = f_{k-1} + \frac{1}{k} \mathbf{1}_{F_k}.$$
Since both $f_{k-1}$ and $F_k$ are measurable, so is $f_k$. Moreover, the induction hypothesis and the definition of $F_k$ together show that $f_k \leq f$ on $\Omega$.
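At a single point $\omega$, this construction is just the greedy representation of the number $x=f(\omega)$ by distinct harmonic terms; here is a quick numerical illustration (the test value and number of terms are arbitrary):

```python
def greedy_harmonic(x, n_terms=10**6):
    """Greedy partial sums s_k: add 1/k whenever x >= 1/k + s_{k-1}."""
    s = 0.0
    for k in range(1, n_terms + 1):
        if x >= 1.0 / k + s:
            s += 1.0 / k
    return s

x = 2.718281828
print(x - greedy_harmonic(x))   # the gap shrinks to 0 as n_terms grows
```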
2. By the construction, $(f_k)$ is an increasing sequence of measurable functions, and so, the function $\tilde{f}$ defined by
$$ \tilde{f} = \lim_{k\to\infty} f_k = \sum_{k=1}^{\infty} \frac{1}{k} \mathbf{1}_{F_k}$$
is also a measurable function that still satisfies $\tilde{f} \leq f$ on $\Omega$. Now we show that $f = \tilde{f}$. Assume otherwise that there exists $\omega \in \Omega$ such that $\tilde{f}(\omega) < f(\omega)$. In particular, $\tilde{f}(\omega)$ is finite. Let $\varepsilon = f(\omega) - \tilde{f}(\omega) > 0$. Then $\omega \in F_k$ holds for any $k$ satisfying $\frac{1}{k} \leq \varepsilon$, and so,
$$ \tilde{f}(\omega) \geq \sum_{k \geq 1/\varepsilon} \frac{1}{k} = \infty, $$
a contradiction! Therefore $\tilde{f} = f$. | {"set_name": "stack_exchange", "score": 3, "question_id": 4448744} |
\chapter{The Symplectic Category}\label{chap:symp}
In this chapter, we provide the necessary background on the symplectic category. In particular, the characterizations of symplectic reduction and of symplectic groupoids given here will be used throughout this thesis. We also discuss some new ideas concerning fiber products and simplicial objects in the symplectic category.
\section{Motivation}
As a first attempt to construct a category whose objects are symplectic manifolds, one may think to allow only symplectomorphisms as morphisms. This gives a perfectly nice category but is too restrictive since, for example, there will be no morphisms between symplectic manifolds of different dimensions. Even allowing general symplectic maps (i.e. maps $f: (X,\omega_X) \to (Y,\omega_Y)$ such that $f^*\omega_Y = \omega_X$) does not give enough morphisms since this condition requires that $f$ be an immersion.
Rather, it turns out that a good notion of ``morphism'' is provided by the notion of a \emph{canonical relation}. We point out here two motivations for this. First, it is well-known that a diffeomorphism $f: X \to Y$ between symplectic manifolds $X$ and $Y$ is a symplectomorphism if and only if its graph is a lagrangian submanifold of $\overline X \times Y$. Hence, it makes sense to consider arbitrary lagrangian submanifolds of this product as a kind of ``generalized'' symplectomorphisms.
The second motivation comes from a quantization perspective. Under the ``quantization dictionary''---describing a ``functor'' $Q$ from a category of symplectic manifolds to a category of vector spaces\footnote{One usually requires that these actually be Hilbert spaces.}---symplectic manifolds should quantize to vector spaces and lagrangian submanifolds to vectors:
\[
Q: X \mapsto Q(X),\ L \subset X \mapsto v \in Q(X).
\]
Furthermore, under this correspondence duals should quantize to duals and products to tensor products:
\[
Q: \overline{X} \mapsto Q(X)^*,\ X \times Y \mapsto Q(X) \otimes Q(Y).
\]
Then considering a linear map $Q(X) \to Q(Y)$ as an element of $Q(X)^* \otimes Q(Y)$, we see that the object which should quantize to a linear map is a lagrangian submanifold of $\overline{X} \times Y$; this is what we will call a \emph{canonical relation}.
\section{Symplectic Linear Algebra}
Here we collect a few basic facts from symplectic linear algebra which will be useful. Throughout, we assume that $(U,\omega_U)$, $(V,\omega_V)$, and $(W,\omega_W)$ are symplectic vector spaces. As usual, a bar written over a symplectic vector space indicates the the sign of the symplectic form is switched, and $^\perp$ indicates the symplectic orthogonal of a subspace.
\begin{prop}\label{prop:dom}
Suppose that $L \subset \overline{V} \otimes W$ is a lagrangian subspace. Then the image $pr_V(L)$ of $L$ under the projection $pr_V: \overline{V} \oplus W \to \overline{V}$ to the first factor is a coisotropic subspace of $\overline{V}$, and hence of $V$ as well.
\end{prop}
\begin{proof}
Let $v \in pr_V(L)^\perp$. For any $u + w \in L$, we have
\[
-\omega_V \oplus \omega_W(v+0,u+w) = -\omega_V(v,u) + \omega_W(0,w) = 0
\]
since $u \in pr_V(L)$ and $v$ is in the symplectic orthogonal to this space. Thus since $L$ is lagrangian, $v + 0 \in L$. Hence $v \in pr_V(L)$, so $pr_V(L)$ is coisotropic as claimed.
\end{proof}
By exchanging the roles of $V$ and $W$ above and exchanging the components of $L$, we see that the same result holds for the image of $L$ under the projection to the second factor.
\begin{prop}\label{prop:comp}
Suppose that $L \subset \overline{U} \times V$ and $L' \subset \overline{V} \times W$ are lagrangian subspaces such that $L \oplus L'$ and $U \oplus \Delta_V \oplus W$ are transverse subspaces of $\overline{U} \oplus V \oplus \overline{V} \oplus W$, where $\Delta_V \subset V \oplus \overline{V}$ is the diagonal. Then the image $L' \circ L$ of
\[
L \times_V L' := (L \oplus L') \cap (U \oplus \Delta_V \oplus W)
\]
under the natural projection from $\overline{U} \oplus V \oplus \overline{V} \oplus W$ to $\overline{U} \oplus W$ is lagrangian.
\end{prop}
\begin{proof}
We first claim that the image $L' \circ L$ of the above projection is isotropic in $\overline{U} \oplus W$. Indeed, suppose that $(u,w), (u',w') \in L \circ L'$. Then there are $v,v' \in V$ such that
\[
(u,v) \in L, (v,w) \in L' \text{ and } (u',v') \in L, (v',w') \in L'.
\]
Since $L$ and $L'$ are lagrangian (and hence isotropic), we have
\[
0 = -\omega_U \oplus \omega_V((u,v),(u',v')) = -\omega_U(u,u') + \omega_V(v,v'),
\]
and
\[
0 = -\omega_V \oplus \omega_W((v,w),(v',w')) = -\omega_V(v,v') + \omega_W(w,w').
\]
Thus
\[
-\omega_U \oplus \omega_W((u,w),(u',w')) = -\omega_U(u,u') + \omega_W(w,w') = -\omega_V(v,v') + \omega_V(v,v') = 0,
\]
showing that $L' \circ L$ is isotropic as claimed.
Next, a dimension count using transversality shows that $L \times_V L'$ has half the dimension of $\overline U \oplus W$: writing $\dim U = 2a$, $\dim V = 2b$, $\dim W = 2c$, transversality gives $\dim(L \times_V L') = (a+b) + (b+c) + (2a+2b+2c) - (2a+4b+2c) = a+c$. And finally, if $(0,w,w,0) \in L \times_V L'$, then it is easy to see that $(0,w,w,0)$ annihilates $U \oplus \Delta_V \oplus W$ and $L \times L'$ with respect to the symplectic form
\[
-\omega_U \oplus \omega_V \oplus (-\omega_V) \oplus \omega_W,
\]
and hence annihilates all of $\overline{U} \oplus V \oplus \overline{V} \oplus W$ by transversality. This implies that $w = 0$ by the non-degeneracy of the above symplectic structure, so we conclude that the projection of $L \times_V L'$ onto $L' \circ L$ is injective. Then $L' \circ L$ has half the dimension of $\overline{U} \oplus W$ as well and is thus lagrangian.
\end{proof}
\begin{prop}\label{prop:lin-factor}
Suppose that $L \subset \overline{V} \oplus W$ is a lagrangian subspace, so that $C := pr_V(L)$ and $Y := pr_W(L)$ are coisotropic subspaces of $V$ and $W$ respectively. Then the symplectic vector spaces $C/C^\perp$ and $Y/Y^\perp$ are naturally symplectomorphic.
\end{prop}
\begin{proof}
We define a map $T: C \to Y/Y^\perp$ as follows: for $v \in C$, choose $w \in Y$ such that $(v,w) \in L$ and set $T(v) := [w]$. To see that this is well-defined, suppose that $(v,w), (v,w') \in L$ and let $z \in Y$. Choose $x \in C$ such that $(x,z) \in L$. Then
\[
0 = -\omega_V \oplus \omega_W((v,w),(x,z)) = -\omega_V(v,x) + \omega_W(w,z),
\]
and similarly $\omega_V(v,x) = \omega_W(w',z)$. Thus
\[
\omega_W(w-w',z) = \omega_W(w,z) - \omega_W(w',z) = \omega_V(v,x) - \omega_V(v,x) = 0.
\]
Thus $w-w' \in Y^\perp$, so $[w] = [w']$ and hence $T$ is well-defined; it is clearly linear.
The equation
\[
-\omega_V(v,v') + \omega_W(w,w') = 0 \text{ for } (v,w), (v',w') \in L
\]
easily implies that $C^\perp = \ker T$, so we get an induced isomorphism
\[
T: C/C^\perp \to Y/Y^\perp.
\]
It is then straightforward to check that this is a symplectomorphism using the fact that $L \subset \overline{V} \oplus W$ is lagrangian.
\end{proof}
\section{Smooth Relations}
Before moving on to canonical relations, we first consider the general theory of smooth relations.
\begin{defn}
A \emph{smooth relation} $R$ from a smooth manifold $M$ to a smooth manifold $N$ is a closed submanifold of the product $M \times N$. We will use the notation $R: M \to N$ to mean that $R$ is a smooth relation from $M$ to $N$. We will suggestively use the notation $R: m \mapsto n$ to mean that $(m,n) \in R$, and will think of relations as partially-defined, multi-valued functions. The \emph{transpose} of a smooth relation $R: M \to N$ is the smooth relation $R^t: N \to M$ defined by $(n,m) \in R^t$ if $(m,n) \in R$.
\end{defn}
\begin{rmk}
To make compositions easier to read, it may be better to draw arrows in the opposite direction and say that $R: N \leftarrow M$ is a smooth relation \emph{to} $N$ \emph{from} $M$; this is the approach used by Weinstein in \cite{W1}. We will stick with the notation above.
\end{rmk}
Given a smooth relation $R: M \to N$, the \emph{domain} of $R$ is
\[
\dom R := \{m \in M\ |\ \text{there exists $n \in N$ such that $(m,n) \in R$}\} \subseteq M.
\]
In other words, this is the domain of $R$ when we think of $R$ as a partially-defined, multi-valued function. Given a subset $U \subseteq M$, its \emph{image} under the smooth relation $R: M \to N$ is the set
\[
R(U) := \{n \in N\ |\ \text{there exists $m \in U$ such that $(m,n) \in R$}\} \subseteq N.
\]
In particular, we can speak of the image $R(m)$ of a point $m \in M$ and of the image $\im R := R(M)$ of $R$. The domain of $R$ can then be written as $\dom R = R^t(N) = \im R^t$.
We may now attempt to define a composition of smooth relations simply by using the usual composition of relations: given a smooth relation $R: M \to N$ and a smooth relation $R': N \to Q$, the composition $R' \circ R: M \to Q$ is defined to be
\[
R' \circ R := \{ (m,q) \in M \times Q\ |\ \text{there exists } n \in N \text{ such that } (m,n) \in R \text{ and } (n,q) \in R'\}.
\]
This is the same as taking the intersection of $R \times R'$ and $M \times \Delta_N \times Q$ in $M \times N \times N \times Q$, where $\Delta_N$ denotes the diagonal in $N \times N$, and projecting to $M \times Q$. However, we immediately run into the problem that the above composition need no longer be a smooth closed submanifold of $M \times Q$, either because the intersection of $R \times R'$ and $M \times \Delta_N \times Q$ is not smooth or because the projection to $M \times Q$ is ill-behaved, or both. To fix this, we introduce the following notions:
\begin{defn}
A pair $(R,R')$ of smooth relations $R: M \to N$ and $R': N \to Q$ is \emph{transversal} if the submanifolds $R \times R'$ and $M \times \Delta_N \times Q$ intersect transversally. The pair $(R,R')$ is \emph{strongly transversal} if it is transversal and in addition the projection of
\[
(R \times R') \cap (M \times \Delta_N \times Q)
\]
to $M \times Q$ is a proper embedding.
\end{defn}
As a consequence, for a strongly transversal pair $(R,R')$, the composition $R' \circ R$ is indeed a smooth relation from $M$ to $Q$.
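To see what can go wrong without transversality, here is a quick ad hoc example (not used later): take $R: \mathbb{R} \to \mathbb{R}$ to be the graph of $x \mapsto x^2$ and take $R' := R^t$. Then
\[
R' \circ R = \{(x,z) \in \mathbb{R} \times \mathbb{R}\ |\ x^2 = z^2\}
\]
is the union of the two lines $z = \pm x$, which is not a submanifold at the origin; a direct check shows that the pair $(R,R')$ is transversal away from the points lying over the origin but not at them.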
\begin{defn}
A relation $R: M \to N$ is said to be:
\begin{itemize}
\item \emph{surjective} if for any $n \in N$ there exists $m \in M$ such that $(m,n) \in R$,
\item \emph{injective} if whenever $(m,n), (m',n) \in R$ we have $m=m'$,
\item \emph{cosurjective} if for any $m \in M$ there exists $n \in N$ such that $(m,n) \in R$,
\item \emph{coinjective} if whenever $(m,n), (m,n') \in R$ we have $n=n'$.
\end{itemize}
\end{defn}
Note that, if we think of relations as partially defined multi-valued functions, then cosurjective means ``everywhere defined'' and coinjective means ``single-valued''. Note also that $R$ is cosurjective if and only if $R^t$ is surjective and $R$ is coinjective if and only if $R^t$ is injective.
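As a quick illustration of these four notions (an ad hoc example, not needed later), consider the parabola
\[
R = \{(m,n) \in \mathbb{R} \times \mathbb{R}\ |\ m = n^2\},
\]
viewed as a smooth relation $R: \mathbb{R} \to \mathbb{R}$, i.e. $R: m \mapsto \pm\sqrt{m}$. Then $\dom R = [0,\infty)$ and $\im R = \mathbb{R}$, so $R$ is surjective but not cosurjective; and $R$ is injective (if $m = n^2 = m'$ then $m = m'$) but not coinjective (both $(1,1)$ and $(1,-1)$ lie in $R$).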
\begin{defn}
A smooth relation $R: M \to N$ is said to be a \emph{surmersion} if it is surjective and coinjective, the projection of $R$ to $M$ is a proper embedding, and the projection of $R$ to $N$ is a submersion; it is a \emph{cosurmersion} if $R^t: N \to M$ is a surmersion.
\end{defn}
It is a straightforward check to see that $R$ is a surmersion if and only if $R \circ R^t = id$ and hence a cosurmersion if and only if $R^t \circ R = id$. It is also straightforward to check that a pair $(R,R')$ is always strongly transversal if either $R$ is a surmersion or $R'$ a cosurmersion.
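For instance, the set-theoretic half of the first claim is immediate from the definitions: if $R$ is a surmersion, then
\[
(n,n') \in R \circ R^t \iff \text{there exists $m \in M$ with $(m,n), (m,n') \in R$},
\]
so coinjectivity forces $n = n'$, while surjectivity produces every pair $(n,n)$; thus $R \circ R^t = id$ as a relation. (The properness and submersion hypotheses enter when verifying the strong transversality of the pair $(R^t,R)$.)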
\section{Canonical Relations}
\begin{defn}
A \emph{canonical relation} $L: P \to Q$ from a symplectic manifold $P$ to a symplectic manifold $Q$ is a smooth relation which is lagrangian as a submanifold of $\overline{P} \times Q$.
\end{defn}
\begin{rmk}
The term ``lagrangian relation'' would perhaps be a better choice of words, and would fit in better with the terminology in \cite{WW}. Other sources use ``symplectic relation'' instead. The term ``canonical relation'' is motivated by Hamiltonian mechanics, where symplectomorphisms were classically called ``canonical transformations''.
\end{rmk}
\begin{ex}
As mentioned before, the graph of a symplectomorphism $f: P \to Q$ is a canonical relation $P \to Q$, which by abuse of notation we will also denote by $f$. In particular, given any symplectic manifold $P$, the graph of the identity map will be the canonical relation $id: P \to P$ given by the diagonal in $\overline P \times P$. More generally, the graph of a symplectic \'etale map is a canonical relation.
\end{ex}
\begin{ex}
For any manifold $M$, the \emph{Schwartz transform} on $T^*M$ is the canonical relation
\[
s: T^*M \to \overline{T^*M},\ (p,\xi) \mapsto (p,-\xi)
\]
given by multiplication by $-1$ in the fibers. Alternatively, it is the lagrangian submanifold of $T^*M \times T^*M \cong T^*(M \times M)$ given by the conormal bundle to the diagonal of $M \times M$.
\end{ex}
\begin{ex}
For any symplectic manifold $S$, a canonical relation $pt \to S$ or $S \to pt$ is nothing but a closed lagrangian submanifold of $S$.
\end{ex}
Here is a basic fact, which follows from Proposition \ref{prop:dom}:
\begin{prop}
Suppose that $L: P \to Q$ is a canonical relation with the projection of $L$ to $P$ of constant rank. Then the domain of $L$ is a coisotropic submanifold of $P$.
\end{prop}
The same result also shows---replacing $L$ by $L^t$---that the image of a canonical relation is coisotropic when the projection to $Q$ is a submersion.
We also have as a consequence of Proposition \ref{prop:comp} and of the previous discussion on composing smooth relations:
\begin{prop}
If $L: X \to Y$ and $L': Y \to Z$ are canonical relations with $(L,L')$ strongly transversal, then $L' \circ L$ is a canonical relation.
\end{prop}
In other words, the only obstacle to the composition of canonical relations being well-defined comes from smoothness issues and not from the requirement that the resulting submanifold be lagrangian. Later, we will discuss a way of getting around these smoothness complications.
\begin{rmk}
The composition of canonical relations is well-defined under weaker assumptions than strong transversality. In particular, assuming only transversality, Proposition \ref{prop:comp} implies that the composition will be an \emph{immersed} lagrangian submanifold; this also holds under a weaker \emph{clean intersection} hypothesis. In this thesis, however, we will only need to consider strong transversality.
\end{rmk}
\begin{rmk}
Note that, for general smooth relations $R: X \to Y$ and $R': Y \to Z$, transversality of $(R,R')$ does not necessarily imply that $R' \circ R$ is immersed---this is a key difference between smooth and canonical relations.
\end{rmk}
We note here that when $U \subset X$ is a lagrangian submanifold viewed as a canonical relation $pt \to X$ and the pair $(U,L)$ is strongly transversal, then the image $L(U)$ is just the composition
\begin{center}
\begin{tikzpicture}[>=angle 90]
\node (1) at (0,1) {$pt$};
\node (2) at (2,1) {$X$};
\node (3) at (4,1) {$Y$};
\tikzset{font=\scriptsize};
\draw[->] (1) to node [above] {$U$} (2);
\draw[->] (2) to node [above] {$L$} (3);
\end{tikzpicture}
\end{center}
and is a lagrangian submanifold of $Y$. In this way, a canonical relation $L: X \to Y$ induces a map from (a subset of) the set of lagrangian submanifolds of $X$ to the set of lagrangian submanifolds of $Y$.
\begin{defn}
A canonical relation $L: X \to Y$ is said to be a \emph{reduction} if, as a smooth relation, it is a surmersion; it is a \emph{coreduction} if it is a cosurmersion. We use $L: X \red Y$ to denote that $L$ is a reduction, and $L: X \cored Y$ to denote that $L$ is a coreduction.
\end{defn}
The use of the term ``reduction'' is motivated by the following example.
\begin{ex}(Symplectic Reduction)
Let $(M,\omega)$ be a symplectic manifold and $C$ a coisotropic submanifold. The distribution on $C$ given by $\ker\omega \subset TC$ is called the \emph{characteristic distribution} of $C$. It follows from $\omega$ being closed that $\ker\omega$ is integrable; the induced foliation $C^\perp$ on $C$ will be called the \emph{characteristic foliation} of $C$. If the leaf space $C/C^\perp$ is smooth and the projection $C \to C/C^\perp$ is a submersion, then $C/C^\perp$ is naturally symplectic and the relation
\[
red: M \red C/C^\perp
\]
assigning to an element of $C$ the leaf which contains it is a canonical relation which is a reduction in the sense above. This process will be called \emph{symplectic reduction}.
Symplectic reduction via Hamiltonian actions of Lie groups is a special case of the construction above. Indeed, suppose that $G$ acts properly and freely on $(P,\omega)$ in a Hamiltonian way with equivariant momentum map $\mu: P \to \g^*$. Then $C := \mu^{-1}(0)$ is a coisotropic submanifold of $P$ and the orbits of the induced $G$ action on $\mu^{-1}(0)$ are precisely the leaves of the characteristic foliation. Hence the reduction $C/C^\perp$ in the above sense is the quotient $\mu^{-1}(0)/G$.
\end{ex}
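A standard concrete instance, which we record only as a sketch since the signs depend on conventions: let $S^1$ act on $\mathbb{C}^n$, with its standard symplectic structure, by scalar multiplication. This action is Hamiltonian with momentum map (up to sign)
\[
\mu(z) = \tfrac{1}{2}|z|^2 - \tfrac{1}{2},
\]
so that $\mu^{-1}(0)$ is the unit sphere $S^{2n-1}$, the leaves of the characteristic foliation are the $S^1$-orbits, and the reduction $\mu^{-1}(0)/S^1$ is $\mathbb{CP}^{n-1}$ equipped with a multiple of the Fubini-Study form.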
\begin{ex}
We also note two more well-known examples of symplectic reduction.\footnote{See, for example, \cite{BW}.} Suppose that $X$ is a smooth manifold with $Y \subseteq X$ a submanifold. Then the restricted cotangent bundle $T^*X|_Y$ is a coisotropic submanifold of $T^*X$ whose reduction is symplectomorphic to $T^*Y$. Thus we obtain a reduction $T^*X \twoheadrightarrow T^*Y$.
As a generalization, suppose now that $\F$ is a regular foliation on $Y \subseteq X$ with smooth, Hausdorff leaf space $Y/\F$. Then the conormal bundle $N^*\F$ is a coisotropic submanifold of $T^*X$ (this is in fact equivalent to the distribution $T\F$ being integrable) and its reduction is canonically symplectomorphic to $T^*(Y/\F)$, giving rise to a reduction relation $T^*X \twoheadrightarrow T^*(Y/\F)$. The previous example is the case where $\F$ is the zero-dimensional foliation given by the points of $Y$.
\end{ex}
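In coordinates, a minimal sketch of the first of these examples: take $X = \mathbb{R}^2$ with coordinates $(x,y)$ and let $Y$ be the $x$-axis. Then
\[
T^*X|_Y = \{(x,0,\xi,\eta)\}
\]
is coisotropic in $T^*\mathbb{R}^2$, its characteristic foliation has as leaves the lines in the $\eta$-direction, and the leaf space is $\{(x,\xi)\} \cong T^*Y$; the resulting reduction $T^*\mathbb{R}^2 \red T^*\mathbb{R}$ is the relation $(x,0,\xi,\eta) \mapsto (x,\xi)$.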
Any reduction can be expressed as the composition of a symplectic reduction as above with (the graph of) a symplectic \'etale map. More generally, it follows from Proposition~\ref{prop:lin-factor} that any canonical relation $\Lambda: X \to Y$ can be factored (modulo constant rank issues) into the composition of a reduction, followed by a symplectic \'etale map, followed by a coreduction:
\begin{equation}\label{factor}
\begin{tikzpicture}[>=angle 90,baseline=(current bounding box.center)]
\node (U1) at (0,1) {$X$};
\node (U2) at (3,1) {$Y$};
\node (L1) at (0,-1) {$X_{\dom\Lambda}$};
\node (L2) at (3,-1) {$Y_{\im\Lambda}$};
\tikzset{font=\scriptsize};
\draw[->] (U1) to node [above] {$\Lambda$} (U2);
\draw[->>] (U1) to node [left] {$red$} (L1);
\draw[>->] (L2) to node [right] {$cored$} (U2);
\draw[->] (L1) to node [above] {$\Lambda$} (L2);
\end{tikzpicture}
\end{equation}
where $X_{\dom\Lambda}$ denotes the reduction of $X$ by the coisotropic $\dom \Lambda$ and $Y_{\im\Lambda}$ the reduction of $Y$ by the coisotropic $\im\Lambda$.
\section{The Symplectic Category, Version I}
We are now ready to introduce the category we will be working in. We start with a preliminary definition, which we call a category even though, as explained above, compositions are not always defined.
\begin{defn}
The \emph{symplectic category} is the category $\Symp$ whose objects are symplectic manifolds and whose morphisms are canonical relations.
\end{defn}
Apart from the existence of compositions, it is easy to see that this indeed satisfies the other conditions of being a category. In particular identity morphisms are provided by diagonals and the associativity of composition follows from the usual identification
\[
X \times (Y \times Z) = (X \times Y) \times Z.
\]
To illustrate a use of the symplectic category, we prove the following:
\begin{prop}\label{prop:act}
Let $G$ be a Lie group acting smoothly on a symplectic manifold $P$ and let $\mu: P \to \g^*$ be a smooth, equivariant map with respect to the given action on $P$ and the coadjoint action on $\g^*$. Then the action of $G$ on $P$ is Hamiltonian with momentum map $\mu$ if and only if the relation $T^*G \times P \to P$ given by
\[
((g,\mu(p)),p) \mapsto gp,
\]
where we have identified $T^*G$ with $G \times \g^*$ using left translations, is a canonical relation.
\end{prop}
\begin{proof}
First, it is simple to see that the submanifold $L := \{((g,\mu(p)),p,gp)\}$ has half the dimension of the ambient space $\overline{T^*G} \times \overline{P} \times P$ and that a tangent vector to $L$ looks like
\[
(X, d\mu_p V, V, dp_g X + dg_p V) \text{ where } X \in T_gG, V \in T_pP
\]
under the identification $T^*G \cong G \times \g^*$, and where (abusing notation) $p: G \to P$ and $g: P \to P$ respectively denote the maps
\[
h \mapsto hp \text{ and } q \mapsto gq.
\]
Then, the symplectic form on $\overline{T^*G} \times \overline{P} \times P$ acts as follows:
\begin{align*}
&-\omega_{T^*G} \oplus -\omega_P \oplus \omega_P((X, d\mu_p V, V, dp_g X + dg_p V),(X', d\mu_p V', V', dp_g X' + dg_p V')) \\
&\quad = -\omega_{T^*G}((X, d\mu_p V),(X',d\mu_p V')) - \omega_P(V,V') + \omega_P(dp_g X, dp_g X') \\
&\quad \quad + \omega_P(dp_g X, dg_p V') + \omega_P(dg_p V, dp_g X') + \omega_P(dg_p V, dg_p V') \\
&\quad = -\langle (dL_{g^{-1}})_gX, d\mu_p V' \rangle + \langle (dL_{g^{-1}})_gX', d\mu_p V \rangle - \langle (dL_{g^{-1}})_gX, d\mu'_g X' \rangle - \omega_P(V,V') \\
&\quad \quad + \omega_P(dp_g X, dp_g X') + \omega_P(dp_g X, dg_p V') + \omega_P(dg_p V, dp_g X') + \omega_P(dg_p V, dg_p V'),
\end{align*}
where $L_{g^{-1}}: G \to G$ is left multiplication by $g^{-1}$ and $\mu': G \to \g^*$ is the map $h \mapsto h\cdot\mu(p)$ induced by the coadjoint action of $G$ on $\g^*$.
Now, if the action is Hamiltonian, then it is in particular symplectic so
\[
\omega_P(dg_p V, dg_p V') = \omega_P(V,V').
\]
Thus the above reduces to
\begin{align*}
&[-\langle (dL_{g^{-1}})_gX, d\mu_p V' \rangle + \omega_P(dp_g X, dg_p V')] + [\langle (dL_{g^{-1}})_gX', d\mu_p V \rangle - \omega_P(dp_g X', dg_p V)] \\
& \qquad\qquad\qquad\qquad\quad + [-\langle (dL_{g^{-1}})_gX, d\mu'_g X' \rangle + \omega_P(dp_g X, dp_g X')],
\end{align*}
and each of these vanishes by the momentum map condition, where for the third term we use the fact that $\mu' = \mu \circ p$, which follows from the equivariance of $\mu$. Hence $L$ is lagrangian and so gives a canonical relation.
Conversely, suppose that $L$ is lagrangian. Then the above computation should produce zero, and setting $X' = 0$ and $V = 0$ gives the requirement that
\[
-\langle (dL_{g^{-1}})_gX, d\mu_p V' \rangle + \omega_P(dp_g X, dg_p V') = 0.
\]
This is precisely the momentum map condition, and hence we conclude that the $G$-action is Hamiltonian with momentum map $\mu$.
\end{proof}
The category $\Symp$ thus defined has additional rich structure. In particular, it is a \emph{monoidal category}, where the tensor operation is given by Cartesian product and the unit is given by the symplectic manifold consisting of a single point $pt$. Moreover, $\Symp$ is \emph{symmetric monoidal} and \emph{rigid}, where the dualizing operation is given by $X \mapsto \overline X$ on objects and $L \mapsto L^t$ on morphisms. In addition, if we allow the empty set as a symplectic manifold, then it is simple to check that $\emptyset$ is both an initial and terminal object in this category, and that the categorical product of symplectic manifolds $X$ and $Y$ is the disjoint union $X \sqcup Y$.
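For instance, the rigidity can be made explicit (we only sketch this; the verification is routine): the pairing morphisms
\[
ev: \overline{X} \times X \to pt \qquad \text{and} \qquad coev: pt \to X \times \overline{X}
\]
are both given by the diagonal of $X \times X$, which is lagrangian in $X \times \overline{X}$ precisely because of the sign change in one factor, and one can check directly that the resulting compositions satisfy the usual zig-zag identities for a dual pair.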
\section{The Symplectic Category, Version II}
Now we describe a method for getting around the lack of a well-defined composition of canonical relations in general, by using what are in a sense ``formal'' canonical relations:
\begin{defn}
A \emph{generalized lagrangian correspondence} from $X_0$ to $X_n$ is a chain
\begin{center}
\begin{tikzpicture}[>=angle 90]
\node (1) at (0,1) {$X_0$};
\node (2) at (2,1) {$X_1$};
\node (3) at (4,1) {$\cdots$};
\node (4) at (6,1) {$X_{n-1}$};
\node (5) at (8,1) {$X_n$};
\tikzset{font=\scriptsize};
\draw[->] (1) to node [above] {$L_0$} (2);
\draw[->] (2) to node [above] {$L_1$} (3);
\draw[->] (3) to node [above] {$L_{n-2}$} (4);
\draw[->] (4) to node [above] {$L_{n-1}$} (5);
\end{tikzpicture}
\end{center}
of canonical relations between intermediate symplectic manifolds. The composition of generalized lagrangian correspondences is given simply by concatenation. The \emph{Wehrheim-Woodward (symplectic) category} is the category whose morphisms are equivalence classes of generalized lagrangian correspondence under the equivalence relation generated by the requirement that
\begin{center}
\begin{tikzpicture}[>=angle 90]
\node (1) at (0,1) {$X$};
\node (2) at (2,1) {$Y$};
\node (3) at (4,1) {$Z$};
\tikzset{font=\scriptsize};
\draw[->] (1) to node [above] {$L_0$} (2);
\draw[->] (2) to node [above] {$L_1$} (3);
\end{tikzpicture}
\end{center}
be identified with $L_1 \circ L_0: X \to Z$ whenever the pair $(L_0,L_1)$ is strongly transversal.
\end{defn}
\begin{rmk}
Note that the same construction makes sense for general smooth relations.
\end{rmk}
The Wehrheim-Woodward category is an honest category, used in \cite{WW} to define \emph{quilted Floer homology}. The following result of Weinstein simplifies the types of correspondences which need to be considered when working with this category:
\begin{thrm}[Weinstein, \cite{W1}]
Any generalized lagrangian correspondence
\[
X_0 \to \cdots \to X_n
\]
is equivalent to a two-term lagrangian correspondence $X_0 \cored Y \red X_n$, where the first relation can be taken to be a coreduction and the second a reduction.
\end{thrm}
All compositions that we consider in this thesis will actually be strongly transversal, so that we need not use the full language of generalized lagrangian correspondences. To be precise, we will technically work in the Wehrheim-Woodward category but will only consider single-term correspondences. It would be interesting to know how and if our results generalize to the full Wehrheim-Woodward category.
\section{The Cotangent Functor}
We define a functor $T^*: \Man \to \Symp$, called the \emph{cotangent functor}, as follows. First, $T^*$ assigns to a smooth manifold its cotangent bundle. To a smooth map $f: M \to N$, $T^*$ assigns the canonical relation $T^*f: T^*M \to T^*N$ given by
\[
T^*f: (p,df_p^*\xi) \mapsto (f(p),\xi).
\]
This is nothing but the composition $T^*M \to \overline{T^*M} \to T^*N$ of the Schwartz transform of $T^*M$ followed by the canonical relation given by the conormal bundle to the graph of $f$ in $M \times N$. We call $T^*f$ the \emph{cotangent lift} of $f$. It is a simple check to see that pairs of cotangent lifts are always strongly transversal and that $T^*$ really is then a functor: i.e. $T^*(f \circ g) = T^*f \circ T^*g$ and $T^*(id) = id$. Note also that the same construction makes sense even when $f$ is only a smooth relation.
\begin{ex}
When $\phi: M \to N$ is a diffeomorphism, $T^*\phi: T^*M \to T^*N$ is precisely the graph of the lifted symplectomorphism.
\end{ex}
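\begin{ex}
As a sanity check in coordinates (an ad hoc example): for $f: \mathbb{R} \to \mathbb{R}$, $f(x) = x^2$, we have $df_p^*\xi = 2p\xi$, so the cotangent lift is the canonical relation
\[
T^*f: (p, 2p\xi) \mapsto (p^2, \xi),
\]
i.e. the lagrangian submanifold $\{((p,2p\xi),(p^2,\xi))\}$ of $\overline{T^*\mathbb{R}} \times T^*\mathbb{R}$. Note that $T^*f$ is genuinely a relation and not a map: it is multi-valued at the zero covector over $p = 0$, and the covectors $(0,\eta)$ with $\eta \ne 0$ are not in its domain.
\end{ex}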
The following is easy to verify.
\begin{prop}
The cotangent lift of $f$ is a reduction if and only if $f$ is a surmersion; the cotangent lift of $g$ is a coreduction if and only if $g$ is a cosurmersion.
\end{prop}
Much of this thesis was motivated by the process of applying this functor to various structures arising in $\Man$ and studying the resulting structures in $\Symp$; in particular, in Chapter~\ref{chap:dbl-grpds} we describe the type of structure arising from applying $T^*$ to a Lie groupoid. For now, let us compute the cotangent lift of a group action, which motivates the canonical relation used in Proposition \ref{prop:act}:
\begin{prop}{\label{prop:action-lift}}
Let $\tau: G \times M \to M$ be a smooth action of a Lie group $G$ on a smooth manifold $M$. Then the cotangent lift $T^*\tau$ is given by
\[
((g,\mu(p,\xi)),(p,\xi)) \mapsto g(p,\xi),
\]
where $g(p,\xi)$ denotes the lifted cotangent action of $G$ on $T^*M$, $\mu$ is its standard momentum map: $\mu(p,\xi) = dp_e^*\xi$, and we have identified $T^*G$ with $G \times \g^*$ using left translations.
\end{prop}
\begin{proof}
The cotangent lift $T^*\tau: T^*G \times T^*M \to T^*M$ is given by
\[
((g,p),d\tau_{(g,p)}^*\xi) \mapsto (gp,\xi).
\]
Abusing notation, for any $g \in G$ and $p \in P$ we denote the maps
\[
P \to P,\ q \mapsto gq \text{ and } G \to P,\ h \mapsto hp
\]
by $g$ and $p$ respectively. Then using the fact that
\[
d\tau_{(g,p)} = dp_g \circ pr_1 + dg_p \circ pr_2,
\]
we can write $T^*\tau$ as
\[
((g,dp_g^*\xi),(p,dg_p^*\xi)) \mapsto (gp,\xi),
\]
which is then
\[
((g,dp_g^*(dg^{-1}_{gp})^*\eta),(p,\eta)) \mapsto g(p,\eta)
\]
where $\eta = dg_p^*\xi$ and $g(p,\eta)$ now denotes the lifted cotangent action of $G$ on $T^*M$.
Now, letting $L_g$ denote left multiplication on $G$ by $g$, we have
\begin{align*}
(dL_g)_e^*dp_g^*(dg^{-1}_{gp})^*\eta &= d(g^{-1} \circ p \circ L_g)_e^*\eta \\
&= dp_e^*\eta.
\end{align*}
Thus identifying $T^*G$ with $G \times \g^*$ using left translations:
\[
(g,\gamma) \mapsto (g,(dL_g)_e^*\gamma),
\]
we get that $T^*\tau$ is given by the desired expression.
\end{proof}
\section{Symplectic Monoids and Comonoids}
We can now use the symplectic category to provide simple, ``categorical'' descriptions of various objects encountered in symplectic geometry; in particular, symplectic groupoids and their Hamiltonian actions.
Since $\Symp$ is monoidal, we can speak about \emph{monoid objects} in $\Symp$:
\begin{defn}
A \emph{symplectic monoid} is a monoid object in $\Symp$. Thus, a symplectic monoid is a triple $(S,m,e)$ consisting of a symplectic manifold $S$ together with canonical relations
\[
m: S \times S \to S \text{ and } e: pt \to S,
\]
called the \emph{product} and \emph{unit} respectively, so that
\begin{center}
\begin{tikzpicture}[>=angle 90]
\node (UL) at (0,1) {$S \times S \times S$};
\node (UR) at (3,1) {$S \times S$};
\node (LL) at (0,-1) {$S \times S$};
\node (LR) at (3,-1) {$S$};
\tikzset{font=\scriptsize};
\draw[->] (UL) to node [above] {$id \times m$} (UR);
\draw[->] (UL) to node [left] {$m \times id$} (LL);
\draw[->] (UR) to node [right] {$m$} (LR);
\draw[->] (LL) to node [above] {$m$} (LR);
\end{tikzpicture}
\end{center}
and
\begin{center}
\begin{tikzpicture}[thick]
\node (UL) at (0,1) {$S$};
\node (UR) at (3,1) {$S \times S$};
\node (URR) at (6,1) {$S$};
\node (LR) at (3,-1) {$S$};
\tikzset{font=\scriptsize};
\draw[->] (UL) to node [above] {$e \times id$} (UR);
\draw[->] (UR) to node [right] {$m$} (LR);
\draw[->] (URR) to node [above] {$id \times e$} (UR);
\draw[->] (UL) to node [above] {$id$} (LR);
\draw[->] (URR) to node [above] {$id$} (LR);
\end{tikzpicture}
\end{center}
commute. We also require that all compositions involved be strongly transversal. The first diagram says that $m$ is ``associative'' and the second says that $e$ is a ``left'' and ``right unit''. We often refer to $S$ itself as a symplectic monoid, and use subscripts in the notation for the structure morphisms if we need to be explicit.
\end{defn}
The main example of such a structure is the following:
\begin{ex}
Let $S \rightrightarrows P$ be a symplectic groupoid. Then $S$ together with the groupoid multiplication thought of as a relation $S \times S \to S$ and the canonical relation $pt \to S$ given by the image of the unit embedding $P \to S$ is a symplectic monoid.
\end{ex}
In fact, Zakrzewski gave in \cite{SZ1}, \cite{SZ2} a complete characterization of symplectic groupoids in terms of such structures. Let us recall his description. First, the base space of the groupoid is the lagrangian submanifold $E$ of $S$ giving the unit morphism
\[
e: pt \to S,\ E := e(pt).
\]
The associativity of $m$ and unit properties of $e$ together then imply that there are unique maps $\ell, r: S \to E$ such that
\[
m(\ell(s),s) \ne \emptyset \ne m(s,r(s)) \text{ for all $s$}.
\]
These maps will form the target and source maps of the sought after symplectic groupoid structure.
The above is not enough to recover the symplectic groupoid yet; in particular, the above conditions do not imply that the product $m$ must be single-valued as we would need for a groupoid product. We need one more piece of data and an additional assumption:
\begin{defn}
A \emph{*-structure} on a symplectic monoid $S$ is an anti-symplectomorphism $s: S \to S$ (equivalently a symplectomorphism $s: \overline S \to S$) such that $s^2 = id$ and the diagram
\begin{center}
\begin{tikzpicture}[>=angle 90]
\node (UL) at (0,1) {$\overline{S} \times \overline{S}$};
\node (U) at (3,1) {$\overline{S} \times \overline{S}$};
\node (UR) at (6,1) {$S \times S$};
\node (LL) at (0,-1) {$\overline{S}$};
\node (LR) at (6,-1) {$S,$};
\tikzset{font=\scriptsize};
\draw[->] (UL) to node [above] {$\sigma$} (U);
\draw[->] (U) to node [above] {$s \times s$} (UR);
\draw[->] (UL) to node [left] {$m$} (LL);
\draw[->] (UR) to node [right] {$m$} (LR);
\draw[->] (LL) to node [above] {$s$} (LR);
\end{tikzpicture}
\end{center}
where $\sigma$ is the symplectomorphism exchanging components, commutes. A symplectic monoid equipped with a $*$-structure will be called a \emph{symplectic $*$-monoid}.
A $*$-structure $s$ is said to be \emph{strongly positive}\footnote{This terminology is motivated by quantization, where it is viewed as the analog of the algebraic positivity condition: ``$aa^* > 0$''.} if the diagram
\begin{center}
\begin{tikzpicture}[>=angle 90]
\node (UL) at (0,1) {$S \times \overline{S}$};
\node (UR) at (3,1) {$S \times S$};
\node (LL) at (0,-1) {$pt$};
\node (LR) at (3,-1) {$S,$};
\tikzset{font=\scriptsize};
\draw[->] (UL) to node [above] {$id \times s$} (UR);
\draw[->] (LL) to node [left] {$$} (UL);
\draw[->] (UR) to node [right] {$m$} (LR);
\draw[->] (LL) to node [above] {$e$} (LR);
\end{tikzpicture}
\end{center}
where $pt \to S \times \overline{S}$ is the morphism given by the diagonal of $S \times \overline{S}$, commutes.
\end{defn}
\begin{thrm}[Zakrzewski, \cite{SZ1}, \cite{SZ2}]
Symplectic groupoids are in $1$-$1$ correspondence with strongly positive symplectic $*$-monoids.
\end{thrm}
\begin{rmk}
There is a similar characterization of Lie groupoids as strongly positive $*$-monoids in the category of smooth relations.
\end{rmk}
It is unclear precisely what general symplectic monoids correspond to; in particular, as mentioned before, the product is then not necessarily single-valued.
We also note that a strongly positive $*$-structure which produces out of a symplectic monoid $(S,m,e)$ a symplectic groupoid is in fact unique if it exists: the lagrangian submanifold of $S \times S$ which gives this $*$-structure must equal the composition
\begin{center}
\begin{tikzpicture}[>=angle 90]
\node (1) at (0,1) {$pt$};
\node (2) at (2,1) {$S$};
\node (3) at (4,1) {$S \times S.$};
\tikzset{font=\scriptsize};
\draw[->] (1) to node [above] {$e$} (2);
\draw[->] (2) to node [above] {$m^t$} (3);
\end{tikzpicture}
\end{center}
Indeed, the $*$-structure is nothing but the inverse of the corresponding symplectic groupoid. Thus, such a structure is not really extra data on a symplectic monoid, but can rather be thought of as an extra condition the monoid structure itself must satisfy. Still, we will continue to refer directly to the strongly positive $*$-structure on a symplectic groupoid to avoid having to reconstruct it from the monoid data.
\begin{ex}
Recall that the cotangent bundle of a groupoid has a natural symplectic groupoid structure. As a specific case of the previous example, let us explicitly spell out the symplectic monoid structure on $T^*G$ for a Lie group $G$. The product
\[
T^*G \times T^*G \to T^*G
\]
is obtained by applying the cotangent functor to the usual product $G \times G \to G$; explicitly, this is the relation
\[
((g,(dR_h)_g^*\xi),(h,(dL_g)_h^*\xi)) \mapsto (gh,\xi).
\]
The unit $pt \to T^*G$ is given by the lagrangian submanifold $\g^*$ of $T^*G$ and is obtained by applying $T^*$ to the inclusion $pt \to G$ of the identity element. Finally, the $*$-structure is the symplectomorphism $\overline{T^*G} \to T^*G$ given by
\[
(g,-di_g^*\xi) \mapsto (g^{-1},\xi),
\]
where $i: G \to G$ is inversion, and can be obtained as the composition $T^*G \to \overline{T^*G} \to T^*G$ of the Schwartz transform of $T^*G$ followed by the cotangent lift $T^*i$.
\end{ex}
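\begin{ex}
As a trivial but perhaps clarifying special case of the previous example (a quick sketch): take $G = (\mathbb{R},+)$, so that $T^*G \cong \mathbb{R} \times \mathbb{R}^*$ and left and right translations act trivially on covectors. The structure above becomes
\[
m: ((g,\xi),(h,\xi)) \mapsto (g+h,\xi), \qquad e = \{0\} \times \mathbb{R}^* \cong \g^*, \qquad s: (g,\xi) \mapsto (-g,\xi),
\]
recovering the symplectic groupoid $T^*\mathbb{R} \rightrightarrows \mathbb{R}^*$ whose source and target maps are both $(g,\xi) \mapsto \xi$.
\end{ex}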
Reversing the arrows in the above definition leads to the notion of a \emph{symplectic comonoid}; we will call the structure morphisms of a symplectic comonoid the \emph{coproduct} and \emph{counit}, and will denote them by $\Delta$ and $\varepsilon$ respectively. Similarly, one can speak of a (strongly positive) $*$-structure on a symplectic comonoid.
\begin{ex}\label{ex:std-com}
Let $M$ be a manifold. Then $T^*M$ has a natural symplectic $*$-comonoid structure, obtained by reversing the arrows in its standard symplectic groupoid structure. To be explicit, the coproduct $T^*M \to T^*M \times T^*M$ is
\[
\Delta: (p,\xi+\eta) \mapsto ((p,\xi),(p,\eta)),
\]
which is obtained as the cotangent lift of the standard diagonal map $M \to M \times M$, and the counit $\varepsilon: T^*M \to pt$ is given by the zero section and is obtained as the cotangent lift of the canonical map $M \to pt$. The $*$-structure is the Schwartz transform.
\end{ex}
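For instance, the claim that $\Delta$ is the cotangent lift of the diagonal is a one-line check: if $d: M \to M \times M$ denotes the diagonal map, then $dd_p^*(\xi,\eta) = \xi + \eta$, so the general formula for cotangent lifts gives
\[
T^*d: (p, \xi + \eta) \mapsto ((p,\xi),(p,\eta)),
\]
which is exactly the coproduct above.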
Canonical relations which preserve (co)monoid structures are referred to as (co)monoid morphisms:
\begin{defn}
A \emph{monoid morphism} between symplectic monoids $P$ and $Q$ is a canonical relation $L: P \to Q$ such that
\begin{center}
\begin{tikzpicture}[>=angle 90]
\node (UL) at (0,1) {$P \times P$};
\node (UR) at (3,1) {$Q \times Q$};
\node (LL) at (0,-1) {$P$};
\node (LR) at (3,-1) {$Q$};
\node at (5,0) {$\text{ and }$};
\node (UL2) at (7,1) {$P$};
\node (UR2) at (10,1) {$Q$};
\node (LL2) at (8.5,-1) {$pt$};
\tikzset{font=\scriptsize};
\draw[->] (UL) to node [above] {$L \times L$} (UR);
\draw[->] (UL) to node [left] {$m_P$} (LL);
\draw[->] (UR) to node [right] {$m_Q$} (LR);
\draw[->] (LL) to node [above] {$L$} (LR);
\draw[->] (UL2) to node [above] {$L$} (UR2);
\draw[->] (LL2) to node [left] {$e_P$} (UL2);
\draw[->] (LL2) to node [right] {$e_Q$} (UR2);
\end{tikzpicture}
\end{center}
commute. When the first diagram commutes, we say that $L$ \emph{preserves products}, and when the second commutes, we say that $L$ \emph{preserves units}. Reversing the arrows in the above diagrams leads to the notion of a \emph{comonoid morphism} between symplectic comonoids, and we speak of a canonical relation \emph{preserving coproducts} and \emph{preserving counits}.
\end{defn}
\begin{prop}
The cotangent lift $T^*f: T^*X \to T^*Y$ of a map $f: X \to Y$ is a comonoid morphism with respect to the standard cotangent comonoid structures.
\end{prop}
\begin{proof}
First, the composition $\Delta_Y \circ T^*f$ looks like
\[
(p,df_p^* \xi) \mapsto (f(p),\xi) \mapsto ((f(p),\xi_1),(f(p),\xi_2))
\]
where $\xi = \xi_1 + \xi_2$. The composition $(T^*f \times T^*f) \circ \Delta_X$ looks like
\[
(p,\eta) \mapsto ((p,\eta_1),(p,\eta_2)) \mapsto ((f(p),\gamma_1),(f(p),\gamma_2))
\]
where $\eta_i = df_p^*\gamma_i$ and $\eta = \eta_1 + \eta_2 = df_p^*\gamma_1+df_p^*\gamma_2$. Hence these two compositions agree by the linearity of $df_p^*$.
Similarly, the composition $\varepsilon_Y \circ T^*f$ is
\[
(p,df_p^*\xi) \mapsto (f(p),\xi) \mapsto pt
\]
where $\xi = 0$. Then $df_p^*\xi$ is also $0$ so the composition is $\varepsilon_X$ as required.
\end{proof}
Moreover, as shown in \cite{SZ2}, the above proposition in fact completely characterizes those canonical relations between cotangent bundles which are cotangent lifts.
After considering monoids in $\Symp$, it is natural to want to consider group objects. However, we soon run into the following problem: the diagrams required of a group object $G$ in a category make use of a ``diagonal'' morphism $G \to G \times G$ and a morphism $G \to pt$, but there are no canonical choices for such morphisms in the symplectic category. Indeed, such structures should rather be thought of as coming from a comonoid structure on $G$, and from this point of view the notion of a ``group object'' gets replaced by that of a ``Hopf algebra object'':
\begin{defn}
A \emph{Hopf algebra object} in $\Symp$ consists of a symplectic manifold $S$ together with
\begin{itemize}
\item a symplectic monoid structure $(S,m,e)$,
\item a symplectic comonoid structure $(S,\Delta,\varepsilon)$, and
\item a symplectomorphism $i: S \to S$
\end{itemize}
such that the following diagrams commute, with all compositions strongly transversal:
\begin{itemize}
\item (compatibility between product and coproduct)
\begin{center}
\begin{tikzpicture}[>=angle 90]
\node (U1) at (0,1) {$S \times S$};
\node (U2) at (3,1) {$S$};
\node (U3) at (6,1) {$S \times S$};
\node (L1) at (0,-1) {$S \times S \times S \times S$};
\node (L3) at (6,-1) {$S \times S \times S \times S$};
\tikzset{font=\scriptsize};
\draw[->] (U1) to node [above] {$m$} (U2);
\draw[->] (U2) to node [above] {$\Delta$} (U3);
\draw[->] (U1) to node [left] {$\Delta \times \Delta$} (L1);
\draw[->] (L3) to node [right] {$m \times m$} (U3);
\draw[->] (L1) to node [above] {$id \times \sigma \times id$} (L3);
\end{tikzpicture}
\end{center}
where $\sigma: S \times S \to S \times S$ is the symplectomorphism exchanging components,
\item (compatibilities between product and counit, between coproduct and unit, and between unit and counit respectively)
\begin{center}
\begin{tikzpicture}[>=angle 90]
\node (U1) at (0,1) {$S \times S$};
\node (U2) at (3,1) {$S$};
\node (L1) at (1.5,-1) {$pt$};
\node at (3,0) {$\text{,}$};
\node (U3) at (4,1) {$S$};
\node (U4) at (7,1) {$S \times S$};
\node (L2) at (5.5,-1) {$pt$};
\node at (8,0) {, and };
\node (U5) at (10.5,1) {$S$};
\node (L5) at (9,-1) {$pt$};
\node (L6) at (12,-1) {$pt$};
\tikzset{font=\scriptsize};
\draw[->] (U1) to node [above] {$m$} (U2);
\draw[->] (U1) to node [left] {$\varepsilon \times \varepsilon$} (L1);
\draw[->] (U2) to node [right] {$\varepsilon$} (L1);
\draw[->] (U3) to node [above] {$\Delta$} (U4);
\draw[->] (L2) to node [left] {$e$} (U3);
\draw[->] (L2) to node [right] {$e \times e$} (U4);
\draw[->] (L5) to node [left] {$e$} (U5);
\draw[->] (U5) to node [right] {$\varepsilon$} (L6);
\draw[->] (L5) to node [above] {$id$} (L6);
\end{tikzpicture}
\end{center}
\item (antipode conditions)
\begin{center}
\begin{tikzpicture}[>=angle 90]
\node (U1) at (0,1) {$S \times S$};
\node (U3) at (4,1) {$S \times S$};
\node (L1) at (0,-1) {$S$};
\node (L2) at (2,-1) {$pt$};
\node (L3) at (4,-1) {$S$};
\node at (5.5,0) {$\text{and}$};
\node (U4) at (7,1) {$S \times S$};
\node (U6) at (11,1) {$S \times S$};
\node (L4) at (7,-1) {$S$};
\node (L5) at (9,-1) {$pt$};
\node (L6) at (11,-1) {$S$};
\tikzset{font=\scriptsize};
\draw[->] (U1) to node [above] {$id \times i$} (U3);
\draw[->] (L1) to node [left] {$\Delta$} (U1);
\draw[->] (U3) to node [right] {$m$} (L3);
\draw[->] (L1) to node [above] {$\varepsilon$} (L2);
\draw[->] (L2) to node [above] {$e$} (L3);
\draw[->] (U4) to node [above] {$i \times id$} (U6);
\draw[->] (L4) to node [left] {$\Delta$} (U4);
\draw[->] (U6) to node [right] {$m$} (L6);
\draw[->] (L4) to node [above] {$\varepsilon$} (L5);
\draw[->] (L5) to node [above] {$e$} (L6);
\end{tikzpicture}
\end{center}
\end{itemize}
\end{defn}
\begin{ex}\label{ex:hopf}
Equip the cotangent bundle $T^*G$ of a Lie group $G$ with the comonoid structure of Example \ref{ex:std-com} and the monoid structure coming from its symplectic groupoid structure over $\g^*$. Then these two structures together with the ``antipode'' $T^*i$, where $i: G \to G$ is inversion, make $T^*G$ into a Hopf algebra object in the symplectic category.
Note that we have the same structure on the cotangent bundle of a more general Lie groupoid, but this will not form a Hopf algebra object; in particular, the antipode conditions fail owing to the fact that the groupoid product is not defined on all of $G \times G$.
\end{ex}
We will return to such structures, and generalizations, in the next chapter.
\section{Actions in the Symplectic Category}
We now turn to Hamiltonian actions of symplectic groupoids. As in any monoidal category, we can define the notion of an action of a monoid object:
\begin{defn}
Let $S$ be a symplectic monoid. An \emph{action} of $S$ on a symplectic manifold $Q$ in the symplectic category is a canonical relation $\tau: S \times Q \to Q$ so that the diagrams
\begin{center}
\begin{tikzpicture}[>=angle 90]
\node (UL) at (0,1) {$S \times S \times Q$};
\node (UR) at (3,1) {$S \times Q$};
\node (LL) at (0,-1) {$S \times Q$};
\node (LR) at (3,-1) {$Q$};
\node at (5,0) {$\text{and}$};
\node (UL2) at (7,1) {$Q$};
\node (UR2) at (10,1) {$S \times Q$};
\node (LR2) at (8.5,-1) {$Q,$};
\tikzset{font=\scriptsize};
\draw[->] (UL) to node [above] {$id \times \tau$} (UR);
\draw[->] (UL) to node [left] {$m \times id$} (LL);
\draw[->] (UR) to node [right] {$\tau$} (LR);
\draw[->] (LL) to node [above] {$\tau$} (LR);
\draw[->] (UL2) to node [above] {$e \times id$} (UR2);
\draw[->] (UR2) to node [right] {$\tau$} (LR2);
\draw[->] (UL2) to node [left] {$id$} (LR2);
\end{tikzpicture}
\end{center}
which say that $\tau$ is compatible with the product and unit of $S$ respectively, commute. We again require that all compositions above be strongly transversal.
\end{defn}
\begin{ex}
The cotangent lift $T^*\tau: T^*G \times T^*M \to T^*M$ of Proposition \ref{prop:action-lift} defines an action in the symplectic category. The diagrams in the above definition commute simply because $T^*$ preserves commutative diagrams. As we will see, this action completely encodes the induced lifted Hamiltonian action of $G$ on $T^*M$.
\end{ex}
As a generalization of this example, suppose that we have a Hamiltonian action of $G$ on $P$ with equivariant momentum map $\mu: P \to \g^*$. Then the relation
\[
((g,\mu(p)),p) \mapsto gp
\]
defines an action $T^*G \times P \to P$ of $T^*G$ on $P$ in the symplectic category. This is essentially the content of Proposition \ref{prop:act}. To be precise, given smooth maps $G \times P \to P$ and $\mu: P \to \g^*$, the above relation defines an action in $\Symp$ if and only if $G \times P \to P$ is a Hamiltonian action with momentum map $\mu$; the diagrams which say that $T^*G \times P \to P$ is an action are equivalent to $G \times P \to P$ being an action and $\mu$ being equivariant, and the condition that $T^*G \times P \to P$ be a canonical relation is equivalent to $\mu$ being a momentum map.
This observation generalizes as follows:
\begin{thrm}
A Hamiltonian action of a symplectic groupoid $S \rightrightarrows P$ is the same as an action in the symplectic category of the corresponding symplectic monoid.
\end{thrm}
\begin{proof}
Suppose that $S \rightrightarrows P$ acts in a Hamiltonian way on a symplectic manifold $Q$. Then the graph of the action map $\tau: S \times_P Q \to Q$ is a lagrangian submanifold of $\overline{S} \times \overline{Q} \times Q$. Viewing this as a canonical relation $S \times Q \to Q$, it is then straightforward to check that this defines an action of $S$ on $Q$ in the symplectic category.
Conversely, suppose that $\tau: S \times Q \to Q$ is an action of $S$ on $Q$ in the symplectic category, so that the following diagrams commute:
\begin{center}
\begin{tikzpicture}[>=angle 90]
\node (UL) at (0,1) {$S \times S \times Q$};
\node (UR) at (3,1) {$S \times Q$};
\node (LL) at (0,-1) {$S \times Q$};
\node (LR) at (3,-1) {$Q$};
\node at (5,0) {$\text{and}$};
\node (UL2) at (7,1) {$Q$};
\node (UR2) at (10,1) {$S \times Q$};
\node (LR2) at (8.5,-1) {$Q.$};
\tikzset{font=\scriptsize};
\draw[->] (UL) to node [above] {$id \times \tau$} (UR);
\draw[->] (UL) to node [left] {$m \times id$} (LL);
\draw[->] (UR) to node [right] {$\tau$} (LR);
\draw[->] (LL) to node [above] {$\tau$} (LR);
\draw[->] (UL2) to node [above] {$e \times id$} (UR2);
\draw[->] (UR2) to node [right] {$\tau$} (LR2);
\draw[->] (UL2) to node [left] {$id$} (LR2);
\end{tikzpicture}
\end{center}
We first extract a momentum map $J: Q \to P$. Given $q \in Q$, the commutativity of the second diagram implies that there exists $p \in P$ such that
\[
\tau: (p,q) \mapsto q.
\]
Suppose that $p, p' \in P$ both have this property. Then $(p,p',q,q)$ lies in the composition $\tau \circ (id \times \tau)$ of the first diagram, whose commutativity then requires that this also lie in the composition $\tau \circ (m \times id)$. In particular, this requires that $(p,p')$ lie in the domain of $m$; since $m$ is the product of a symplectic groupoid structure on $S$, this forces $p = p'$, as two units of a groupoid are composable only when they are equal. Thus, the element $p \in P$ such that $\tau: (p,q) \mapsto q$ is unique, and we define $J(q)$ to be this element.
We now claim that the action $\tau: S \times Q \to Q$ is single-valued. Indeed, suppose that
\[
\tau: (s,q) \mapsto q' \text{ and } \tau: (s,q) \mapsto q''.
\]
The associativity of $\tau$ then implies that
\[
\tau: (s^{-1},q') \mapsto q \text{ and } (s^{-1},q'') \mapsto q
\]
where $s^{-1}$ is the inverse of $s$ under the symplectic groupoid structure on $S$. On the one hand, the composition $\tau \circ (m \times id)$ in the first diagram above then gives
\[
\tau \circ (m \times id): (s,s^{-1},q') \mapsto (J(q'),q') \mapsto q'.
\]
Note in particular that $q'$ is the only element to which anything of the form $(s,s^{-1},q')$ can map under this composition. On the other hand, the composition $\tau \circ (id \times \tau)$ gives
\[
\tau \circ (id \times \tau): (s,s^{-1},q') \mapsto (s,q) \mapsto q''.
\]
Thus since these two compositions agree, we must have $q' = q''$, so $\tau$ is single-valued as claimed.
It is then straightforward to check that $\tau: S \times Q \to Q$ together with the momentum map $J: Q \to P$ defines a Hamiltonian action of $S \rightrightarrows P$ on $Q$.
\end{proof}
In particular, the example of the cotangent lift of a group action has the following generalization:
\begin{prop}\label{prop:lifted-cot-action}
Suppose that a groupoid $G$ acts on a manifold $N$, and consider the map defining the action as a relation $\tau: G \times N \to N$. Then the cotangent lift $T^*\tau: T^*G \times T^*N \to T^*N$ defines an action in the symplectic category and hence an action of the symplectic groupoid $T^*G \rightrightarrows A^*$ on $T^*N$, where $A^*$ denotes the dual of the Lie algebroid of $G$.
\end{prop}
\section{Fiber Products}
As we have seen, the symplectic category has many nice properties and applications. However, it also has some defects (apart from the lack of well-defined compositions in general). In particular, consider the following setup.
Suppose that $f: M_1 \to N$ and $g: M_2 \to N$ are surjective submersions. Then we have the fiber product diagram
\begin{center}
\begin{tikzpicture}[thick]
\node (UL) at (0,1) {$M_1 \times_N M_2$};
\node (UR) at (3,1) {$M_2$};
\node (LL) at (0,-1) {$M_1$};
\node (LR) at (3,-1) {$N$};
\tikzset{font=\scriptsize};
\draw[->] (UL) to node [above] {$pr_2$} (UR);
\draw[->] (UL) to node [left] {$pr_1$} (LL);
\draw[->] (UR) to node [right] {$g$} (LR);
\draw[->] (LL) to node [above] {$f$} (LR);
\end{tikzpicture}
\end{center}
in $\Man$. Applying the cotangent functor produces the diagram
\begin{center}
\begin{tikzpicture}[thick]
\node (UL) at (0,1) {$T^*(M_1 \times_N M_2)$};
\node (UR) at (4,1) {$T^*M_2$};
\node (LL) at (0,-1) {$T^*M_1$};
\node (LR) at (4,-1) {$T^*N$};
\tikzset{font=\scriptsize};
\draw[->] (UL) to node [above] {$T^*pr_2$} (UR);
\draw[->] (UL) to node [left] {$T^*pr_1$} (LL);
\draw[->] (UR) to node [right] {$T^*g$} (LR);
\draw[->] (LL) to node [above] {$T^*f$} (LR);
\end{tikzpicture}
\end{center}
in the symplectic category. In this section, we are interested in the extent to which this can be viewed as a ``fiber-product'' diagram in $\Symp$. The immediate motivation for considering this is the case where $f$ and $g$ are the source and target of a Lie groupoid, in which case the fiber product $G \times_M G$ consists of the pairs of composable arrows.
To naively construct the fiber product of the canonical relations $T^*f$ and $T^*g$ above, we might try to proceed as in the case of actual maps and form
\[
T^*M_1 \times^{naive}_{T^*N} T^*M_2 := \{((p,\xi),(q,\eta)) \in T^*M_1 \times T^*M_2\ |\ T^*f(p,\xi) = T^*g(q,\eta)\}.
\]
(Recall that if $f,g$ are surjective submersions, $T^*f$ and $T^*g$ are reductions so that they are, in particular, single-valued.) The condition above can equivalently be expressed as
\[
T^*M_1 \times^{naive}_{T^*N} T^*M_2 = (T^*g)^t \circ (T^*f).
\]
Note that, since $T^*f$ is a reduction, this composition is strongly transversal and so the result is a smooth manifold.
The question is now whether or not this manifold is isomorphic to $T^*(M_1 \times_N M_2)$. One case in which this indeed works is when $T^*f$ and $T^*g$ are actual maps:
\begin{prop}
Consider the category whose objects are symplectic manifolds and morphisms are symplectic maps. Then the above diagram is a fiber product diagram in this category.
\end{prop}
\begin{proof}
When considering only symplectic maps, the above ``naive'' fiber product is an actual fiber product, so we need only show that it is isomorphic to $T^*(M_1 \times_N M_2)$.
Consider the inclusion of the naive fiber product into $\overline{T^*M_1} \times T^*M_2$:
\[
(T^*g)^t \circ (T^*f) \hookrightarrow \overline{T^*M_1} \times T^*M_2.
\]
Composing this with the canonical relations
\[
\overline{T^*M_1} \times T^*M_2 \to T^*(M_1 \times M_2) \red T^*(M_1 \times_N M_2),
\]
where the first is induced by the Schwartz transform of $T^*M_1$ and the second is obtained by reducing the coisotropic $T^*(M_1 \times M_2)|_{M_1 \times_N M_2}$, gives a smooth relation
\[
(T^*g)^t \circ (T^*f) \to T^*(M_1 \times_N M_2).
\]
It is then simple to check that this relation is actually a map, and indeed bijective. Thus $T^*(M_1 \times_N M_2)$ is the fiber product of $T^*f$ and $T^*g$ in the category of symplectic maps.
\end{proof}
However, the situation is not so good in general. First, the composition
\[
(T^*g)^t \circ (T^*f) \to T^*(M_1 \times_N M_2)
\]
of the above proposition, while still a map, is only an inclusion in general. Second, the naive fiber product $(T^*g)^t \circ (T^*f)$ does not have an obvious symplectic structure and so is not necessarily an object in $\Symp$.
Instead, we can abandon the idea of trying to use the naive fiber product as above, and ask whether $T^*(M_1 \times_N M_2)$ is still the correct fiber product in the symplectic category. One quickly realizes again that this is not the case, and that fiber products do not exist in the symplectic category in general except under special circumstances---say the fiber product of symplectomorphisms. Indeed, one can check that even the simplest type of canonical relations---morphisms to a point---do not admit a fiber product. This is true even for the category of set-theoretic relations between sets; in other words, this is a drawback of working with relations in general, not something specific to smooth or canonical relations.
This lack of fiber products in general, and in particular in the Lie groupoid case mentioned above, was one issue that led to some of the ideas considered in the next chapter. Another idea, to which we return in Chapter~\ref{chap:stacks}, is to dispense with fiber products altogether and instead work with simplicial objects.
\section{Simplicial Symplectic Manifolds}
\begin{defn}
A \emph{simplicial symplectic manifold} is a simplicial object in the symplectic category, i.e. a functor
\[
P: \Delta^{op} \to \Symp,
\]
where $\Delta$ is the category whose objects are sets $[n] := \{0,1,\ldots,n\}$ and morphisms $[n] \to [m]$ are order-preserving maps.
\end{defn}
In concrete terms, the above definition boils down to the following: for each $n \ge 0$ we have a symplectic manifold $P_{n} := P([n])$, canonical relations
\begin{center}
\begin{tikzpicture}[>=angle 90]
\def\A{.3};
\def\B{1.4};
\def\C{2.5};
\node at (0,0) {$\cdots$};
\node at (1.1,0) {$P_2$};
\node at (2.2,0) {$P_1$};
\node at (3.3,0) {$P_0$};
\tikzset{font=\scriptsize};
\draw[->] (\A,+.18) -- (\A+.5,0+.18);
\draw[->] (\A,+.06) -- (\A+.5,+.06);
\draw[->] (\A,-.06) -- (\A+.5,-.06);
\draw[->] (\A,-.18) -- (\A+.5,0-.18);
\draw[->] (\B,+.12) -- (\B+.5,+.12);
\draw[->] (\B,0) -- (\B+.5,0);
\draw[->] (\B,-.12) -- (\B+.5,-.12);
\draw[->] (\C,0+.06) -- (\C+.5,0+.06);
\draw[->] (\C,0-.06) -- (\C+.5,0-.06);
\end{tikzpicture}
\end{center}
called the \emph{face} morphisms, and canonical relations
\begin{center}
\begin{tikzpicture}[>=angle 90]
\def\A{.3};
\def\B{1.4};
\def\C{2.5};
\node at (0,0) {$\cdots$};
\node at (1.1,0) {$P_2$};
\node at (2.2,0) {$P_1$};
\node at (3.3,0) {$P_0$};
\tikzset{font=\scriptsize};
\draw[<-] (\A,+.12) -- (\A+.5,0+.12);
\draw[<-] (\A,0) -- (\A+.5,0);
\draw[<-] (\A,-.12) -- (\A+.5,0-.12);
\draw[<-] (\B,+.06) -- (\B+.5,+.06);
\draw[<-] (\B,-.06) -- (\B+.5,-.06);
\draw[<-] (\C,0) -- (\C+.5,0);
\end{tikzpicture}
\end{center}
called the \emph{degeneracy} morphisms, so that the following holds (which we recall from \cite{CZ}): denoting the face morphisms coming out of $P_n$ by $d^n_i$ and the degeneracy morphisms coming out of $P_n$ by $s^n_i$, we require that
\[
d^{n-1}_i d^n_j = d^{n-1}_{j-1} d^n_i \ \text{ for } i < j, \qquad s^n_i s^{n-1}_j = s^n_{j+1} s^{n-1}_i \ \text{ for } i \le j,
\]
\[
d^n_i s^{n-1}_j = s^{n-2}_{j-1} d^{n-1}_i \ \text{ for } i < j, \qquad d^n_j s^{n-1}_j = id = d^n_{j+1} s^{n-1}_j, \qquad d^n_i s^{n-1}_j = s^{n-2}_j d^{n-1}_{i-1} \ \text{ for } i > j+1.
\]
These identities simply say that the $d^n_i$ and $s^n_i$ behave as if they were the face and degeneracy maps of simplices. We also require that all compositions involved be strongly transversal, and will denote the above simplicial symplectic manifold simply by $P_\bullet$.
\begin{ex}\label{ex:triv-simp-symp}
For any symplectic manifold $P$, we have the trivial simplicial symplectic manifold
\begin{center}
\begin{tikzpicture}[>=angle 90]
\def\A{.3};
\def\B{1.4};
\def\C{2.5};
\node at (0,0) {$\cdots$};
\node at (1.1,0) {$P$};
\node at (2.2,0) {$P$};
\node at (3.3,0) {$P$};
\tikzset{font=\scriptsize};
\draw[->] (\A,+.18) -- (\A+.5,0+.18);
\draw[->] (\A,+.06) -- (\A+.5,+.06);
\draw[->] (\A,-.06) -- (\A+.5,-.06);
\draw[->] (\A,-.18) -- (\A+.5,0-.18);
\draw[->] (\B,+.12) -- (\B+.5,+.12);
\draw[->] (\B,0) -- (\B+.5,0);
\draw[->] (\B,-.12) -- (\B+.5,-.12);
\draw[->] (\C,0+.06) -- (\C+.5,0+.06);
\draw[->] (\C,0-.06) -- (\C+.5,0-.06);
\end{tikzpicture}
\end{center}
where all relations are $id$.
\end{ex}
\begin{ex}\label{ex:cot-simp-symp}
Let $G \rightrightarrows M$ be a Lie groupoid and form the simplicial nerve\footnote{We refer to \cite{Meh} for the definition of the simplicial nerve of a groupoid, and related simplicial constructions.}
\begin{center}
\begin{tikzpicture}[>=angle 90]
\def\A{.3};
\def\B{2.6};
\def\C{3.7};
\node at (0,0) {$\cdots$};
\node at (1.7,0) {$G \times_M G$};
\node at (3.4,0) {$G$};
\node at (4.6,0) {$M.$};
\tikzset{font=\scriptsize};
\draw[->] (\A,+.18) -- (\A+.5,0+.18);
\draw[->] (\A,+.06) -- (\A+.5,+.06);
\draw[->] (\A,-.06) -- (\A+.5,-.06);
\draw[->] (\A,-.18) -- (\A+.5,0-.18);
\draw[->] (\B,+.12) -- (\B+.5,+.12);
\draw[->] (\B,0) -- (\B+.5,0);
\draw[->] (\B,-.12) -- (\B+.5,-.12);
\draw[->] (\C,0+.06) -- (\C+.5,0+.06);
\draw[->] (\C,0-.06) -- (\C+.5,0-.06);
\end{tikzpicture}
\end{center}
The middle morphism $m: G \times_{M} G \to G$ is multiplication, the morphisms $G \rightrightarrows M$ are the source and target, and $e: M \to G$ is the identity embedding.
Applying $T^{*}$ gives the simplicial symplectic manifold
\begin{center}
\begin{tikzpicture}[>=angle 90]
\def\A{.3};
\def\B{3.3};
\def\C{4.8};
\node at (0,0) {$\cdots$};
\node at (2.1,0) {$T^*(G \times_M G)$};
\node at (4.3,0) {$T^*G$};
\node at (5.9,0) {$T^*M.$};
\tikzset{font=\scriptsize};
\draw[->] (\A,+.18) -- (\A+.5,0+.18);
\draw[->] (\A,+.06) -- (\A+.5,+.06);
\draw[->] (\A,-.06) -- (\A+.5,-.06);
\draw[->] (\A,-.18) -- (\A+.5,0-.18);
\draw[->] (\B,+.12) -- (\B+.5,+.12);
\draw[->] (\B,0) -- (\B+.5,0);
\draw[->] (\B,-.12) -- (\B+.5,-.12);
\draw[->] (\C,0+.06) -- (\C+.5,0+.06);
\draw[->] (\C,0-.06) -- (\C+.5,0-.06);
\end{tikzpicture}
\end{center}
Alternatively, we can view the relation $T^*m$ as a canonical relation $T^*G \times T^*G \to T^*G$, in which case we can form a simplicial symplectic manifold of the form
\begin{center}
\begin{tikzpicture}[>=angle 90]
\def\A{.3};
\def\B{3.3};
\def\C{4.8};
\node at (0,0) {$\cdots$};
\node at (2.1,0) {$T^*G \times T^*G$};
\node at (4.3,0) {$T^*G$};
\node at (5.9,0) {$T^*M,$};
\tikzset{font=\scriptsize};
\draw[->] (\A,+.18) -- (\A+.5,0+.18);
\draw[->] (\A,+.06) -- (\A+.5,+.06);
\draw[->] (\A,-.06) -- (\A+.5,-.06);
\draw[->] (\A,-.18) -- (\A+.5,0-.18);
\draw[->] (\B,+.12) -- (\B+.5,+.12);
\draw[->] (\B,0) -- (\B+.5,0);
\draw[->] (\B,-.12) -- (\B+.5,-.12);
\draw[->] (\C,0+.06) -- (\C+.5,0+.06);
\draw[->] (\C,0-.06) -- (\C+.5,0-.06);
\end{tikzpicture}
\end{center}
where the degree $n$ piece for $n > 1$ is $(T^*G)^n$. Now the first and third degree $2$ face morphisms $T^*G \times T^*G \to T^*G$ are the ``projections'' obtained by taking the cotangent lifts of the smooth relations
\[
(g,h) \mapsto g \text{ and } (g,h) \mapsto h, \text{ for $g,h \in G$ such that } r(g) = \ell(h).
\]
A simple calculation shows that these are respectively equal to the compositions
\begin{center}
\begin{tikzpicture}[>=angle 90]
\node (1) at (0,1) {$T^*G \times T^*G$};
\node (2) at (4,1) {$T^*G \times T^*M$};
\node (3) at (8.5,1) {$T^*G \times T^*G$};
\node (4) at (12,1) {$T^*G$};
\tikzset{font=\scriptsize};
\draw[->] (1) to node [above] {$id \times T^*\ell$} (2);
\draw[->] (2) to node [above] {$id \times (T^*r)^t$} (3);
\draw[->] (3) to node [above] {$(T^*\Delta)^t$} (4);
\end{tikzpicture}
\end{center}
and
\begin{center}
\begin{tikzpicture}[>=angle 90]
\node (1) at (0,1) {$T^*G \times T^*G$};
\node (2) at (4,1) {$T^*G \times T^*M$};
\node (3) at (8.5,1) {$T^*G \times T^*G$};
\node (4) at (12,1) {$T^*G.$};
\tikzset{font=\scriptsize};
\draw[->] (1) to node [above] {$T^*r \times id$} (2);
\draw[->] (2) to node [above] {$(T^*\ell)^t \times id$} (3);
\draw[->] (3) to node [above] {$(T^*\Delta)^t$} (4);
\end{tikzpicture}
\end{center}
The rest of the face and degeneracy maps can be similarly described.
\end{ex}
\begin{ex}\label{ex:bad-simp-symp}
Given a Lie groupoid $G$, we may also attempt to construct a simplicial symplectic manifold of the form
\begin{center}
\begin{tikzpicture}[>=angle 90]
\def\A{.3};
\def\B{3.3};
\def\C{4.8};
\node at (0,0) {$\cdots$};
\node at (2.1,0) {$T^*G \times T^*G$};
\node at (4.3,0) {$T^*G$};
\node at (5.6,0) {$pt$};
\tikzset{font=\scriptsize};
\draw[->] (\A,+.18) -- (\A+.5,0+.18);
\draw[->] (\A,+.06) -- (\A+.5,+.06);
\draw[->] (\A,-.06) -- (\A+.5,-.06);
\draw[->] (\A,-.18) -- (\A+.5,0-.18);
\draw[->] (\B,+.12) -- (\B+.5,+.12);
\draw[->] (\B,0) -- (\B+.5,0);
\draw[->] (\B,-.12) -- (\B+.5,-.12);
\draw[->] (\C,0+.06) -- (\C+.5,0+.06);
\draw[->] (\C,0-.06) -- (\C+.5,0-.06);
\end{tikzpicture}
\end{center}
Indeed, we simply consider the previous example in the case where $G \rightrightarrows pt$ is a Lie group and note that all the resulting structure morphisms make sense for more general groupoids.
However, some of these compositions turn out to not be strongly transversal (such as the composition of the degeneracy $pt \to T^*G$ with either face map $T^*G \to pt$), and so this is not an allowed simplicial object in our category. We will see in Chapter $5$ the problems that non-strongly transversal compositions may cause.
\end{ex}
\begin{ex}\label{ex:action-grpd}
For a Hamiltonian action of $G$ on $P$, form the corresponding action $\tau: T^{*}G \times P \to P$ in $\Symp$. Then we have a simplicial symplectic manifold
\begin{center}
\begin{tikzpicture}[>=angle 90]
\def\A{.3};
\def\B{4};
\def\C{6.3};
\node at (0,0) {$\cdots$};
\node at (2.4,0) {$T^*G \times T^*G \times P$};
\node at (5.4,0) {$T^*G \times P$};
\node at (7,0) {$P$};
\tikzset{font=\scriptsize};
\draw[->] (\A,+.18) -- (\A+.5,0+.18);
\draw[->] (\A,+.06) -- (\A+.5,+.06);
\draw[->] (\A,-.06) -- (\A+.5,-.06);
\draw[->] (\A,-.18) -- (\A+.5,0-.18);
\draw[->] (\B,+.12) -- (\B+.5,+.12);
\draw[->] (\B,0) -- (\B+.5,0);
\draw[->] (\B,-.12) -- (\B+.5,-.12);
\draw[->] (\C,0+.06) -- (\C+.5,0+.06);
\draw[->] (\C,0-.06) -- (\C+.5,0-.06);
\end{tikzpicture}
\end{center}
where the face maps in degree $2$ are
\[
id \times \tau,\ T^{*}m \times id,\ G \times id \times id,
\]
the face maps in degree $1$ are $\tau$ and $G \times id$, the degeneracy $P \to T^{*}G \times P$ is $\g^{*} \times id$, and the other morphisms can be guessed from these.
\end{ex}
\begin{defn}
A \emph{$*$-structure} on a simplicial symplectic manifold $P_\bullet$ is a simplicial isomorphism
\[
I: P_\bullet \to (P_\bullet)^{op}
\]
such that $I^2 = id$, where $(P_\bullet)^{op}$ denotes the simplicial symplectic manifold obtained by reversing the order of the face and degeneracy maps in $P_\bullet$.
\end{defn}
The previous examples all admit such structures: for a trivial simplicial symplectic manifold it is the identity morphism, for the simplicial structures obtained from the cotangent bundle of a groupoid it is the simplicial morphism induced by the cotangent lifts of the maps
\[
(g_1,\ldots,g_n) \mapsto (g_n^{-1},\ldots,g_1^{-1}),
\]
and in the last example it is induced by a combination of the previous one and the morphism
\[
((g,-Ad_{g^{-1}}^*\xi + \mu(p)), p) \mapsto ((g^{-1},\xi), gp)
\]
in degree $2$ where $Ad^*$ denotes the coadjoint action of $G$ on $\g^*$.
\section{Towards Groupoids in $\Symp$}
Let us recall our original motivation: to understand the object $T^*G \rightrightarrows T^*M$ in $\Symp$ resulting from applying $T^*$ to a groupoid $G \rightrightarrows M$ in $\Man$. The notion of a \emph{groupoid object} in a category requires the existence of certain fiber products; indeed, the groupoid product is defined only as a map on a certain fiber product $G \times_M G$. The analogous construction in the symplectic category would require us to view $T^*(G \times_M G)$ as a fiber product of certain canonical relations.
As we have seen in the previous sections, making this precise is not in general possible. One way around this will be to work with the simplicial objects of the previous section---a point of view we consider briefly in the next chapter but more so in Chapter $5$.
However, there is another approach we can take: working with relations instead of maps, we can consider groupoid products $G \times_M G \to G$ as relations $G \times G \to G$, and hence we can consider the corresponding lift $T^*G \times T^*G \to T^*G$ as the ``groupoid product'' of $T^*G \rightrightarrows T^*M$. All commutative diagrams required in the definition of a groupoid object then hold, with one final caveat: these diagrams require the use of a ``diagonal'' morphism $T^*G \to T^*G \times T^*G$, which is extra data in the symplectic category (due to the lack of fiber products) as opposed to a canonical construction in $\Man$.
Taking this extra structure into account will lead us to the notion of a \emph{symplectic hopfoid} in the next chapter. As we will see, the compatibility between the ``product'' $T^*G \times T^*G \to T^*G$ and ``diagonal'' $T^*G \to T^*G \times T^*G$ required to have a ``groupoid-like'' structure is no accident: it reflects the fact that $T^*G$ is not simply a symplectic groupoid, but rather a \emph{symplectic double groupoid}. | {"config": "arxiv", "file": "1105.2592/chap2.tex"} |
TITLE: What determines radio frequency for a diode?
QUESTION [1 upvotes]: I have a very simple working radio as shown in this schematic:
This radio receives one radio station.
My question is, what determines the radio frequency that this simple radio is tuned in to?
REPLY [2 votes]: Years ago (1960s) when I worked for the BBC, part of my job was to answer calls from members of the public. One man rang to tell me that he could hear a BBC radio programme from his bed! I asked if he had a metal bedstead, which he did. He lived in Droitwich, where the BBC Long Wave transmitting station was situated. [200 kHz Light Programme in those days. Later the LW transmitter was used to broadcast Radio 4 on a slightly different frequency.] The contacts between different strands of the bedstead would have acted like the coherer of early radio receivers, which predated the quartz "cat's whisker" diode, and I presume there was enough received energy to make parts of the bedstead matrix vibrate. | {"set_name": "stack_exchange", "score": 1, "question_id": 301343}
TITLE: Encoding sets of permutations with a generating set and a set of excluded elements
QUESTION [10 upvotes]: Polynomial-time algorithms are known for finding generating sets of permutation groups, which is interesting since we can then represent those groups succinctly without giving up on polynomial-time algorithms for answering many interesting questions related to these groups.
However, we may sometimes be interested in a set $R$ of permutations that does not form a group, so that set would be represented by $R=\langle S\rangle \setminus T$, where $\langle S\rangle$ is the group generated by a set $S$ of generators and $T$ is a set of permutations that are not in $R$, instead of just $\langle S\rangle$.
Has any work been done on computing such an encoding in the form of a pair $\{S,T\}$, possibly with the additional, natural goal of minimising $|S|+|T|$?
REPLY [1 votes]: If you are storing random permutations, each included with probability ${1\over2}$, then you are going to need $\log_{2}(n!)$ bits per permutation; Kolmogorov complexity dictates it.
If the distribution is non-random it depends.
To understand the state space it might help to look at http://oeis.org/A186202 , the size of any min dominating set over $S_{n}$ using a monogenic inclusion relation between permutations (ignoring the identity which is in all subgroups).
You can encode the relevant prime order permutations in $\log_{2}( OEIS\_A186202(n) )$ bits each. That will give you some savings over the usual $\log_{2}(n!)$ needed for a random permutation. | {"set_name": "stack_exchange", "score": 10, "question_id": 21298}
TITLE: Influences on wavefunction path analysis
QUESTION [1 upvotes]: I was looking at simulations of a wave going through a slit. When the wavelength was much smaller than the slit width, the wave went through the slit and kept going straight like a laser beam. But when the wavelength was larger than the slit width, the wave spread in all directions when going through the slit. My question is, if a single moving neutron is ejected toward a single slit, and the neutron’s De Broglie wavelength is long compared to the slit width, then can the neutron go through the slit and take a direct right turn?
I ask this because the neutron has mass inertia, and therefore it seems odd for the neutron to take a trajectory path of going forward and then immediately turning right. Imagine you are traveling in a vehicle at some velocity, and then instantly turning right. The g-forces would be incredible. A wave, on the other hand, can wrap around corners. So I was thinking that the neutron's mass inertia plays a role in how sharp of an angle it can turn. I know mass inertia plays a role in the De Broglie wavelength in terms of momentum, but I'm not referring to that. I know the De Broglie wavelength changes relative to the neutron's velocity. I'm referring to the neutron’s ability or inability to take any path that a wave at the De Broglie wavelength can follow. Also I understand that the De Broglie wave is not considered to be a wave like sound such that it's made of atoms, gas, or known particles. I'm referring to a wavefunction analysis.
Another example of why I'm asking this question is the role gravity can play. For example, if the slit experiment is turned sideways such that the neutron is traveling parallel to the planet's surface, gravity would change the trajectory path. Nothing changed in the experiment except for the addition of gravity. So it seemed to me that path probability analysis can require more than just wave analysis. I’d imagine static magnetic fields from a permanent magnet or electrically charged plates could play a role as well.
In short, I was wondering if the particle’s own inertia, gravity and other fields could significantly affect the probability path analysis. Maybe the proper way is to first do a wavefunction analysis, and then apply inertia forces, gravity, etc. Although I'm most interested in the first example of how inertia plays a role in the trajectory probability path, perhaps even preventing the particle from making too sharp of turns. If it makes any difference, I'm interested in how Many Worlds Interpretations handles this. Thanks!
REPLY [0 votes]: If I understand your question correctly, you are mainly concerned about the conservation of momentum. The inertia of the particle would not give any contradictions if the particle interacts with something else, which can then change the momentum of the particle. So when we consider a particle propagating (as a wave) through a slit and we end up detecting the particle at some crazy angle behind the screen, then clearly the particle must have undergone some change in momentum. Yes, indeed. If one went to a great deal of trouble, one may be able to measure the amount of momentum picked up by the screen that contains the slit and find that it matches the momentum that is missing. In other words, if the particle had to undergo a big change in momentum due to the diffraction by the slit, then the screen that contains the slit should pick up the deficit in momentum so that the sum of the momenta in the end equals the initial momentum of the particle.
Just to clear up one point, we need to remember that up until we actually detect the particle, it could be anywhere. All we know is that while it propagates through the slit it behaves like a wave. The mass of the particle would affect how it propagates in that the wavefunction would be a solution of a wave equation with a mass (Klein-Gordon or Dirac equation) rather than one without a mass (Helmholtz or Weyl equation). | {"set_name": "stack_exchange", "score": 1, "question_id": 276928}
TITLE: Why does this mathematical series plateau below $\frac{1}{4}$ then have runaway growth?
QUESTION [1 upvotes]: This recursive sequence plateaus when $x \leq 1/4$, but exhibits runaway growth when $x > 1/4$. Why?
x = 1/4   # try values on either side of 1/4
y = 0
for i in range(10000000):
    y = (x + y)**2   # the recurrence y_{i+1} = (x + y_i)^2
print(y)  # stays bounded for x <= 1/4; blows up (float overflow) for x > 1/4
REPLY [3 votes]: As a remark, you seem to implicitely only allow non-negative $x$ (setting say $x=-1000$ certainly leads to a runaway growth of $y$), so I'll keep that restriction.
So why is there no runaway growth for $0 \le x \le \frac14$?
As Matti.P wrote in a comment, your loop code would mathematically be described as a sequence: $y_0=0;\; y_{i+1}=(x+y_i)^2$ for all $i=0,1,2,\ldots$.
You start with $y_0 = 0 \le \frac14$.
Whenever you do the next loop iteration, and start with an $y_i$ that fulfills $0 \le y_i \le \frac14$, then the next $y_{i+1}$ fulfills the same condition!
Let's prove it. From
$$ 0 \le x+y_i \le \frac14 + \frac14 =\frac12,$$
which comes from using $0 \le x \le \frac14$ and $0 \le y_i \le \frac14$, we can immediately conclude, as the function $f(t)=t^2$ is increasing for $0 \le t$, that
$$ 0^2=0 \le y_{i+1}=(x+y_i)^2 \le \frac14 = \left(\frac12\right)^2.$$
So if $0 \le x \le \frac14$ and because we start with an $y$ value that is between $0$ and $\frac14$, all the following $y$-values will also stay in that interval.
To see why there is runaway growth for $x > \frac14$, we need a little bit more theory.
First, for all $x>0$ the sequence of $(y_i)$ will be increasing. It's true for the very first step from $y_0$ to $y_1$:
$$y_0=0; y_1=x^2 > 0.$$
And it keeps true from one step to the next:
if $0 \le y_i < y_{i+1}$, then
$$y_{i+1}=(x+y_i)^2 < (x+y_{i+1})^2 = y_{i+2},$$
again using the (strict) monotonicity of $f(t)=t^2$ for $t \ge 0$.
So $(y_n)$ is increasing for $x>0$. Such sequences can only have 2 behaviours:
they converge to a limit, or
they increase beyond all bounds and "converge" to $+\infty$.
We've seen above that for $0 \le x \le \frac14$ there is no increase beyond all bounds, so it must converge to a limit $l(x)$. How can this limit be calculated?
Well if we know (or assume) that $\lim_{i\to\infty}y_i=l(x)$, then we know that
$$l(x)=\lim_{i\to\infty}y_i = \lim_{i\to\infty}y_{i+1} = \lim_{i\to\infty} (x+y_i)^2 =(x+l(x))^2,$$
where the last equality used the fact that the function $g(t)=(x+t)^2$ is continuous ($x$ is just a constant for that function). As we can see, this yields a quadratic equation for $l(x)$, which you can solve the usual way and get
$$l(x)=\frac{1-2x}2 \pm \sqrt{\frac{1-4x}4}$$
Which of the 2 values is actually the limit depends on $y_0$, but this is not our concern here. Note that the term under the square root is $\frac{1-4x}4$. That means the square root exists (in real numbers) only when $x \le \frac14$. If $x >\frac14$, the square root is complex and the only solutions to the equation are 2 non-real numbers.
But obviously, our sequence contains only real numbers, so the limit (if it exists) must be a real number. So the only conclusion we can draw is that for $x > \frac14$, the sequence $(y_n)$ has no limit.
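If you want to see this numerically, here is a quick check (a minimal sketch; the test points 0.2, 0.25 and 0.26 are arbitrary choices, and the formula used is the smaller root derived above, which is the one reached from $y_0=0$):

import math

def iterate(x, steps=1000):
    y = 0.0
    for _ in range(steps):
        y = (x + y) ** 2
        if y > 1e6:  # clearly past every fixed point: runaway growth
            return math.inf
    return y

for x in (0.2, 0.25, 0.26):
    y = iterate(x)
    if y == math.inf:
        print(f"x = {x}: runaway growth")
    else:
        limit = (1 - 2 * x) / 2 - math.sqrt((1 - 4 * x) / 4)
        print(f"x = {x}: y_1000 = {y:.6f}, predicted limit = {limit:.6f}")

For $x=0.25$ the convergence is very slow (the parabola is tangent to the diagonal there), so $y_{1000}$ is still visibly below the limit $0.25$.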
As I wrote above, for increasing sequences there are only 2 kinds of behaviours, and convergence to a (real) limit has just been ruled out for $x > \frac14$. So it has to be the other one, which is unbounded growth. | {"set_name": "stack_exchange", "score": 1, "question_id": 3846008}
TITLE: Rebasing half spin from $|u\rangle$ and $|d\rangle$ to $|+\rangle$ and $|-\rangle$
QUESTION [0 upvotes]: I want to know if my logic in rebasing the pure $|u\rangle$ and $|d\rangle$ to $|+\rangle$ and $|-\rangle$ is correct:
$$|+\rangle=\frac{|u\rangle+|d\rangle}{\sqrt{2}}\implies|u\rangle=\sqrt{2}|+\rangle-|d\rangle\tag1$$
$$|-\rangle=\frac{|u\rangle-|d\rangle}{\sqrt{2}}\implies|d\rangle=|u\rangle-\sqrt{2}|-\rangle\tag2$$
substituting $(2)$ into $(1)$ we get: $$|u\rangle=\frac{\sqrt{2}}{2}(|+\rangle+|-\rangle)$$
substituting $(1)$ into $(2)$ we get: $$|d\rangle=\frac{\sqrt{2}}{2}(|+\rangle-|-\rangle)$$
REPLY [2 votes]: This is a simple linear algebra problem. We can rephrase it in vector notation as such: take the first basis $|u\rangle, |d\rangle$ and consider a vector associated to it $\vec{u} = (u\quad d)^T$. Then do the same for the $|+\rangle, |-\rangle$ like this $\vec{v} = (+\quad -)^T$. Then the change of basis is just a linear transformation to which we associate a square matrix $$\vec{v} = A\vec{u}$$ The matrix $A$ can be easily found by the transformation you gave. When you have your matrix, then it is just a matter of inverting it, so that $$\vec{u} = A^{-1}\vec{v}$$
Obviously in the case of a system of 2 equations, this method is a bit too much. You can easily get the inverse transformation by substitution, as you did. But in general one can have many equations and by then, even just for three, inverting a matrix is just faster and easier.
The matrix is just $$A=\frac{1}{\sqrt{2}}\left(\begin{matrix} 1& 1\\ 1&-1 \end{matrix}\right)$$ which is really easy to invert: in fact $A^2=I$, so $A^{-1}=A$ and the inverse change of basis has exactly the same form. | {"set_name": "stack_exchange", "score": 0, "question_id": 659972}
TITLE: Proof involving primes
QUESTION [2 upvotes]: Let n be a natural number. Prove that if 2^n -1 is prime, then n is prime.
I have considered proof by contradiction, as well as writing the converse and contrapositive of the statement out, but still cannot seem to come up with a proof.
Any help is appreciated
REPLY [2 votes]: Prove it by the contrapositive. Suppose that $n$ is not prime, that is, $n=pq$, where $p$ and $q$ are integers greater than $1$. We proceed by using the notable product
$$
a^k-1=(a-1)(a^{k-1}+a^{k-2}+\dots+a+1),\ \forall a\geq0,
$$
for any integer $k\geq 1$.
Note that
\begin{align}
2^n-1=(2^{p})^q-1&=(2^p-1)((2^p)^{q-1}+(2^p)^{q-2}+\dots+2^p+1).
\end{align}
Since $p,q>1$, it is easy to see that the integers $2^p-1$ and $(2^p)^{q-1}+(2^p)^{q-2}+\dots+2^p+1$ are greater than $1$. Therefore, $2^n-1$ is not a prime number. For instance, for $n=6=2\cdot 3$ the factorization reads $2^6-1=63=(2^2-1)\left((2^2)^2+2^2+1\right)=3\cdot 21$. | {"set_name": "stack_exchange", "score": 2, "question_id": 2239079}
\begin{document}
\title{On optimal weak algebraic manipulation detection codes
and weighted external difference families}
\author{Minfeng Shao and Ying Miao
\thanks{M. Shao and Y. Miao are with the Graduate School of Systems and Information Engineering,
University of Tsukuba, Tennodai 1-1-1, Tsukuba 305-8573, Japan
(e-mail: minfengshao@gmail.com, miao@sk.tsukuba.ac.jp).
}
}
\date{}
\maketitle
\vspace{0.1in}
\begin{abstract}
This paper provides a combinatorial characterization of weak algebraic manipulation detection (AMD)
codes via a kind of generalized external difference families called bounded
standard weighted external difference families (BSWEDFs).
By means of this characterization, we improve a known lower bound on the maximum
probability of successful tampering over all possible strategies of the adversary in weak AMD codes.
We clarify the relationship between weak AMD codes and BSWEDFs with various properties.
We also propose several explicit constructions for BSWEDFs, some of which can generate new optimal
weak AMD codes.
\end{abstract}
\begin{IEEEkeywords} Algebraic manipulation detection code, difference family, weighted external difference family.
\end{IEEEkeywords}
\section{Introduction}
Algebraic manipulation detection (AMD) codes were first introduced by
Cramer \textit{et al.} \cite{CDFPW} to convert linear secret sharing schemes into
robust secret sharing schemes and build nearly optimal robust fuzzy extractors.
For those cryptographic applications, AMD codes received much attention and
were further studied in \cite{AS,CFP,CPX}. Generally speaking, for AMD codes,
we consider two different settings: the
adversary has full knowledge of the source (the \textit{strong model}) and the adversary
has no knowledge about the source (the \textit{weak model}). In the viewpoint of combinatorics,
AMD codes were proved to be closely related with various kinds of external difference
families for both strong and weak models by Paterson and Stinson \cite{PS}. In the literature,
optimal AMD codes in the strong model and their corresponding generalized external
difference families received the most attention (see \cite{BJWZ,HP2018,JL,LNC,PS,MS,WYF,WYFF},
and the references therein), while relatively little was known about AMD codes
under the weak model.
In this paper, we focus on weak AMD codes. In \cite{PS},
Paterson and Stinson first derived a theoretical bound on the maximum probability
of successful tampering for weak AMD codes.
Very recently, Huczynska and Paterson \cite{HP} characterized
the optimal weak AMD codes with respect to the Paterson-Stinson bound
by weighted external difference families. Natural questions arising from
the Paterson-Stinson bound and the corresponding characterization are:
(i) whether the Paterson-Stinson bound is always
tight; and (ii) if not, what the equivalent combinatorial structures are for
those optimal weak AMD codes not yet characterized in \cite{HP}.
To answer these questions, in this paper, we further study
the relationship between weak AMD codes and weighted external
difference families. Firstly, we define a new type of weighted external
difference families, which are proved to be equivalent to weak AMD codes.
By means of this combinatorial characterization of weak AMD codes:
(1) We improve the known lower bound on the maximum probability
of successful tampering over all possible strategies of the adversary; (2) We
derive a necessary condition for the Paterson-Stinson bound to
be achieved;
(3) We determine the exact combinatorial structure for a weak AMD code to be
optimal, when the Paterson-Stinson bound is not achievable.
In this way, some weak AMD codes which had not previously been identified as
$R$-optimal can now be identified to be in fact $R$-optimal.
Secondly, we show the relationships between this new type of weighted external
difference families and other types of external difference families. Finally,
we exhibit several explicit constructions of optimal weighted external
difference families to generate optimal weak AMD codes.
This paper is organized as follows. In Section \ref{sec-preliminary}, we
introduce some preliminaries about AMD codes.
In Section \ref{sec-BSWEDF}, we investigate
the relationship between AMD codes and external
difference families. In Section \ref{sec-construction}, we describe several
explicit constructions for bounded standard weighted external difference families,
which are combinatorial equivalents of weak AMD codes.
Conclusion is drawn in Section \ref{sec-conclusion}.
\section{Preliminaries}\label{sec-preliminary}
In this section we describe some notation and definitions about AMD codes.
\begin{itemize}
\item Let $(G,+)$ be an Abelian group of order $n$ with identity $0$;
\item For a positive integer $n$, let $\mathbb{Z}_{n}$ be the residue class ring of integers modulo $n$;
\item For a multi-set $B$ and a positive integer $k$, let $k\boxtimes B$ denote the multi-set in which each
element of $B$ is repeated $k$ times;
\item For a subset $B\subseteq G$, ${D(B)}$ denotes the multi-set
$\{a-b\in G: a, b\in B,\,a\ne b\}$;
\item For subsets $B_1,B_2\subseteq G$, $D(B_1,B_2)$ denotes the multi-set
$\{a-b\in G: a\in B_1, b\in B_2\}$;
\item For a multi-set $B$, let $\sharp(a,B)$ denote the number of times that $a$ appears in $B$;
\item For positive integers $k_1,k_2,\dots,k_m$, let $\text{lcm}(k_1,k_2,\dots,k_m)$ denote the least common multiple of
$k_1,k_2,\dots,k_m$.
\end{itemize}
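To fix ideas with a small instance of this notation: for $G=\Z_5$ and $B=\{1,2\}$, we have $D(B)=\{1-2,\,2-1\}=\{4,1\}$ and $2\boxtimes B=\{1,1,2,2\}$, so that $\sharp(1,2\boxtimes B)=2$.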
Let $S$ be the source space, i.e., the set of plaintext messages with size $m$, and
$G$ be the encoded message space. An encoding function $E$
maps $s\in S$ to some $g\in G$.
Let $A_s\subseteq G$ denote the set of valid encodings of $s\in S$, where
$A_s\cap A_{s'}=\emptyset$ is required for any $s\ne s'$ so that any message
$g\in A_s$ can be correctly decoded as $D(g)=s$. Denote
$\mathcal{A}\triangleq\{A_s\,:\,s\in S\}$.
\begin{definition}[\cite{PS}]
For given $(S,G,\cA,E)$, let
\begin{itemize}
\item {The value $\Delta\in G\backslash \{0\}$ be chosen according to the adversary's strategy $\sigma$;}
\item {The source message $s\in S$ be chosen uniformly at random by the encoder, i.e., we
assume equiprobable sources;}
\item {The message $s$ be encoded into $g\in A_s$ using the encoding function $E$; }
\item {The adversary wins (a successful tampering) if and only if $g+\Delta\in A_{s'}$ with $s'\ne s$.}
\end{itemize}
The probability of successful tampering is denoted by $\rho_\sigma$ for strategy $\sigma$ of the adversary.
The code $(S,G,\mathcal{A},E)$ is called an $(n,m,a,\rho)$ \textit{algebraic manipulation detection code}
(or an $(n,m,a,\rho)$-AMD code for short) under the weak model,
where $a=\sum_{s\in S}|A_s|$ and $\rho$ denotes the maximum probability of successful tampering for all
possible strategies, i.e.,
\begin{equation*}
\rho=\max_{\sigma} \rho_{\sigma}.
\end{equation*}
In particular, if $E$ encodes $s$ to an element of $A_s$ uniformly, i.e.,
$Pr(E(s)=g)=\frac{1}{|A_s|}$ for any $s\in S$ and $g\in A_s$, then we
use $(S,G,\mathcal{A},E_u)$ to distinguish this kind of AMD codes under
the weak model, which were also termed as weak AMD codes in \cite{HP}.
\end{definition}
For weak AMD codes, the following Paterson-Stinson bound was derived in \cite{PS}.
\begin{lemma}[\cite{PS}]\label{lemma_R_optimal}
For any weak $(n,m,a,\rho)$-AMD code, the probability $\rho$ satisfies
\begin{equation*}
\rho\geq \frac{a(m-1)}{m(n-1)}.
\end{equation*}
\end{lemma}
\begin{definition}[\cite{PS}]\label{def_R_op_PS}
A weak AMD code that meets the bound of Lemma \ref{lemma_R_optimal} with
equality is said to be $R$\textit{-optimal} with respect to the bound in
Lemma \ref{lemma_R_optimal}, where $R$ is used to indicate that randomly
choosing $\Delta$ is an optimal strategy for the adversary.
\end{definition}
\section{Algebraic manipulation detection codes and external difference families}\label{sec-BSWEDF}
In this section, we study the relationship between algebraic manipulation
detection codes and external difference families. Before doing this, we first introduce
some notation and definitions about difference families and their generalizations.
\begin{definition}[\cite{CD}]
Let ${\cB} = \{B_i: 1\le i \le m\}$ be a family of subsets of $G$.
Then ${\cB}$ is called a {\it difference family} (DF) if each nonzero element of $G$
appears exactly $\lambda$ times in the multi-set ${\bigcup}_{1\leq i\leq m}D(B_i)$.
Let $K=(|B_1|,|B_2|,\dots,|B_m|)$. One briefly says that ${\cB}$ is an $(n,K,\lambda)$-DF.
\end{definition}
When $m=1$ the set $B_1$ is also called an $(n,k=|B_1|,\lambda)$ \textit{difference set}.
If $\cB$ forms a partition of $G$,
then ${\cB}$ is called a {\it partitioned difference family} (PDF) \cite{D2009}
and denoted as an $(n,K,\lambda)$-PDF.
\begin{definition}[\cite{PS}]\label{def_nonuniform}
Let ${\cB} = \{B_i: 1\le i \le m\}$ be a family of disjoint subsets of $G$.
Then ${\cB}$ forms an {\it external difference family} (EDF) if each nonzero element of $G$
appears exactly $\lambda$ times in the union of multi-sets $D(B_i,B_j)$ for
$1\leq i\ne j\leq m$,
i.e.,
\begin{equation*}
\bigcup_{1\leq i\ne j\leq m}D(B_i,B_j)=\lambda \boxtimes (G\backslash \{0\}).
\end{equation*}
We briefly denote ${\cB}$ as an $(n,m,K,\lambda)$-EDF, where $K=(|B_1|,|B_2|,\dots,|B_m|)$.
An EDF is \textit{regular} if $|B_1|=|B_2|=\dots=|B_m|=k$, denoted as an $(n,m,k,\lambda)$-EDF,
which is also named as a perfect difference system of sets (refer to \cite{L,FT,FG} for instances).
\end{definition}
\begin{definition}[\cite{PS}]\label{def_nonuniform_BEDF}
Let ${\cB} = \{B_i: 1\le i \le m\}$ be a family of disjoint subsets of $G$.
Then ${\cB}$ is a {\it bounded external difference family} (BEDF) if each nonzero element of $G$
appears at most $\lambda$ times in the union of multi-sets $D(B_i,B_j)$ for
$1\leq i\ne j\leq m$, i.e.,
\begin{equation*}
\bigcup_{1\leq i\ne j\leq m}D(B_i,B_j)\subseteq\lambda \boxtimes (G\backslash \{0\}).
\end{equation*}
We briefly denote ${\cB}$ as an $(n,m,K,\lambda)$-BEDF, where $K=(|B_1|,|B_2|,\dots,|B_m|)$.
\end{definition}
To construct AMD codes, in \cite{PS}, the following generalizations
of EDF were also introduced.
\begin{definition}[\cite{PS}]\label{def_nonuniform_GSEDF}
Let ${\cB} = \{B_i: 1\le i \le m\}$ be a family of disjoint subsets of $G$.
${\cB}$ is called an $(n,m;k_1,k_2,\cdots,k_m; $ $\lambda_1,\lambda_2,\cdots,\lambda_m)$-{\it
generalized strong external difference family} (GSEDF) if for any given $1\leq i\leq m$,
each nonzero element of $G$ appears exactly $\lambda_i$ times in the union
of multi-sets $D(B_i,B_j)$ for $1\leq j\ne i\leq m$, i.e.,
\begin{equation}\label{eqn_GSEDF}
\bigcup_{\{j:1\leq j\leq m,\,j\ne i\}}D(B_i,B_j)=\lambda_i \boxtimes (G\backslash \{0\}),
\end{equation}
where $k_i=|B_i|$ for $1\leq i\leq m$.
\end{definition}
\begin{definition}[\cite{PS}]\label{def_nonuniform_BGSEDF}
Let ${\cB} = \{B_i: 1\le i \le m\}$ be a family of disjoint subsets of $G$.
Then ${\cB}$ forms an $(n,m;k_1,k_2,\cdots,k_m;$ $\lambda_1,\lambda_2,\cdots,\lambda_m)$-{\it
bounded generalized strong external difference family} (BGSEDF) if for any given $1\leq i\leq m$,
each nonzero element of $G$ appears at most $\lambda_i$ times in the union
of multi-sets $D(B_i,B_j)$ for $1\leq j\ne i\leq m$, i.e.,
\begin{equation}\label{eqn_BGSEDF}
\bigcup_{\{j:1\leq j\leq m,\,j\ne i\}}D(B_i,B_j)\subseteq\lambda_i \boxtimes (G\backslash \{0\}),
\end{equation}
where $k_i=|B_i|$ for $1\leq i\leq m$.
\end{definition}
\begin{definition}[\cite{PS}]\label{def_nonuniform_PEDF}
Let ${\cB} = \{B_i: 1\le i \le m\}$ be a family of disjoint subsets of $G$.
Then ${\cB}$ is an $(n,m;c_1,c_2,\cdots,c_l;w_1,$ $w_2,\cdots,w_l;\lambda_1,\lambda_2,
\cdots,\lambda_l)$-{\it partitioned external difference family} (PEDF) if for any given $1\leq t\leq l$,
\begin{equation}\label{eqn_PEDF}
\bigcup_{\{i\,:\,|B_i|=w_t\}}\bigcup_{\{j:1\leq j\leq m,\,j\ne i\}}D(B_i,B_j)=\lambda_t \boxtimes (G\backslash \{0\}),
\end{equation}
where $c_t=|\{i\,:\,|B_i|=w_t,\, 1\leq i\leq m\}|$ for $1\leq t\leq l$.
\end{definition}
To characterize weak AMD codes, we further generalize external difference families
to weighted external difference families.
\begin{definition}\label{def_BSWEDF}
Let ${\cB} = \{B_i: 1 \le i \le m\}$ be a family of disjoint subsets of $G$.
Let $K=(k_1,k_2,\dots,k_m)$ with $k_i=|B_i|$ for $1\leq i\leq m$ and
$\widetilde{k}=\text{lcm}(k_1,k_2,\cdots,k_m)$. Define
$\widetilde{\cB}=\{\widetilde{B}_i: B_i\in \cB\}$ as the {\it standard
weighted multi-sets} of $\cB$, where
\begin{equation*}
\widetilde{B}_i\triangleq\frac{\widetilde{k}}{|B_i|}\boxtimes B_i=\frac{\widetilde{k}}{k_i}\boxtimes B_i.
\end{equation*}
Then ${\cB}$ is called an $(n,m,K,a,\lambda)$-{\it bounded standard weighted
external difference family} (BSWEDF) if $\lambda$
is the smallest positive integer such that
\begin{equation*}
\bigcup\limits_{1\leq i\ne j\leq m}D(B_i,\widetilde{B}_j)\subseteq \lambda \boxtimes (G\backslash \{0\}),
\end{equation*}
where $a=\sum_{1\leq i\leq m}k_i$. Furthermore, if
$\cB $ satisfies
\begin{equation*}
\bigcup\limits_{1\leq i\ne j\leq m}D(B_i,\widetilde{B}_j)=\lambda \boxtimes (G\backslash \{0\}),
\end{equation*}
then it is named as a {\it standard weighted external difference family}, also denoted as
an $(n,m,K,a,\lambda)$-SWEDF for short.
\end{definition}
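To illustrate Definition \ref{def_BSWEDF} with a small example, take $G=\Z_7$, $B_1=\{1\}$, and $B_2=\{2,3\}$. Then $\widetilde{k}=\text{lcm}(1,2)=2$, $\widetilde{B}_1=\{1,1\}$, $\widetilde{B}_2=\{2,3\}$, and
\begin{equation*}
D(B_1,\widetilde{B}_2)\cup D(B_2,\widetilde{B}_1)=\{6,5\}\cup\{1,1,2,2\}\subseteq 2\boxtimes (\Z_7\backslash\{0\}),
\end{equation*}
so $\{B_1,B_2\}$ is a $(7,2,(1,2),3,2)$-BSWEDF. Here $\lambda=2$ cannot be improved for $K=(1,2)$, since every element of $D(B_2,\widetilde{B}_1)$ automatically occurs with multiplicity $\widetilde{k}/k_1=2$; in particular, the lower bound \eqref{eqn_Bound_lambda} below, which only gives $\lambda\geq 1$ here, need not be achievable.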
For BSWEDFs and SWEDFs, we have the following facts on their parameters.
\begin{lemma}\label{lemma_bound_BSWEDF}
Let $\cB$ be an $(n,m,K,a,\lambda)$-BSWEDF. Then we have
\begin{equation}\label{eqn_Bound_lambda}
\lambda\geq \left\lceil\frac{\widetilde{k}a(m-1)}{n-1}\right\rceil.
\end{equation}
In particular, if $\cB$ is an $(n,m,K,a,\lambda)$-SWEDF, then
$(n-1)\mid (\widetilde{k}a(m-1))$ and
\begin{equation*}
\lambda=\frac{\widetilde{k}a(m-1)}{n-1}.
\end{equation*}
\end{lemma}
\begin{proof}
Let $\cB=\{B_i\,:\,1\leq i\leq m\}$.
The fact
\begin{equation*}
\bigcup\limits_{1\leq i\ne j\leq m}D(B_i,\widetilde{B}_j)=\bigcup\limits_{1\leq i\ne j\leq m}\bigcup_{b\in B_i}D(\{b\},\widetilde{B}_j)
\end{equation*}
means that
\begin{equation}\label{eqn_total_dif}
\left|\bigcup\limits_{1\leq i\ne j\leq m}D(B_i,\widetilde{B}_j)\right|=\sum\limits_{1\leq i\leq m}\sum\limits_{1\leq j\leq m\atop j\ne i}\sum_{b\in B_i}|D(\{b\},\widetilde{B}_j)|
=\sum\limits_{1\leq i\leq m}\sum\limits_{1\leq j\leq m\atop j\ne i}\sum_{b\in B_i}\widetilde{k}=\widetilde{k}a(m-1).
\end{equation}
Thus, we have
$\lambda \geq \lceil\frac{\widetilde{k}a(m-1)}{n-1}\rceil$.
Similarly, for the case of SWEDFs,
by Definition \ref{def_BSWEDF} and \eqref{eqn_total_dif}, we have $\lambda(n-1)=\widetilde{k}a(m-1)$,
i.e., $\lambda=\frac{\widetilde{k}a(m-1)}{n-1}$, which also means $(n-1)\mid(\widetilde{k}a(m-1))$.
\end{proof}
\begin{definition}
An $(n,m,K,a,\lambda)$-BSWEDF is said to be \textit{optimal}
if $\lambda$ takes the smallest possible value for
given $n$, $m$, and $K$.
\end{definition}
In particular, an $(n,m,K,a,\lambda)$-BSWEDF
is optimal if $\lambda$ achieves the lower bound given by
\eqref{eqn_Bound_lambda} with equality, i.e.,
$\lambda=\lceil\frac{\widetilde{k}a(m-1)}{n-1}\rceil$.
For $\Delta\in G\backslash \{0\}$, let $\rho_\Delta$ denote
the probability that the adversary wins by modifying $g\in A_s$ into $g+\Delta\in A_{s'}$ for some $s'\neq s$. Thus, we have
$\rho=\max\{\rho_\Delta:\Delta\in G\backslash \{0\}\}$.
\begin{theorem}\label{theorem_AMD_BSWEDF}
There exists a weak $(n,m,a,\rho)$-AMD code $(S,G,\cA,E_u)$ if and only
if there exists
an $(n,m,K,a,\lambda)$-BSWEDF, where $|G|=n$, $a=\sum_{1\leq i\leq m}|A_{s_i}|$,
$K=(|A_{s_1}|,|A_{s_2}|,\cdots, |A_{s_m}|)$, $s_i\in S$, and
$\rho=\frac{\lambda}{\widetilde{k}m}$.
\end{theorem}
\begin{proof}
If $(S,G,\cA,E_u)$ is a weak $(n,m,a,\rho)$-AMD code, then for any
$\Delta\in G\backslash\{0\}$, we have
\begin{equation*}
\rho_\Delta\leq \rho=\frac{\lambda}{\widetilde{k}m},
\end{equation*}
that is,
\begin{equation}\label{eqn_rho_delta_stand}
\begin{split}
\frac{\lambda}{\widetilde{k}m}\geq \rho_\Delta
=&\sum_{s\in S}Pr(s)\sum_{g\in A_s}Pr(E_u(s)=g)\left(\sum_{s'\ne s,s'\in S}Pr(g+\Delta\in A_{s'})\right)\\
=&\sum_{s\in S}\frac{1}{m}\sum_{g\in A_s}\frac{1}{|A_s|}\left(\sum_{s'\ne s,s'\in S}Pr(g+\Delta\in A_{s'})\right)\\
=&\sum_{s\in S}\frac{1}{m}\frac{1}{|A_s|}\left(\sum_{s'\ne s,s'\in S}\sum_{g\in A_s}Pr(g+\Delta\in A_{s'})\right),
\end{split}
\end{equation}
where the second equality holds by the fact that $E_u$ encodes $s$ to elements of $A_s$ with uniform probability.
Note that for given $\Delta$, $s$, $g\in A_s$ and $s'\ne s$,
\begin{equation*}
Pr(g+\Delta\in A_{s'})=
\begin{cases}
1,\,\,&\Delta\in D(A_{s'},\{g\}),\\
0,\,\,&\Delta\not\in D(A_{s'},\{g\}).\\
\end{cases}
\end{equation*}
Thus, Inequality \eqref{eqn_rho_delta_stand} implies that
\begin{equation}\label{eqn_rho_delta}
\begin{split}
\frac{\lambda}{m}\geq\widetilde{k}\rho_\Delta=&\sum_{s\in S}\frac{1}{m}\frac{\widetilde{k}}{|A_s|}\left(\sum_{s'\ne s,s'\in S}\sum_{g\in A_s}Pr(g+\Delta\in A_{s'})\right)\\
=&\sum_{s\in S}\frac{1}{m}\frac{\widetilde{k}}{|A_s|}\left(\sum_{s'\ne s,s'\in S}\sharp\left(\Delta, D(A_{s'},A_s)\right)\right)\\
=&\sum_{s\in S}\frac{1}{m}\left(\sum_{s'\ne s,s'\in S}\frac{\widetilde{k}}{|A_s|}\sharp\left(\Delta, D(A_{s'},A_s)\right)\right)\\
=&\sum_{s\in S}\frac{1}{m}\left(\sum_{s'\ne s,s'\in S}\sharp\left(\Delta, D(A_{s'},\widetilde{A}_s)\right)\right)\\
=&\frac{1}{m}\sharp\left(\Delta, \bigcup_{s,s'\in S,\atop s'\ne s} D(A_{s'},\widetilde{A}_s)\right),\\
\end{split}
\end{equation}
where $\sharp(\Delta, B)$ denotes the number of times that $\Delta$ appears in the multi-set $B$.
This means that any $\Delta\in G\backslash \{0\}$ appears at most
$\lambda$ times in the multi-set
$\bigcup_{s,s'\in S,\atop s'\ne s} D(A_{s'},\widetilde{A}_s)$,
i.e., $$\bigcup_{s,s'\in S,\atop s'\ne s} D(A_{s'},\widetilde{A}_s)\subseteq \lambda\boxtimes(G\backslash\{0\}).$$
Note that $\rho=\max\{\rho_\Delta\,:\,\Delta\in G\backslash\{0\}\}$ means
there exists at least one $\Delta\in G\backslash \{0\}$ such that the equality
in \eqref{eqn_rho_delta} holds.
Then $\{A_s\,:\,s\in S\}$ forms an $(n,m,(|A_{s_1}|,|A_{s_2}|,\cdots,|A_{s_m}|),a,\lambda)$-BSWEDF
by Definition \ref{def_BSWEDF}.
Conversely, suppose that there exists an $(n,m,K,a,\lambda)$-BSWEDF $\cB=\{B_i\,:\,1\leq i\leq m\}$ over $G$. Let
$S=\{s_i\,:\,1\leq i\leq m\}$ and $A_{s_i}=B_i$ for $1\leq i\leq m$. Then we can define
a weak AMD code, where $E_u(s_i)=g\in B_i$ with equiprobability. For any $\Delta\in G\backslash\{0\}$,
similarly as \eqref{eqn_rho_delta_stand}, we have
\begin{equation*}
\begin{split}
\rho_\Delta=&\sum_{s\in S}\frac{1}{m}\frac{1}{|A_s|}\left(\sum_{s'\ne s,s'\in S}\sum_{g\in A_s}Pr(g+\Delta\in A_{s'})\right)\\
=&\sum_{1\leq i\leq m}\frac{1}{m}\frac{1}{|B_i|}\left(\sum_{1\leq j\leq m\atop j\ne i}\sharp(\Delta,D(B_j,B_i))\right)\\
=&\sum_{1\leq i\leq m}\frac{1}{\widetilde{k}m}\left(\sum_{1\leq j\leq m\atop j\ne i}\sharp(\Delta,D(B_j,\widetilde{B}_i))\right)\\
=&\frac{1}{\widetilde{k}m}\left(\sum_{1\leq j\ne i\leq m}\sharp(\Delta,D(B_j,\widetilde{B}_i))\right)\\
\leq &\frac{\lambda}{\widetilde{k}m},
\end{split}
\end{equation*}
where the last inequality holds by the fact that $\cB$ is an $(n,m,K,a,\lambda)$-BSWEDF.
According to Definition \ref{def_BSWEDF}, the equality is achieved for at least one
$\Delta\in G\backslash\{0\}$ in the preceding inequality.
Thus, the weak $(n,m,a,\rho)$-AMD code
defined based on the BSWEDF $\cB$ satisfies $$\rho=\max\{\rho_\Delta\,:\, \Delta\in
G\backslash\{0\}\}= \frac{\lambda}{\widetilde{k}m},$$ which completes the proof.
\end{proof}
When we consider the optimality of a BSWEDF, the size-distribution $K=(k_1, k_2,\dots,k_m)$
is given. However, the $R$-optimality of weak AMD codes, as defined in \cite{PS}, only relates to $a=\sum_{1\leq i\leq m}k_i$
and disregards the exact size-distribution $K$ of $\cA$.
There may exist several BSWEDFs with different $K$ which correspond to weak AMD codes
with exactly the same parameter $a$. Thus, although the BSWEDF gives a characterization of the weak AMD code,
in general, the optimal BSWEDF for a given $K$ does not necessarily correspond to an $R$-optimal
weak AMD code for a given $a$.
\begin{definition}\label{def_strong_optimal}
For given $n$, $m$ and $a$, an $(n,m,K,a,\lambda)$-BSWEDF is said to be \textit{strongly optimal}
if $\frac{\lambda}{\widetilde{k}m}=\rho_{(n,m,a)}$, where
\begin{equation}\label{eqn_rh_nma}
\rho_{(n,m,a)}=\min_{K'}\left\{\frac{\lambda'}{\widetilde{k'}m}\,:\,\exists\, (n,m,K',a,\lambda')\text{-BSWEDF}\, s.t.\, \sum_{1\leq i\leq m}k'_i=a\right\}.
\end{equation}
\end{definition}
By Theorem \ref{theorem_AMD_BSWEDF} and Lemma \ref{lemma_bound_BSWEDF}, we have
\begin{corollary}\label{corollary_improved_bound}
For any weak $(n,m,a,\rho)$-AMD code $(S,G,\mathcal{A},E_u)$, we have
\begin{equation*}
\rho\geq \rho_{(n,m,a)}\geq \min_{K}\left\{\left\lceil\frac{\widetilde{k}a(m-1)}{n-1}
\right\rceil\frac{1}{\widetilde{k}m}\,:\,\sum_{1\leq i\leq m}k_i=a\right\},
\end{equation*}
where $|A_i|=k_i$ for any $A_i \in \cA$.
\end{corollary}
\begin{proof}
Let $(S,G,\mathcal{A},E_u)$ be a weak $(n,m,a,\rho)$-AMD code.
By Theorem \ref{theorem_AMD_BSWEDF}, there exists an $(n,m,K,a,\lambda)$-BSWEDF
with $\lambda= \widetilde{k}m\rho$. Then by Lemma \ref{lemma_bound_BSWEDF} and
\eqref{eqn_rh_nma},
$$\rho=\frac{\lambda}{\widetilde{k}m}\geq \rho_{(n,m,a)}\geq\min_{K}\left\{\left\lceil\frac{\widetilde{k}a(m-1)}{n-1}\right\rceil
\frac{1}{\widetilde{k}m}\,:\,\sum_{1\leq i\leq m}k_i=a\right\}.$$
\end{proof}
\begin{definition}
A weak AMD code with $\rho=\rho_{(n,m,a)}$ is said to be $R$\textit{-optimal} with respect to the bound in
Corollary \ref{corollary_improved_bound}.
\end{definition}
When $(n-1)\mid (\widetilde{k}a(m-1))$,
the bound in Corollary \ref{corollary_improved_bound} is exactly the same as the
one given in Lemma \ref{lemma_R_optimal}.
However, when $(n-1) \nmid (\widetilde{k}a(m-1))$, our bound
in Corollary \ref{corollary_improved_bound} can improve
the known one in Lemma \ref{lemma_R_optimal}.
The following is an easy example.
\begin{corollary}
For any weak $(n,m,a,\rho)$-AMD code $(S,G,\mathcal{A},E_u)$, if $n-1$ is a prime and $a<n-1$, then we have
\begin{equation*}
\rho\geq \min_{K}\left\{\left\lceil\frac{\widetilde{k}a(m-1)}{n-1}\right\rceil
\frac{1}{\widetilde{k}m}\,:\,\sum_{1\leq i\leq m}k_i=a\right\}>\frac{a(m-1)}{m(n-1)}.
\end{equation*}
\end{corollary}
\begin{proof}
The corollary follows from the facts that $k_i\leq a<n-1$ for $1\leq i\leq m$, $m\leq a<n-1$, and $n-1$ is a prime.
In this case, $(n-1)\nmid(\widetilde{k}a(m-1))$.
\end{proof}
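As a quick illustration of the computation, take $n=12$, $m=2$, and $a=5<n-1=11$. The possible size-distributions are $K=(1,4)$ and $K=(2,3)$, which give $\left\lceil 4\cdot 5\cdot 1/11\right\rceil\frac{1}{4\cdot 2}=\frac{1}{4}$ and $\left\lceil 6\cdot 5\cdot 1/11\right\rceil\frac{1}{6\cdot 2}=\frac{1}{4}$, respectively, so $\rho\geq \frac{1}{4}>\frac{5}{22}=\frac{a(m-1)}{m(n-1)}$.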
A more concrete example is listed below.
\begin{example}
Let $n=10$, $m=3$, and $a=5$. Let $\cB=\{\{5\},\{2\},\{0,4,6\}\}$ be a family of disjoint subsets of $\Z_{10}$,
which corresponds to a weak $(10,3,5,\rho)$-AMD code, where $\rho=\frac{1}{3}\cdot\frac{1}{1}\cdot 1+\frac{1}{3}\cdot\frac{1}{1}\cdot 0+\frac{1}{3}\cdot\frac{1}{3}\cdot 1=\frac{4}{9}$.
According to Lemma \ref{lemma_R_optimal} and Definition \ref{def_R_op_PS}, this is not an $R$-optimal weak AMD code.
However, $R$-optimality should mean that randomly choosing $\Delta$ is an optimal strategy for the adversary.
Clearly, according to Corollary \ref{corollary_improved_bound}, the parameter $\rho$ cannot be smaller than
\begin{equation*}
\begin{split}
&\min_{K}\left\{\left\lceil\frac{\widetilde{k}5(3-1)}{10-1}
\right\rceil\frac{1}{3\widetilde{k}}\,:\,\sum_{1\leq i\leq 3}k_i=5\right\}\\
=&\min\left\{\left\lceil\frac{\lcm (1,1,3)\cdot5\cdot2}{9}\right\rceil\frac{1}{3\lcm(1,1,3)},\left\lceil\frac{\lcm(1,2,2)\cdot5\cdot2}{9}\right\rceil\frac{1}{3\lcm(1,2,2)}\right\}\\
=&\min\left\{\frac{4}{9},\frac{1}{2}\right\}=\frac{4}{9}.\\
\end{split}
\end{equation*}
Therefore, this example should be an $R$-optimal weak $(10,3,5,\rho)$-AMD code.
This trouble is due to the fact that the known bound in Lemma \ref{lemma_R_optimal} is not always tight.
\end{example}
Relationships between optimal weak AMD codes and optimal BSWEDFs are described below.
\begin{corollary}\label{corollary_char}
Let $n$ and $m$ be positive integers.
\begin{itemize}
\item[(I)] {For given $K=(k_1,k_2,\dots,k_m)$, let $\rho_{(n,m,K)}$ denote the
smallest possible $\rho$ for weak $(n,m,\sum_{1\leq i\leq m}k_i,\rho)$-AMD codes.
Then a weak $(n,m,a,\rho)$-AMD
code $(S,G,\cA,E_u)$ has the smallest $\rho$, i.e., $\rho=\rho_{(n,m,K)}$ if and only if its corresponding BSWEDF with parameters
$(n,m,K,a,\lambda=\widetilde{k}m\rho)$ is optimal, where
$S=\{s_i\,:\,1\leq i\leq m\}$, $\cA=\{A_{s_i}\,:\,1\leq i\leq m\}$, $k_i=|A_{s_i}|$ for $1\leq i\leq m$, $K=(k_1,k_2,\dots,k_m)$, and $a=\sum_{1\leq i\leq m}k_i$.}
\item[(II)] {For given $a$, there exists an $R$-optimal weak $(n,m,a,\rho)$-AMD
code $(S,G,\cA,E_u)$ with respect to the bound in Corollary \ref{corollary_improved_bound}
if and only if there exists
a strongly optimal $(n,m,K,a,\lambda)$-BSWEDF, where $|G|=n$, $a=\sum_{s\in S}|A_s|$,
$\rho=\rho_{(n,m,a)}$,
and $\lambda=\widetilde{k}m\rho_{(n,m,a)}$.}
\item[(III)]{There exists an $R$-optimal weak $(n,m,a,\rho)$-AMD code $(S,G,\cA,E_u)$ with respect to
the bound in Lemma \ref{lemma_R_optimal} if and only
if there exists
an $(n,m,K,a,\lambda)$-SWEDF, where $\rho=\frac{a(m-1)}{m(n-1)}$,
and $\lambda=\frac{\widetilde{k}a(m-1)}{n-1}$.}
\end{itemize}
\end{corollary}
\begin{proof}
By Theorem \ref{theorem_AMD_BSWEDF}, for given $n$, $m$, $K$ (or $a$, resp.), a
weak AMD code with the smallest $\rho$ is equivalent to a
BSWEDF with the smallest $\lambda$, i.e., an optimal (or strongly optimal, resp.) BSWEDF.
The third part of the result follows directly
from Theorem \ref{theorem_AMD_BSWEDF} and Lemma \ref{lemma_bound_BSWEDF}.
\end{proof}
\begin{example}
Let $n=10$, $m=3$, and $a=5$. Let $\cB^{(1)}=\{B^{(1)}_1=\{5\},B^{(1)}_2=\{4,6\},B^{(1)}_3=\{2,8\}\}$ and
$\cB^{(2)}=\{B^{(2)}_1=\{5\},B^{(2)}_2=\{2\},B^{(2)}_3=\{0,4,6\}\}$ be two families of disjoint subsets
of $\Z_{10}$. It is easy to verify that
\begin{equation*}
\bigcup_{1\leq i\ne j\leq 3}D\left(B^{(1)}_i,\widetilde{B}^{(1)}_j\right)\subseteq 3\boxtimes (\Z_{10}\backslash\{0\})
\end{equation*}
and
\begin{equation*}
\bigcup_{1\leq i\ne j\leq 3}D\left(B^{(2)}_i,\widetilde{B}^{(2)}_j\right)\subseteq 4\boxtimes (\Z_{10}\backslash\{0\}).
\end{equation*}
According to Lemma \ref{lemma_bound_BSWEDF}, $\cB^{(1)}$ is an optimal $(10,3,(1,2,2),5,3)$-BSWEDF and
$\cB^{(2)}$ is an optimal $(10,3,(1,1,3),5,4)$-BSWEDF. By Corollary \ref{corollary_improved_bound},
$$\rho_{(10,3,5)}\geq \min_{K}\left\{\left\lceil\frac{\widetilde{k}5(3-1)}{10-1}
\right\rceil\frac{1}{3\widetilde{k}}\,:\,\sum_{1\leq i\leq 3}k_i=5\right\}=\frac{4}{9}.$$
Thus, by Definition \ref{def_strong_optimal}, $\cB^{(2)}$ is in fact not only an optimal, but also a strongly optimal BSWEDF.
By Corollary \ref{corollary_char} (II), we can obtain a corresponding $R$-optimal weak AMD code with respect
to the bound in Corollary \ref{corollary_improved_bound} from $\cB^{(2)}$.
\end{example}
Although the weak $(n,m,a,\rho_{(n,m,K)}=\frac{\lambda}{\widetilde{k}m})$-AMD
code $(S,G,\cA,E_u)$ based on an optimal $(n,m,K,a,\lambda)$-BSWEDF
may sometimes not correspond to an $R$-optimal weak AMD code with parameters
$(n,m,a,\rho_{(n,m,a)})$, the difference $\rho_{(n,m,K)}-\rho_{(n,m,a)}$ is not big.
\begin{lemma}
Let $a=\sum_{A\in \cA}|A|=\sum_{1\leq i\leq m}k_i$.
Let $(S,G,\cA,E_u)$ be the weak $(n,m,a,\rho=\frac{\lambda}{\widetilde{k}m})$-AMD
code based on an optimal
$(n,m,K,a,\lambda)$-BSWEDF with $\lambda=\lceil\frac{\widetilde{k}a(m-1)}{n-1}\rceil$,
and let $(S,G,\cA',E_u)$ be the $R$-optimal weak $(n,m,a,\rho_{(n,m,a)})$-AMD
code with respect to the bound in
Corollary \ref{corollary_improved_bound}. Then we have
\begin{equation*}
0 \leq\rho_{(n,m,K)}-\rho_{(n,m,a)}\leq \frac{1}{\widetilde{k}m}.
\end{equation*}
\end{lemma}
\begin{proof}
The lemma follows directly from the fact that
$$0 \leq\rho_{(n,m,K)}-\rho_{(n,m,a)}= \left\lceil\frac{\widetilde{k}a(m-1)}{n-1}\right\rceil\frac{1}{\widetilde{k}m}-\rho_{(n,m,a)}\leq \left\lceil\frac{\widetilde{k}a(m-1)}{n-1}\right\rceil\frac{1}{\widetilde{k}m}-\frac{a(m-1)}{m(n-1)}\leq \frac{1}{\widetilde{k}m}.$$
\end{proof}
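Returning to the two families over $\Z_{10}$ in the example above, for $K=(1,2,2)$ we have $\rho_{(10,3,K)}=\frac{3}{2\cdot 3}=\frac{1}{2}$ while $\rho_{(10,3,5)}=\frac{4}{9}$, so the gap is $\frac{1}{2}-\frac{4}{9}=\frac{1}{18}\leq\frac{1}{\widetilde{k}m}=\frac{1}{6}$, in accordance with the preceding lemma.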
In \cite{HP}, Huczynska and Paterson characterized $R$-optimal AMD codes
$(S,G,\cA,E_u)$ by reciprocally-weighted external difference families,
which can be defined as follows.
\begin{definition}[\cite{HP}]\label{def_RWEDF}
Let ${\cB} = \{B_i: 1 \le i \le m\}$ be a family of subsets of $G$.
Let $K=(k_1,k_2,\cdots,k_m)$ with $k_i=|B_i|$ for $1\leq i\leq m$ and
$\widetilde{k}=\text{lcm}(k_1,k_2,\cdots,k_m)$. Then $\cB$
is said to be an $(n,m,(k_1,k_2,\cdots,k_m),d)$ \textit{reciprocally-weighted external
difference family} (RWEDF) if
\begin{equation*}
d=\sum_{1\leq i\leq m}\frac{N_i(\delta)}{k_i} \text{ for each }\delta\in G\backslash \{0\},
\end{equation*}
where $$N_i(\delta)\triangleq \left|\left\{(b_i,b_j):b_i\in B_i,\,\,
b_j\in\bigcup_{1\leq t\ne i\leq m}B_t, \text{ and }b_j-b_i=\delta\right\}\right|.$$
\end{definition}
\begin{theorem}[\cite{HP}]\label{theorem_AMD_RWEDF}
A weak $(n,m,a,\rho)$-AMD code $(S,G,\cA,E_u)$ is $R$-optimal with respect to
the bound in Lemma \ref{lemma_R_optimal} if and only
if there exists
an $(n,m,K,a,d)$-RWEDF, where $\rho=\frac{a(m-1)}{m(n-1)}$,
and $d=\frac{a(m-1)}{n-1}$.
\end{theorem}
Clearly, $N_i(\delta)=\sharp\left(\delta, \bigcup_{1\leq j\leq m\atop j\ne i} D(B_j,B_i)\right)$
for $1\leq i\leq m$, and
by Theorem \ref{theorem_AMD_RWEDF} and Corollary \ref{corollary_char}
or Definitions \ref{def_BSWEDF} and \ref{def_RWEDF},
we know that an $(n,m,K,a,d)$-RWEDF is essentially the same as an $(n,m,K,a,\lambda)$-SWEDF, where
$d=\frac{\lambda}{\widetilde{k}}$. Therefore, Theorem \ref{theorem_AMD_BSWEDF} and Corollary
\ref{corollary_char} provide more
combinatorial characterizations for various weak AMD codes $(S,G,\cA,E_u)$.
These results can be viewed as a generalization of Theorem \ref{theorem_AMD_RWEDF}.
As a byproduct, we have the following property
for an $(n,m,K,a,d)$-RWEDF directly from Lemma \ref{lemma_bound_BSWEDF}
and Corollary \ref{corollary_char} (III).
\begin{corollary}
A necessary condition for the existence of an $(n,m,K,a,d)$-RWEDF, or equivalently
an $R$-optimal weak $(n,m,a,\rho)$-AMD code $(S,G,\cA,E_u)$ with respect to Lemma \ref{lemma_R_optimal},
is $(n-1)\mid (\widetilde{k}a(m-1))$, where $K=(k_1=|A_{s_1}|,k_2=|A_{s_2}|,\cdots,k_m=|A_{s_m}|)$ and
$\widetilde{k}=\lcm(k_1,k_2,\cdots,k_m)$.
\end{corollary}
In Figure \ref{figure AMD}, we summarize the relationships between weak AMD codes and
BSWEDFs, where SO-BSWEDF, O-BSWEDF, W-AMD-code, and OW-AMD-code denote a strongly optimal
BSWEDF, an optimal BSWEDF, a weak AMD code, and an $R$-optimal weak AMD code, respectively.
\begin{figure}[!t]
\centering
\begin{tikzpicture}
\tikzset{venn circle/.style={draw,circle,opacity=1}}
\node [venn circle, minimum width=6cm] (A) at (0,0) {};
\node [venn circle, minimum width=5.3cm] (B) at (0,0.35) {};
\node [venn circle, minimum width=4.2cm ] (C) at (0,0.9) {};
\node [venn circle, minimum width=3cm ] (D) at (0,1.5) {};
\node [] (SWEDF) at (0,1.5cm) {\begin{tabular}{c}
\small SWEDF\\
\small (RWEDF \cite{HP})\\
\end{tabular}};
\node [] (BSWEDF) at (0,-2.6cm) {\small BSWEDF};
\node [] (O-BSWEDF) at (0,-1.65cm) {\small O-BSWEDF};
\node [] (SO-BSWEDF) at (0,-0.55cm) {\small SO-BSWEDF};
\node [venn circle, minimum width=6cm] (A1) at (8,0) {};
\node [venn circle, minimum width=5.3cm] (B1) at (8,0.35) {};
\node [venn circle, minimum width=4.2cm ] (C1) at (8,0.9) {};
\node [venn circle, minimum width=3cm ] (D1) at (8,1.5) {};
\node [] (ROWAMD) at (8,1.5cm) {\begin{tabular}{c}
\small OW-AMD-code\\
(Lemma \ref{lemma_R_optimal})\\
\end{tabular}};
\node [] (AMD) at (8,-2.6cm) {\small W-AMD-code};
\node [] (KOAMD) at (8,-1.65cm) {\begin{tabular}{c}
\small W-AMD-code\\
with $ \rho_{(n,m,K)}$\\
\end{tabular}};
\node [] (OWAMD) at (8,-0.55cm) {\begin{tabular}{c}
\small OW-AMD-code\\
(Corollary \ref{corollary_improved_bound})\\
\end{tabular}};
\draw[black,<->] (ROWAMD) -- (SWEDF) node at (4,1.7cm) {Coro. \ref{corollary_char} (III) };
\node [] (REMARK) at (4,1.3cm) {(Th. 2 \cite{HP})};
\draw[black,<->] (KOAMD) -- (O-BSWEDF) node at (4,-1.5cm) {Coro. \ref{corollary_char} (I)};
\draw[black,<->] (OWAMD) -- (SO-BSWEDF) node at (4,-0.4cm) {Coro. \ref{corollary_char} (II)};
\draw[black,<->] (AMD) -- (BSWEDF) node at (4,-2.4cm) {Th. \ref{theorem_AMD_BSWEDF}};
\end{tikzpicture}
\caption{The relationships between AMD codes and BSWEDFs}
\label{figure AMD}
\end{figure}
\subsection{Among EDFs, GSEDFs, PEDFs, SWEDFs, and BSWEDFs}
In general, an EDF is not necessarily an SWEDF. However, in the following cases,
an EDF is always an SWEDF. First of all, we consider the regular case.
\begin{lemma}
A regular $(n,m,k,\lambda)$-EDF forms an $(n,m,K=(k,k,\dots,k),a=mk,\lambda)$-SWEDF.
\end{lemma}
The lemma follows directly from the definitions of EDF and SWEDF.
For the case of GSEDFs we have the following result.
\begin{lemma}
If $\{B_i\,:\, 1\leq i\leq m\}$ is an $(n,m;k_1,k_2,\cdots,k_m;
\lambda_1,\lambda_2,\cdots,\lambda_m)$-GSEDF, then $\{B_i\,:\, 1\leq i\leq m\}$
is an $(n,m,(k_1,k_2,\cdots,k_m),a,\lambda)$-SWEDF, where $\lambda=\sum_{1\leq i\leq m}
\frac{\lambda_i\widetilde{k}}{k_i}$.
\end{lemma}
\begin{proof}
Let $\{B_i\,:\, 1\leq i\leq m\}$ be an $(n,m;k_1,k_2,\cdots,k_m;
\lambda_1,\lambda_2,\cdots,\lambda_m)$-GSEDF, by \eqref{eqn_GSEDF},
\begin{equation*}
\bigcup_{1\leq j\leq m,\,j\ne i}D(B_i,B_j)=\lambda_i \boxtimes (G\backslash \{0\}),
\end{equation*}
which means $$\bigcup_{1\leq j\leq m,\,j\ne i}D(B_j,\widetilde{B}_i)=\frac{\lambda_i\widetilde{k}}{k_i} \boxtimes (G\backslash \{0\}).
$$
Thus, we have $$\bigcup_{1\leq i\leq m}\bigcup_{1\leq j\leq m,\,j\ne i}D(B_j,\widetilde{B}_i)
=\left(\sum_{1\leq i\leq m}\lambda_i\frac{\widetilde{k}}{k_i}\right)\boxtimes (G\backslash \{0\})=\lambda\boxtimes (G\backslash \{0\}),
$$ i.e., $\{B_i\,:\, 1\leq i\leq m\}$
is an $(n,m,(k_1,k_2,\cdots,k_m),a,\lambda)$-SWEDF with $\lambda=\sum_{1\leq i\leq m}
\frac{\lambda_i\widetilde{k}}{k_i}$.
\end{proof}
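As a quick illustration, over $\Z_5$ the pair $B_1=\{0\}$, $B_2=\{1,2,3,4\}$ forms a $(5,2;1,4;1,1)$-GSEDF, and the lemma yields a $(5,2,(1,4),5,5)$-SWEDF: indeed, $\widetilde{k}=4$ and
\begin{equation*}
D(B_1,\widetilde{B}_2)\cup D(B_2,\widetilde{B}_1)=\{1,2,3,4\}\cup 4\boxtimes\{1,2,3,4\}=5\boxtimes(\Z_5\backslash\{0\}),
\end{equation*}
matching $\lambda=\lambda_1\frac{\widetilde{k}}{k_1}+\lambda_2\frac{\widetilde{k}}{k_2}=4+1=5$.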
Similarly, the relationship between PEDFs and SWEDFs can be given by the
following lemma.
\begin{lemma}
If $\{B_i\,:\, 1\leq i\leq m\}$ is an $(n,m;c_1,c_2,\cdots,c_l;w_1,w_2,\cdots,w_l;
\lambda_1,\lambda_2,\cdots,\lambda_l)$-PEDF, then $\{B_i\,:\, 1\leq i\leq m\}$
is an $(n,m,K=(|B_1|,|B_2|,\cdots,|B_m|),a,\lambda)$-SWEDF, where
$\widetilde{k}=\text{lcm}(w_1,w_2,\cdots, w_l)$ and $\lambda=\sum_{1\leq t\leq l}
\frac{\lambda_t\widetilde{k}}{w_t}$.
\end{lemma}
\begin{proof}
Since $\{B_i\,:\, 1\leq i\leq m\}$ is an $(n,m;c_1,c_2,\cdots,c_l;w_1,w_2,\cdots,w_l;
\lambda_1,\lambda_2,\cdots,\lambda_l)$-PEDF, by \eqref{eqn_PEDF},
\begin{equation*}
\bigcup_{\{i\,:\,|B_i|=w_t\}}\bigcup_{1\leq j\leq m,\,j\ne i}D(B_i,B_j)=\lambda_t \boxtimes (G\backslash \{0\})
\end{equation*}
for $1\leq t\leq l$. By Definition \ref{def_nonuniform_PEDF}, $|B_i|\in \{w_j\,:\,1\leq j\leq l\}$
for $1\leq i\leq m$.
Thus, for $K=(|B_1|,|B_2|,\cdots, |B_m|)$, we have
$\widetilde{k}=\text{lcm}(|B_1|,|B_2|,\cdots, |B_m|)=\text{lcm}(w_1,w_2,\cdots, w_l).$
Thus, we have $$\bigcup_{1\leq t\leq l}\bigcup_{\{i\,:\,|B_i|=w_t\}}\bigcup_{1\leq j\leq m,\,j\ne i}D(B_j,\widetilde{B}_i)=\left(\sum_{1\leq t\leq l}\lambda_t\frac{\widetilde{k}}{w_t}\right)\boxtimes (G\backslash \{0\})=\lambda \boxtimes (G\backslash \{0\}),
$$ i.e., $\{B_i\,:\, 1\leq i\leq m\}$
is an $(n,m,K=(|B_1|,|B_2|,\cdots,|B_m|),a,\lambda)$-SWEDF, where $\lambda=\sum_{1\leq t\leq l}
\frac{\lambda_t\widetilde{k}}{w_t}$.
\end{proof}
In what follows, we recall an example of an SWEDF which is neither an EDF, nor a GSEDF, nor a PEDF.
\begin{example}[\cite{PS}]
Let $G=(\Z_{10},+)$ and $\cB=\{B_1=\{0\},B_2=\{5\},B_3=\{2,3\},B_4=\{6,4\}\}$. Then
$\widetilde{B}_1=\{0,0\},\widetilde{B}_2=\{5,5\},\widetilde{B}_3=\{2,3\},
\widetilde{B}_4=\{6,4\}$. It is easy to check
$$\bigcup_{1\leq i\leq 4}\bigcup_{1\leq j\leq 4,\, j\ne i}D(B_i,\widetilde{B}_j)
=4\boxtimes (G\backslash\{0\}),$$
$$\bigcup_{1\leq i\leq 4}\bigcup_{1\leq j\leq 4,\, j\ne i}D(B_i,B_j)
\ne \lambda \boxtimes(G\backslash\{0\}),$$
$$\bigcup_{2\leq j\leq 4}D(B_1,B_j)
=\{5,8,7,4,6\}\ne \lambda \boxtimes (G\backslash\{0\}),$$
and
$$\bigcup_{3\leq i\leq 4}\bigcup_{1\leq j\leq 4,\, j\ne i}D(B_i,B_j)
\ne \lambda\boxtimes (G\backslash\{0\}),$$
for any positive integer $\lambda$.
Thus, $\cB$ is an SWEDF which
does not form an EDF, or a GSEDF, or a PEDF.
\end{example}
Similarly, a BEDF with bound $\lambda$ does not necessarily form a BSWEDF with the same parameter $\lambda$, and we have the following
relationship between regular BEDFs and BSWEDFs.
\begin{lemma}
A regular $(n,m,k,\lambda)$-BEDF forms an $(n,m,K=(k,k,\dots,k),a=mk,\lambda_1)$-BSWEDF, where
$\lambda_1\leq \lambda$.
\end{lemma}
\begin{lemma}
If $\cB=\{B_i\,:\, 1\leq i\leq m\}$ is an $(n,m;k_1,k_2,\cdots,k_m;
\lambda_1,\lambda_2,\cdots,\lambda_m)$-BGSEDF, then $\cB$
is an $(n,m,(k_1,k_2,\cdots,$ $k_m),a=\sum_{1\leq i\leq m}k_i,\lambda)$-BSWEDF, where $\lambda\leq\sum_{1\leq i\leq m}
\frac{\lambda_i\widetilde{k}}{k_i}$.
\end{lemma}
\begin{proof}
Since $\cB=\{B_i\,:\, 1\leq i\leq m\}$ is an $(n,m;k_1,k_2,\cdots,k_m;
\lambda_1,\lambda_2,\cdots,\lambda_m)$-BGSEDF, by \eqref{eqn_BGSEDF},
\begin{equation*}
\bigcup_{1\leq j\leq m,\,j\ne i}D(B_i,B_j)\subseteq \lambda_i \boxtimes (G\backslash \{0\}),
\end{equation*}
which means
\begin{equation}\label{eqn_BGSEDF_BSWEDF}
\bigcup_{1\leq j\leq m,\,j\ne i}D(B_j,\widetilde{B}_i)\subseteq \lambda_i\frac{\widetilde{k}}{k_i} \boxtimes (G\backslash \{0\}).
\end{equation}
Let $\lambda$ be the smallest positive integer such that
\begin{equation*}
\bigcup_{1\leq i\leq m}\bigcup_{1\leq j\leq m,\,j\ne i}D(B_j,\widetilde{B}_i)\subseteq \lambda \boxtimes (G\backslash \{0\}).
\end{equation*}
Thus, by \eqref{eqn_BGSEDF_BSWEDF}, we have $\lambda \leq \sum_{1\leq i\leq m}
\frac{\lambda_i\widetilde{k}}{k_i}$, i.e., $\cB$
is an $(n,m,(k_1,k_2,\cdots,k_m),a=\sum_{1\leq i\leq m}k_i,\lambda)$-BSWEDF.
\end{proof}
\section{Constructions of optimal BSWEDFs and SWEDFs}\label{sec-construction}
In this section, we are going to construct BSWEDFs and SWEDFs which, in general, are neither EDFs, nor GSEDFs, nor PEDFs.
We recall a well-known construction of difference families. Let $q=4k+1$ be
a prime power. Let $\alpha$ be a primitive element of $\F_q$,
\begin{equation}\label{eqn_D_2}
D^{2}_i=\{\alpha^{i+2j}\,:\,0\leq j\leq 2k-1\},\,\,\text{for}\,\,i=0,1
\end{equation}
and
\begin{equation}\label{eqn_D_4}
D^{4}_i=\{\alpha^{i+4j}\,:\,0\leq j\leq k-1\},\,\,\text{for}\,\,0\leq i\leq 3.
\end{equation}
It is well-known that $\{D^{2}_0,D^{2}_1\}$ is a $(q,2k,2k-1)$-DF over the additive group of $\F_q$.
\begin{construction} Let $\cS=\{S_1,S_2,S_3\}$ be the family of disjoint subsets of $\Z_{2}\times\F_q$ defined as
\begin{equation*}
S_1=\{(0,0),(1,0)\},\,\, S_2=\{0\}\times D^{4}_0\cup \{1\}\times D^{4}_2,\text{ and } S_3=\{0\}\times D^{4}_1\cup \{0\}\times D^{4}_3.
\end{equation*}
\end{construction}
\begin{theorem}\label{theorem_SWEDF}
Let $\cS=\{S_1,S_2,S_3\}$ be the family defined in Construction A. If $k$ is odd,
then $\cS$ is an optimal $(n=2q,m=3,(2,2k,2k),a=4k+2,\lambda=2k+1)$-BSWEDF.
\end{theorem}
Before the proof we list a well-known result about $D^{2}_0$ and $D^{2}_1$.
\begin{lemma}\label{lemma_D_4}
If $k$ is odd, then the family $\{D^{2}_0,
D^{2}_1\}$ satisfies
\begin{equation*}
D\left(D^{2}_0,D^{2}_1\right)\cup D\left(D^{2}_1,D^{2}_0\right)=2k\boxtimes (\F_q\backslash\{0\})
\end{equation*}
and
\begin{equation*}
D\left(D^{4}_0,D^{4}_1\right)\cup D\left(D^{4}_0,D^{4}_3\right)\cup
D\left(D^{4}_1,D^{4}_0\right)\cup D\left(D^{4}_3,D^{4}_0\right)=k\boxtimes (\F_q\backslash\{0\}).
\end{equation*}
\end{lemma}
\begin{proof}
By \eqref{eqn_D_2} and \eqref{eqn_D_4}, we have $D^{2}_0=D^{4}_0\cup D^{4}_2=D^{4}_0\cup (-D^{4}_0)$
and $D^{2}_1=D^{4}_1\cup D^{4}_3=D^{4}_1\cup (-D^{4}_1)$, where $\alpha^{2k}=-1$.
The fact that $\{D^2_0,D^2_1\}$ is a $(q,2k,2k-1)$-DF means that each nonzero element of $\F_q$ appears exactly $2k-1$ times among the internal differences $D(D^2_0)\cup D(D^2_1)$. Since each nonzero element of $\F_q$ occurs exactly $q-2=4k-1$ times as a difference of two distinct nonzero elements of $\F_q$, it occurs exactly $(4k-1)-(2k-1)=2k$ times among the external differences, that is,
\begin{equation*}
D\left(D^{2}_0,D^{2}_1\right)\cup D\left(D^{2}_1,D^{2}_0\right)=2k\boxtimes (\F_q\backslash\{0\}).
\end{equation*}
The preceding equality can be rewritten as
\begin{equation*}
\begin{split}
2k\boxtimes (\F_{q}\backslash\{0\})=&D\left(D^{2}_0,D^{2}_1\right)\cup D\left(D^{2}_1,D^{2}_0\right)\\
=&D\left(D^{4}_0\cup (-D^{4}_0),D^{4}_1\cup D^{4}_3\right)\cup D\left(D^{4}_1\cup D^{4}_3,D^{4}_0\cup (-D^{4}_0)\right)\\
=&2\boxtimes \left(D\left(D^{4}_0,D^{4}_1\right)\cup D\left(D^{4}_0,D^{4}_3\right)\cup
D\left(D^{4}_1,D^{4}_0\right)\cup D\left(D^{4}_3,D^{4}_0\right)\right),
\end{split}
\end{equation*}
where for the last equality we use the facts $D(-D^{4}_0,D^{4}_1\cup D^{4}_3)=D(-(D^{4}_1\cup D^{4}_3),D^{4}_0)=D(D^{4}_3\cup D^{4}_1,D^{4}_0)$
and $D\left(D^{4}_1\cup D^{4}_3, -D^{4}_0\right)=D\left(D^4_0,-(D^{4}_1\cup D^{4}_3)\right)=D\left(D^4_0,D^{4}_3\cup D^{4}_1\right)$.
This completes the proof.
\end{proof}
{\textit{Proof of Theorem \ref{theorem_SWEDF}:}}
By Definition \ref{def_BSWEDF}, in this case, $\widetilde{k}=\text{lcm}(2k,2)=2k$,
$\widetilde{S}_1=k\boxtimes \{(0,0),(1,0)\}$,
$\widetilde{S}_2=S_2$, and $\widetilde{S}_3=S_3$.
Thus, $D(S_2,\widetilde{S}_3)=D(S_2,{S_3})$ and
$D(S_3,\widetilde{S}_2)=D(S_3,S_2)$. Recall that $S_2=\{0\}\times D^4_0\cup \{1\}\times(-D^4_0)$,
which implies
\begin{equation}\label{eqn_D_0_D_1}
\begin{split}
&D(S_2,\widetilde{S}_3)\cup D(S_3,\widetilde{S}_2)\\
=&D(\{0\}\times D^4_0\cup \{1\}
\times(-D^4_0),\{0\}\times D^4_1 \cup \{0\}\times D^4_3)\\
&\cup D(\{0\}\times D^4_1\cup \{0\}\times D^4_3,\{0\}\times D^4_0\cup \{1\}\times(-D^4_0))\\
=&\bigcup_{i=0,1}\{i\}\times \left(D\left(D^{4}_0,D^{4}_1\right)\cup D\left(D^{4}_0,D^{4}_3\right)\cup
D\left(D^{4}_1,D^{4}_0\right)\cup D\left(D^{4}_3,D^{4}_0\right)\right)\\
=&k\boxtimes\left(\Z_2\times (\F_{q}\backslash\{0\})\right),
\end{split}
\end{equation}
where we use the fact $D^4_1=-D^4_3$ and the last equality holds by Lemma \ref{lemma_D_4}.
By the fact $\bigcup_{0\leq i\leq 3}D^4_i=\F_q\backslash\{0\}$, we have
\begin{equation*}
\begin{split}
D(S_1,\widetilde{S}_2)\cup D(S_2,\widetilde{S}_1)
=&\{0\}\times D^4_2 \cup \{1\}\times D^4_0 \cup \{1\}\times D^4_2 \cup \{0\}\times D^4_0 \\
&\cup k\boxtimes\left( \{0\}\times D^4_2\cup \{1\}\times D^4_0 \cup \{1\}\times D^4_2\cup \{0\}\times D^4_0\right)\\
=&(k+1)\boxtimes\left(\Z_2\times D^2_{0}\right)
\end{split}
\end{equation*}
and
\begin{equation*}
\begin{split}
D(S_1,\widetilde{S}_3)\cup D(S_3,\widetilde{S}_1)
=&\{0\}\times D^2_1\cup \{1\}\times D^2_1\cup k\boxtimes\left( \{0\}\times D^2_1\cup \{1\}\times D^2_1\right)\\
=&(k+1)\boxtimes\left(\Z_2\times D^2_{1}\right),
\end{split}
\end{equation*}
where we use the facts $D^2_i=D^4_i\cup D^4_{i+2}$ and $D^4_i=-D^4_{i+2}$ for $i=0,1$.
The above two equalities imply that
\begin{equation}\label{eqn_S_1_S_2}
\bigcup_{i=2,3} \left(D(S_1,\widetilde{S}_i)\cup D(S_i,\widetilde{S}_1)\right)=(k+1)\boxtimes\left(\Z_2\times(\F_q\backslash \{0\})\right).
\end{equation}
Therefore, by \eqref{eqn_D_0_D_1} and \eqref{eqn_S_1_S_2},
$$\bigcup_{1\leq i\ne j\leq 3}D(S_i,\widetilde{S}_j)=(2k+1)\boxtimes
\left(\Z_2\times(\F_q\backslash \{0\})\right)\subseteq (2k+1)\boxtimes \left((\Z_2\times\F_q)\backslash \{(0,0)\}\right),$$
i.e., $\cS=\{S_1,S_2,S_3\}$ is an $(n=2q,m=3,(2,2k,2k),a=4k+2,\lambda=2k+1)$-BSWEDF.
By Lemma \ref{lemma_bound_BSWEDF}, we have $$\lambda\geq \left\lceil\frac{\widetilde{k}a(m-1)}{n-1}\right\rceil=\left\lceil\frac{2k(4k+2)2}{2q-1}\right\rceil=\left\lceil\frac{2k(8k+1)+6k}{8k+1}\right\rceil=2k+1.$$
Thus, $\cS$ is an optimal $(n=2q,m=3,(2,2k,2k),a=4k+2,\lambda=2k+1)$-BSWEDF.
\qed
It is easily seen from the proof of Theorem \ref{theorem_SWEDF} that the above BSWEDFs are not EDFs, or GSEDFs, or PEDFs.
\begin{example}
Let $n=2q=26$. By Construction A, the family of sets $\cS=\{S_1,S_2,S_3\}$ over $\Z_{26}$
can be listed as
\begin{equation*}
S_1=\{0,13\},\,\, S_2=\{14,16,22,17,25,23\},\,\, \text{and}\,\, S_3=\{2,6,18,8,24,20\}.
\end{equation*}
It is easy to check that
\begin{equation*}
\bigcup_{1\leq i\ne j\leq 3}D(S_i,\widetilde{S}_j)=7\boxtimes (\Z_{26}\backslash\{0,13\}),
\end{equation*}
which means that $\cS$ is an optimal $(26,3,(2,6,6),14,7)$-BSWEDF.
\end{example}
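The stated multiset identity can also be checked mechanically. The short Python sketch below (our illustration, not part of the construction) accumulates $\bigcup_{1\leq i\ne j\leq 3}D(S_i,\widetilde{S}_j)$, implementing the $\boxtimes$-weighting by counting each difference into $\widetilde{S}_j$ with multiplicity $\widetilde{k}/|S_j|$; the same check adapts directly to the examples following Constructions B and C.
\begin{verbatim}
from collections import Counter

n, k_tilde = 26, 6                      # k_tilde = lcm(2, 6, 6)
S = {1: [0, 13],
     2: [14, 16, 22, 17, 25, 23],
     3: [2, 6, 18, 8, 24, 20]}

diffs = Counter()
for i, Si in S.items():
    for j, Sj in S.items():
        if i != j:
            w = k_tilde // len(Sj)      # repetitions of S_j in its tilde
            for a in Si:
                for b in Sj:
                    diffs[(a - b) % n] += w

# every element of Z_26 outside {0, 13} occurs exactly 7 times
assert all(diffs[d] == 7 for d in range(1, n) if d != 13)
assert diffs[0] == 0 and diffs[13] == 0
\end{verbatim}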
Let $n_1=2k+1$ and $\{\{0\},E_1,E_2\}$ be an $(n_1,k,k-1)$-PDF over
an Abelian group $G$ of order $n_1$.
Such PDFs exist, for example, when $n_1$ is a prime power, in which case one can take $E_1=D^2_0$ and $E_2=D^2_1$.
Based on
$\{\{0\},E_1,E_2\}$ we can construct a BSWEDF as follows.
\begin{construction}
Let $\cW=\{W_1,W_2,W_3\}$ be the family of disjoint subsets of $\Z_2\times G$,
defined as $W_1=\{(1,0)\}$, $W_2=\{0\}\times E_1$, and $W_3=\{0\}\times E_2$.
\end{construction}
\begin{theorem}\label{theorem_cons_B}
The family $\cW=\{W_1,W_2,W_3\}$ generated by Construction B is an
optimal $(n=2n_1,3,(1,k,k),2k+1,k+1)$-BSWEDF.
\end{theorem}
\begin{proof}
The fact that $\{\{0\},E_1,E_2\}$ is an $(n_1=2k+1,k,k-1)$-PDF means that
$D(E_1,E_2)\cup D(E_2,E_1)=k\boxtimes (G\backslash\{0\})$.
Thus, we have
\begin{equation*}
D(W_2,\widetilde{W}_3)\cup D(W_3,\widetilde{W}_2)=D(W_2,W_3)\cup D(W_3,W_2)=k \boxtimes (\{0\}\times(G\backslash\{0\})),
\end{equation*}
where we apply the fact $\widetilde{k}=\text{lcm}(1,k,k)=k=|W_2|=|W_3|$.
Note that
\begin{equation*}
\begin{split}
&D(W_1,\widetilde{W}_2)\cup D(W_1,\widetilde{W}_3)\cup D(W_3,\widetilde{W}_1)\cup D(W_2,\widetilde{W}_1)\\
=&\{1\}\times (-E_1)\cup \{1\}\times (-E_2)\cup D(\{0\}\times E_1,k\boxtimes \{(1,0)\})\cup D(\{0\}\times E_2,k\boxtimes \{(1,0)\})\\
=&(k+1)\boxtimes (\{1\}\times (G\backslash \{0\})).
\end{split}
\end{equation*}
Based on the above two equalities,
\begin{equation*}
\bigcup_{1\leq i\ne j\leq 3} D(W_i,\widetilde{W}_j)\subseteq (k+1)\boxtimes((\Z_2\times G)\backslash\{(0,0)\} ),
\end{equation*}
i.e., $\cW$ is an $(n=2n_1,m=3,(1,k,k),a=2k+1,\lambda=k+1)$-BSWEDF.
By Lemma \ref{lemma_bound_BSWEDF}, we have $$\lambda\geq \left\lceil\frac{\widetilde{k}a(m-1)}{n-1}\right\rceil=\left\lceil\frac{k(2k+1)2}{2n_1-1}\right\rceil=\left\lceil\frac{k(4k+1)+k}{4k+1}\right\rceil=k+1.$$
Thus, $\cW$ is an optimal $(2n_1=4k+2,3,(1,k,k),2k+1,k+1)$-BSWEDF.
\end{proof}
It is easily seen from the proof of Theorem \ref{theorem_cons_B} that the above BSWEDFs are not EDFs, or GSEDFs, or PEDFs.
\begin{example}
Let $n=2n_1=22$. By Construction B, the family of sets $\cW=\{W_1,W_2,W_3\}$ over $\Z_{22}$
can be listed as
\begin{equation*}
W_1=\{11\},\,\, W_2=\{12, 4, 16, 20, 14\},\,\, \text{and}\,\, W_3=\{2, 8, 10, 18, 6\}.
\end{equation*}
It is easy to check that
\begin{equation*}
\bigcup_{1\leq i\ne j\leq 3}D(W_i,\widetilde{W}_j)\subseteq 6\boxtimes (\Z_{22}\backslash\{0\}),
\end{equation*}
which means that $\cW$ is an optimal $(22,3,(1,5,5),11,6)$-BSWEDF.
\end{example}
\begin{construction}
Let $q=4k+1$ be a prime power and let $\cU=\{U_1,U_2,U_3,U_4\}$ be the family of disjoint subsets of $\Z_3\times\F_q$,
defined as $U_1=\{(1,0)\}$, $U_2=\{(2,0)\}$, $U_3=\{0\}\times D^2_0$, and
$U_4=\{0\}\times D^2_1$.
\end{construction}
\begin{theorem}\label{theorem_cons_C}
The family $\cU=\{U_1,U_2,U_3,U_4\}$ in Construction C is an
optimal $(3q=12k+3,4,(1,1,2k,2k),4k+2,2k+1)$-BSWEDF.
\end{theorem}
\begin{proof}
Note that $\widetilde{k}=\text{lcm}(1,1,2k,2k)=2k$, which implies $\widetilde{U}_3=U_3$
and $\widetilde{U}_4=U_4$.
Lemma \ref{lemma_D_4} shows that
$D(D^{2}_0,D^{2}_1)\cup D(D^{2}_1,D^{2}_0)=2k\boxtimes (\F_q\backslash\{0\})$.
Thus, we have
\begin{equation*}
D(U_3,\widetilde{U}_4)\cup D(U_4,\widetilde{U}_3)=D(U_3,U_4)\cup D(U_4,U_3)=2k \boxtimes (\{0\}\times(\F_q\backslash\{0\})).
\end{equation*}
Recall that
\begin{equation*}
\begin{split}
&D(U_1,\widetilde{U}_3)\cup D(U_1,\widetilde{U}_4)\cup D(U_3,\widetilde{U}_1)\cup D(U_4,\widetilde{U}_1)\\
=&(\{1\}\times D^2_0)\cup (\{1\}\times D^2_1)\cup D(\{0\}\times D^2_0,2k\boxtimes \{(1,0)\})\cup D(\{0\}\times D^2_1,2k\boxtimes \{(1,0)\})\\
=&(\{1\}\times (\F_q\backslash \{0\}))\cup 2k\boxtimes (\{2\}\times (\F_q\backslash \{0\}))
\end{split}
\end{equation*}
and
\begin{equation*}
\begin{split}
&D(U_2,\widetilde{U}_3)\cup D(U_2,\widetilde{U}_4)\cup D(U_3,\widetilde{U}_2)\cup D(U_4,\widetilde{U}_2)\\
=&\{2\}\times D^2_0\cup \{2\}\times D^2_1\cup D(\{0\}\times D^2_0,2k\boxtimes \{(2,0)\})\cup D(\{0\}\times D^2_1,2k\boxtimes \{(2,0)\})\\
=&(\{2\}\times (\F_q\backslash \{0\}))\cup 2k\boxtimes (\{1\}\times (\F_q\backslash \{0\})).
\end{split}
\end{equation*}
For the differences between $U_1$ and $U_2$, we have
$$D(U_1,\widetilde{U}_2)\cup D(U_2,\widetilde{U}_1)=2k\boxtimes\{(1,0),(2,0)\}.$$
Therefore, the above four equalities mean that
\begin{equation*}
\begin{split}
&\bigcup_{1\leq i\ne j\leq 4} D(U_i,\widetilde{U}_j)\\
=&(2k\boxtimes\{(1,0),(2,0)\})\cup (2k\boxtimes \{0\}\times (\F_q\backslash\{0\}))\cup ((2k+1)\boxtimes \{1,2\}\times (\F_q\backslash\{0\}))\\
\subseteq& (2k+1)\boxtimes((\Z_3\times\F_q)\backslash\{(0,0)\} ),
\end{split}
\end{equation*}
i.e., $\cU$ is an $(n=3q,m=4,(1,1,2k,2k),a=4k+2,\lambda=2k+1)$-BSWEDF.
By Lemma \ref{lemma_bound_BSWEDF}, we have $$\lambda\geq \left\lceil\frac{\widetilde{k}a(m-1)}{n-1}\right\rceil=\left\lceil\frac{2k(4k+2)3}{3q-1}\right\rceil=\left\lceil\frac{2k(12k+2)+8k}{12k+2}\right\rceil=2k+1.$$
Thus, $\cU$ is an optimal $(3q,4,(1,1,2k,2k),4k+2,2k+1)$-BSWEDF.
\end{proof}
It is easily seen from the proof of Theorem \ref{theorem_cons_C} that the above BSWEDFs are not EDFs, or GSEDFs, or PEDFs.
\begin{example}
Let $n=3q=39$. By Construction C, the family of sets $\cU=\{U_1,U_2,U_3,U_4\}$ over $\Z_{39}$
can be listed as
\begin{equation*}
U_1=\{13\},\,\, U_2=\{26\},\,\, U_3=\{27, 30, 3, 12, 9, 36\},\,\,\text{and}\,\, U_4=\{15, 21, 6, 24, 18, 33\}.
\end{equation*}
It is easy to check that
\begin{equation*}
\bigcup_{1\leq i\ne j\leq 4}D(U_i,\widetilde{U}_j)\subseteq 7\boxtimes (\Z_{39}\backslash\{0\}),
\end{equation*}
which means that $\cU$ is an optimal $(39,4,(1,1,6,6),14,7)$-BSWEDF.
\end{example}
\subsection{A construction of cyclic SWEDFs}
In this subsection, we are going to construct cyclic SWEDFs, which are not
regular EDFs, or GSEDFs, or PEDFs. A \textit{cyclic} SWEDF means an SWEDF over a cyclic additive group.
A well-studied class of PDFs $\cR=\{R_1,R_2,\cdots,R_l\}$ consists of those
with parameters $(n=(k-1)(tk+1),(k,\cdots,k,k-1),k-1)$
over $\Z_{n}=\Z_{k-1}\times\Z_{tk+1}$, where ${\rm gcd}(k-1,tk+1)=1$,
$R_l=\Z_{k-1}\times\{0\}$, and $l=t(k-1)+1$.
In Table \ref{tab PDF}, we list such PDFs which can be applied in the following
construction.
\begin{table}
\centering
\begin{threeparttable}[b]
\caption{Some known PDFs with parameters $(n,\mathcal{W}=(k^{\frac{n-k+1}{k}},(k-1)^1),k-1)$\label{tab PDF}} \center
\begin{tabular}{|c|c|c|}
\hline
Parameters & Constraints &Ref.\\
\hline
\hline $\left(2v,\,(3^{\frac{2v-2}{3}},2^1),\,2 \right)$, & $\begin{array}{c}v=p_1^{m_1}p_2^{m_2}\cdots p_r^{m_r},\,2<p_1<p_2<\cdots<p_r,\\
{\rm and }\,\,3|(p_{t}-1)\,{\rm for }\, 1\leq t\leq r\end{array}$&{\cite{BYW2010}}\\
\hline $\left(sv,\,((s+1)^\frac{sv-s}{s+1},s^1),\,s \right)$& $\begin{array}{c}v=p_1^{m_1}p_2^{m_2}\cdots p_r^{m_r},\,2<p_1<p_2<\cdots<p_r,\\
{\rm and }\,\,2(s+1)|(p_{t}-1)\,{\rm for }\, 1\leq t\leq r, s=4,5 \end{array}$&{\cite{BYW2010}}\\
\hline $\left(6v,\,(7^\frac{6v-6}{7},6^1),\,6 \right)$& $\begin{array}{c}v=p_1^{m_1}p_2^{m_2}\cdots p_r^{m_r},\,2<p_1<p_2<\cdots<p_r,\\
{\rm and }\,\,28|(p_{t}-1)\,{\rm for }\, 1\leq t\leq r \end{array}$&{\cite{BYW2010}}\\
\hline $\left(7v,\,(8^\frac{7v-7}{8},7^1),\,7 \right)$& $\begin{array}{c}v=p_1^{m_1}p_2^{m_2}\cdots p_r^{m_r},\,2<p_1<p_2<\cdots<p_r,\\
{\rm and }\,\,8|(p_{t}-1)\,{\rm for }\, 1\leq t\leq r, v\not\in \{17, 89\} \end{array}$&{\cite{BYW2010}}\\
\hline $(q-1,((\frac{q}{d})^{d-1},(\frac{q}{d}-1)^1),\frac{q-d}{d})$&$d|q$, $\text{gcd}(\frac{q}{d}-1,(q-1)/(\frac{q}{d}-1))=1$&{\cite{D2008}}\\
\hline
\end{tabular}
\begin{tablenotes}
\item[] {Herein $p_i$'s are primes; $t$, $s$, $r$, and the $m_i$'s are positive integers; $q$ is a prime power.}
\end{tablenotes}
\end{threeparttable}
\end{table}
\begin{construction}\label{cons_SWEDF}
Let $\cV=\{V_1,V_2,\cdots,V_{t(k-1)+k-2}\}$ be the family of disjoint subsets of $\Z_n$,
defined as
$$V_i=R_i \quad \text{for }1\leq i\leq t(k-1),$$
\begin{equation*}
V_{t(k-1)+j}=\{(j,0)\}\text{ for }1\leq j\leq k-2.
\end{equation*}
\end{construction}
\begin{theorem}\label{theorem_cons_SWEDF}
Let $\cV$ be the family in Construction D. Then $\cV$
is a cyclic $(n,t(k-1)+k-2,K=(k,\cdots,k,1,1,\cdots,1),n-1,(t+1)k^2-(t+3)k)$-SWEDF,
where the element $1$ appears $k-2$ times and the element $k$ appears $t(k-1)$
times in $K$.
\end{theorem}
\begin{proof}
Since $\cR$ is an $(n=(k-1)(tk+1),(k,\cdots,k,k-1),k-1)$ PDF, we can conclude that
\begin{equation*}
\bigcup_{1\leq i\ne j\leq l} D(R_i,R_j)=(n-k+1)\boxtimes ((\Z_{k-1}\times \Z_{tk+1})\backslash\{(0,0)\}).
\end{equation*}
Recall that $R_l=\Z_{k-1}\times\{0\}$, which means
\begin{equation*}
\bigcup_{1\leq i\leq l-1}(D(R_i,R_l)\cup D(R_l,R_i))=(2k-2)\boxtimes (\Z_{k-1}\times(\Z_{tk+1}\backslash\{0\})).
\end{equation*}
Thus, by Construction D, we have
\begin{equation}\label{eqn_V_diff_1}
\begin{split}
&\bigcup_{1\leq i\ne j\leq l-1}D(V_i,\widetilde{V}_j)=\bigcup_{1\leq i\ne j\leq l-1}D(V_i,V_j)=
\bigcup_{1\leq i\ne j\leq l-1}D(R_i,R_j)\\
=&\left(\bigcup_{1\leq i\ne j\leq l} D(R_i,R_j)\right)\backslash \left(\bigcup_{1\leq i\leq l-1}(D(R_i,R_l)\cup D(R_l,R_i))\right)\\
=&\left((n-k+1)\boxtimes((\Z_{k-1}\backslash\{0\})\times \{0\})\right)\cup \left((n-3k+3)\boxtimes (\Z_{k-1}\times(\Z_{tk+1}\backslash\{0\}))\right),
\end{split}
\end{equation}
where we use the fact $\widetilde{k}=k$.
Note that for any $1\leq j\leq k-2$,
\begin{equation*}
\begin{split}
&\bigcup_{1\leq i\leq l-1}(D(V_i,\widetilde{V}_{l-1+j})\cup D(V_{l-1+j},\widetilde{V}_i))\\
=&\bigcup_{1\leq i\leq l-1}(D(R_i,k\boxtimes \{(j,0)\})\cup D(\{(j,0)\},R_i))=
(k+1)\boxtimes (\Z_{k-1}\times(\Z_{tk+1}\backslash\{0\})).
\end{split}
\end{equation*}
Thus, we have
\begin{equation}\label{eqn_V_diff_2}
\bigcup_{1\leq j\leq k-2}\bigcup_{1\leq i\leq l-1}(D(V_i,\widetilde{V}_{l-1+j})\cup D(V_{l-1+j},\widetilde{V}_i))=
((k+1)(k-2))\boxtimes (\Z_{k-1}\times(\Z_{tk+1}\backslash\{0\})).
\end{equation}
For the last part of external differences, we have
\begin{equation}\label{eqn_V_diff_3}
\begin{split}
\bigcup_{1\leq i\ne j\leq k-2}D(V_{l-1+i},\widetilde{V}_{l-1+j})
=&\bigcup_{1\leq i\ne j\leq k-2}D(\{(i,0)\},k\boxtimes\{(j,0)\})\\
=&k\boxtimes\left(\bigcup_{1\leq i\ne j\leq k-2}D(\{(i,0)\},\{(j,0)\})\right)\\
=&k(k-3)\boxtimes ((\Z_{k-1}\backslash\{0\})\times \{0\}).
\end{split}
\end{equation}
Combining \eqref{eqn_V_diff_1}, \eqref{eqn_V_diff_2} and \eqref{eqn_V_diff_3},
\begin{equation*}
\begin{split}
&\bigcup_{1\leq i\ne j\leq l+k-3}D(V_i,\widetilde{V}_j)\\
=&\left(\bigcup_{1\leq i\ne j\leq l-1}D(V_i,\widetilde{V}_j)\right)\cup
\left(\bigcup_{1\leq j\leq k-2}\bigcup_{1\leq i\leq l-1}(D(V_i,\widetilde{V}_{l-1+j})\cup D(V_{l-1+j},\widetilde{V}_i))\right)\cup
\left(\bigcup_{1\leq i\ne j\leq k-2}D(V_{l-1+i},\widetilde{V}_{l-1+j})\right)\\
=&\left((n-k+1+k(k-3))\boxtimes (\Z_{k-1}\backslash\{0\})\times \{0\}\right)\cup
\left((n-3k+3+(k+1)(k-2))\boxtimes (\Z_{k-1}\times(\Z_{tk+1}\backslash\{0\}))\right)\\
=&((t+1)k^2-tk-3k)\boxtimes ((\Z_{k-1}\times \Z_{tk+1})\backslash\{(0,0)\}),
\end{split}
\end{equation*}
where $n=(k-1)(tk+1)$.
Therefore, $\cV$ is a cyclic $(n,t(k-1)+k-2,(k,k,\cdots,k,1,1,\cdots,1),n-1,(t+1)k^2-(t+3)k)$-SWEDF,
where the element $1$ occurs $k-2$ times in $K$ and the element $k$ appears $t(k-1)$
times in $K$. This completes the proof.
\end{proof}
It is easily seen from the proof of Theorem \ref{theorem_cons_SWEDF} that the above SWEDFs are not regular EDFs, or GSEDFs, or PEDFs.
In \cite{HP}, Huczynska and Paterson introduced some constructions of SWEDFs (or equivalently, RWEDFs)
with the so-called bimodal property.
\begin{definition}[\cite{HP}]\label{def_bimodal}
Let $G$ be a finite Abelian group and $\cB$ be a collection $B_1, B_2,\dots, B_m$
of disjoint subsets of $G$ with sizes $k_1, k_2,\dots,k_m$, respectively.
We say that $\cB$ has the \textit{bimodal property} if for each $\delta\in G\backslash\{0\}$
we have $N_i(\delta)\in\{0, k_i\}$ for $1\leq i\leq m$, where
$N_i(\delta)$
is defined in Definition \ref{def_RWEDF}.
\end{definition}
The SWEDF generated by Construction D does not have the bimodal property.
Let $\cV$ be the SWEDF generated by Construction D. For any $v\in V_i$ with
$|V_i|=k$,
we have $0\in D(V_i,\{v\})$ and $|D(V_i,\{v\})|=|V_i|=k$. However, by Construction D,
$0$ is not an element of $V_j$ for any $1\leq j\leq l+k-3$. Thus, the number
of solutions of $a-b=v$ with $a\in V_i$ and $b\in V_j$ for some $1\leq j\leq l+k-3$
with $j\ne i$ is at most $k-1$, since $\bigcup_{1\leq j\leq l+k-3}V_j=\Z_n\backslash\{0\}$; that is,
$N_i(v)\leq k-1$.
Next, we show that there exists $V_i$ with $|V_i|=k$ satisfying $N_i(v)\ne 0$.
If $a-b\ne v$ for all $a\in V_i$ and $b\in V_j$ for $1\leq j\leq l+k-3$
and $j\ne i$, then $a\in V_i$ implies $(a+\langle v\rangle)\backslash\{0\}\subseteq V_i$.
That is, $V_i$ is a union of cosets of $\langle v\rangle$, possibly with the
element $0$ removed, and hence $k\geq|\langle v\rangle|-1$.
This is impossible since there are elements $v$ with
$|\langle v\rangle|>k+1$ in $\Z_n\backslash\{0\}$. Thus, the SWEDF generated by
Construction D is not bimodal. For more details about SWEDFs (or equivalently, RWEDFs) with bimodal property
the reader may refer to \cite{HP,HP2019}.
Compared with the constructions in \cite{HP}, Construction \ref{cons_SWEDF}
can generate RWEDFs with flexible parameters without the bimodal property. To the
best of our knowledge, this is the first class of RWEDFs without the bimodal property,
which are not regular EDFs, or GSEDFs, or PEDFs.
\begin{corollary}
Let $\cV$ be the family in Construction D. Then $\cV$
is an $(n,t(k-1)+k-2,K=(k,\cdots,k,1,1,\cdots,1),n-1,(t+1)k-t-3)$-RWEDF without
the bimodal property,
where the element $1$ appears $k-2$ times and the element $k$ appears $t(k-1)$
times in $K$.
\end{corollary}
\begin{example}
Let $G=(\Z_{15},+)$ and $\cR=\{R_1=\{6,9,2,8\},R_2=\{11,14,7,13\},R_3=\{1,4,12,3\}, R_4=\{0,5,10\}\}$.
It is easy to check
that $\cR$ is a PDF with parameters $(15,(4,4,4,3),3)$. By Construction D, we generate a family of subsets of $\Z_{15}$
as
$\cV=\{V_1=\{6,9,2,8\},V_2=\{11,14,7,13\},V_3=\{1,4,12,3\},V_4=\{5\},V_5=\{10\}\}$. It is easy to check
that
$$\bigcup_{1\leq i\ne j\leq 5}D(V_i,\widetilde{V}_j)=16\boxtimes (\Z_{15}\backslash\{0\}),$$
i.e., $\cV$ is a $(15,5,(4,4,4,1,1),14,16)$-SWEDF (or $(15,5,(4,4,4,1,1),14,4)$-RWEDF).
Note that $N_3(6)=3\not\in\{0,4\}$,
which means the SWEDF does not have the bimodal property by Definition \ref{def_bimodal}.
\end{example}
\section{Concluding Remarks}\label{sec-conclusion}
In this paper, we first characterized weak algebraic manipulation detection
codes via bounded standard weighted external difference families (BSWEDFs). As
a byproduct, we improved the known lower bound for weak algebraic manipulation
detection codes. To generate optimal weak AMD codes, constructions for BSWEDFs, especially, a
construction of SWEDFs without the bimodal property, were introduced.
Combinatorial structures, e.g., BSWEDFs, SWEDFs, strong external difference
families (SEDFs), partitioned external difference families (PEDFs), play a key
role in the constructions of weak algebraic manipulation detection (AMD) codes.
There are some known results for the existence of SEDFs. However, the
existence of BSWEDFs, SWEDFs, and PEDFs is generally open.
Finding more explicit constructions for such combinatorial structures is not only
an interesting subject for AMD codes but also an interesting problem in its
own right, which is left for future research.
\section*{Acknowledgements}
The authors would like to thank Prof. Marco Buratti for the helpful discussion
about difference families. This research is supported by JSPS
Grant-in-Aid for Scientific Research (B) under Grant No. 18H01133. | {"config": "arxiv", "file": "1905.01412.tex"} |
TITLE: If $p:A\to B$ and $q:C\to D$ are quotient maps, $B$ and $C$ locally compact, separable spaces, is $p\times q$ a quotient map?
QUESTION [2 upvotes]: It is a true or false question from an old test.
At first I tried some counterexamples, using the line with two origins or taking $B$ as a quotient space of the real line by some non-open subset, since I know the result is true if $B$ and $C$ are also Hausdorff spaces. It did not work.
I then tried to prove it instead, showing that $(p\times q)^{-1}(U)$ open implies $U$ open, but I can't work out how to use both hypotheses, or any other way to reach the answer.
Thanks for the help!
REPLY [0 votes]: It also works for locally compact spaces which are not Hausdorff. If $Z$ is a locally compact space and $p:X\to Y$ is a quotient map, then $p\times\mathbf 1_Z:X\times Z\to Y\times Z$ is a quotient map. One way to show this is by verifying the universal property of quotient maps, and here you have to use the adjunction $\mathbf{Top}(Y\times Z,W)\cong\mathbf{Top}\left(Y,W^Z\right)$, which holds when $Z$ is locally compact. One can also show directly that $p\times\mathbf 1_Z$ is a quotient map; this is done by Ronald Brown in his book Topology and Groupoids on page 109. | {"set_name": "stack_exchange", "score": 2, "question_id": 1118184}
\section{Systematic studies}
\subsection{Continuous Time Models}
\subsubsection{Example (Burgers' Equation)}
As an example, let us consider the Burgers' equation. This equation arises in various areas of applied mathematics, including fluid mechanics, nonlinear acoustics, gas dynamics, and traffic flow \cite{basdevant1986spectral}. It is a fundamental partial differential equation and can be derived from the Navier-Stokes equations for the velocity field by dropping the pressure gradient term.
For small values of the viscosity parameter, the Burgers' equation can lead to shock formation that is notoriously hard to resolve by classical numerical methods. In one space dimension the Burgers' equation along with Dirichlet boundary conditions reads as
\begin{eqnarray}\label{eq:Burgers}
&& u_t + u u_x - (0.01/\pi) u_{xx} = 0,\ \ \ x \in [-1,1],\ \ \ t \in [0,1],\\
&& u(0,x) = -\sin(\pi x),\nonumber\\
&& u(t,-1) = u(t,1) = 0.\nonumber
\end{eqnarray}
Let us define $f(t,x)$ to be given by
\[
f := u_t + u u_x - (0.01/\pi) u_{xx},
\]
and proceed by approximating $u(t,x)$ by a deep neural network. To highlight the simplicity in implementing this idea we have included a Python code snippet using TensorFlow \cite{abadi2016tensorflow}, currently one of the most popular and well-documented open-source libraries for machine learning computations. To this end, $u(t,x)$ can simply be defined as
\begin{lstlisting}[language=Python]
import numpy as np
import tensorflow as tf  # TensorFlow 1.x

def neural_net(H, weights, biases):
    # weights/biases: lists of per-layer tf.Variables (see below)
    for l in range(len(weights) - 1):
        W, b = weights[l], biases[l]
        H = tf.tanh(tf.add(tf.matmul(H, W), b))  # tanh(HW + b)
    W, b = weights[-1], biases[-1]
    return tf.add(tf.matmul(H, W), b)            # linear output: HW + b

def u(t, x):
    return neural_net(tf.concat([t, x], 1), weights, biases)
\end{lstlisting}
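Here, \texttt{weights} and \texttt{biases} are lists of TensorFlow variables, one pair per layer, assumed to be defined in the enclosing scope. One plausible way to construct them is the following sketch, where the Xavier-type initialization and the specific layer sizes are our assumptions rather than requirements of the method (reusing the imports above).
\begin{lstlisting}[language=Python]
layers = [2] + 8*[20] + [1]   # (t,x) input, 8 hidden layers, scalar output
weights, biases = [], []
for l in range(len(layers) - 1):
    std = np.sqrt(2.0/(layers[l] + layers[l+1]))     # Xavier-type scale
    W0 = std*np.random.randn(layers[l], layers[l+1])
    weights.append(tf.Variable(W0, dtype=tf.float32))
    biases.append(tf.Variable(np.zeros((1, layers[l+1])), dtype=tf.float32))
\end{lstlisting}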
Correspondingly, the \emph{physics informed neural network} $f(t,x)$ takes the form
\begin{lstlisting}[language=Python]
def f(t, x):
    # PDE residual f = u_t + u u_x - (0.01/pi) u_xx, built via automatic
    # differentiation; note np.pi (TensorFlow has no pi constant), and the
    # local result is named u_val to avoid shadowing the function u
    u_val = u(t, x)
    u_t = tf.gradients(u_val, t)[0]
    u_x = tf.gradients(u_val, x)[0]
    u_xx = tf.gradients(u_x, x)[0]
    return u_t + u_val*u_x - (0.01/np.pi)*u_xx
\end{lstlisting}
The shared parameters between the neural networks $u(t,x)$ and $f(t,x)$ can be learned by minimizing the mean squared error loss
\begin{equation}\label{eq:MSE_Burgers_CT_inference}
MSE = MSE_u + MSE_f,
\end{equation}
where
\[
MSE_u = \frac{1}{N_u}\sum_{i=1}^{N_u} |u(t^i_u,x_u^i) - u^i|^2,
\]
and
\[
MSE_f = \frac{1}{N_f}\sum_{i=1}^{N_f}|f(t_f^i,x_f^i)|^2.
\]
Here, $\{t_u^i, x_u^i, u^i\}_{i=1}^{N_u}$ denote the initial and boundary training data on $u(t,x)$ and $\{t_f^i, x_f^i\}_{i=1}^{N_f}$ specify the collocation points for $f(t,x)$. The loss $MSE_u$ corresponds to the initial and boundary data, while $MSE_f$ enforces the structure imposed by equation \eqref{eq:Burgers} at a finite set of collocation points.
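In code, the two loss terms can be assembled directly from the networks defined above; the following minimal sketch feeds the data through TensorFlow placeholders (the placeholder names are ours).
\begin{lstlisting}[language=Python]
t_u = tf.placeholder(tf.float32, [None, 1])
x_u = tf.placeholder(tf.float32, [None, 1])
u_obs = tf.placeholder(tf.float32, [None, 1])   # observed u at (t_u, x_u)
t_f = tf.placeholder(tf.float32, [None, 1])
x_f = tf.placeholder(tf.float32, [None, 1])     # collocation points

MSE_u = tf.reduce_mean(tf.square(u(t_u, x_u) - u_obs))
MSE_f = tf.reduce_mean(tf.square(f(t_f, x_f)))
loss = MSE_u + MSE_f
\end{lstlisting}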
In all benchmarks considered in this work, the total number of training data $N_u$ is relatively small (a few hundred up to a few thousand points), and we chose to optimize all loss functions
using L-BFGS; a quasi-Newton, full-batch gradient-based optimization algorithm \cite{liu1989limited}. For larger data-sets a more computationally efficient mini-batch setting can be readily employed using stochastic gradient descent and its modern variants \cite{goodfellow2016deep}. Despite the fact that this procedure is only guaranteed to converge to a local minimum, our empirical evidence indicates that, if the given partial differential equation is well-posed and its solution is unique, our method is capable of achieving good prediction accuracy given a sufficiently expressive neural network architecture and a sufficient number of collocation points $N_f$.
This general observation will be quantified by specific sensitivity studies that accompany the numerical examples presented in the following.
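Concretely, in the TensorFlow 1.x setting of the snippets above, L-BFGS can be driven through the SciPy optimizer bridge; the sketch below assumes the loss of the previous snippet and uses hypothetical arrays \texttt{T\_u}, \texttt{X\_u}, \texttt{U\_obs}, \texttt{T\_f}, \texttt{X\_f} holding the training and collocation data.
\begin{lstlisting}[language=Python]
optimizer = tf.contrib.opt.ScipyOptimizerInterface(
    loss, method='L-BFGS-B',
    options={'maxiter': 50000, 'ftol': 1.0e-10})

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    feed = {t_u: T_u, x_u: X_u, u_obs: U_obs, t_f: T_f, x_f: X_f}
    optimizer.minimize(sess, feed_dict=feed)
\end{lstlisting}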
Figure~\ref{fig:Burgers_CT_inference} summarizes our results for the
data-driven solution of the Burgers equation. Specifically, given a set of $N_u = 100$ randomly distributed initial and boundary data, we learn the latent solution $u(t,x)$ by training all 3,021 parameters of a 9-layer deep neural network using the mean squared error loss of \eqref{eq:MSE_Burgers_CT_inference}. Each hidden layer contains $20$ neurons and a hyperbolic tangent activation function. In general, the neural network should be given sufficient approximation capacity in order to accommodate the anticipated complexity of $u(t,x)$. However, in this example, our choice aims to highlight the robustness of the proposed method with respect to the well known issue of over-fitting. Specifically, the term $MSE_f$ in equation \eqref{eq:MSE_Burgers_CT_inference} acts as a regularization mechanism that penalizes solutions that do not satisfy equation \eqref{eq:Burgers}. Therefore, a key property of {\em physics informed neural networks} is that they can be effectively trained using small data sets; a setting often encountered in the study of physical systems for which the cost of data acquisition may be prohibitive.
The top panel of Figure~\ref{fig:Burgers_CT_inference} shows the predicted spatio-temporal solution $u(t,x)$, along with the locations of the initial and boundary training data. We must underline that, unlike any classical numerical method for solving partial differential equations, this prediction is obtained without any sort of discretization of the spatio-temporal domain. The exact solution for this problem is analytically available \cite{basdevant1986spectral}, and the resulting prediction error is measured at $6.7 \cdot 10^{-4}$ in the relative $\mathbb{L}_2$-norm. Note that this error is about two orders of magnitude lower than the one reported in our previous work on data-driven solution of partial differential equation using Gaussian processes \cite{raissi2017numerical}. A more detailed assessment of the predicted solution is presented in the bottom panel of Figure~\ref{fig:Burgers_CT_inference}. In particular, we present a comparison between the exact and the predicted solutions at different time instants $t=0.25,0.50,0.75$.
Using only a handful of initial data, the {\em physics informed neural network} can accurately capture the intricate nonlinear behavior of the Burgers equation that leads to the development of a sharp internal layer around $t = 0.4$. The latter is notoriously hard to resolve accurately with classical numerical methods and requires a laborious spatio-temporal discretization of Eq.~\eqref{eq:Burgers}.
\begin{figure}
\includegraphics[width = 1.0\textwidth]{Burgers_CT_inference.pdf}
\caption{{\em Burgers equation:} {\it Top:} Predicted solution $u(t,x)$ along with the initial and boundary training data. In addition we are using 10,000 collocation points generated using a Latin Hypercube Sampling strategy. {\it Bottom:} Comparison of the predicted and exact solutions corresponding to the three temporal snapshots depicted by the dashed vertical lines in the top panel. The relative $\mathbb{L}_{2}$ error for this case is $6.7 \cdot 10^{-4}$, with model training taking approximately 60 seconds on one NVIDIA Titan X GPU.}
\label{fig:Burgers_CT_inference}
\end{figure}
To further analyze the performance of our method, we have performed a systematic study to quantify its predictive accuracy for different numbers of training and collocation points, as well as for different neural network architectures. In Table~\ref{tab:Burgers_CT_inference_1} we report the resulting relative $\mathbb{L}_{2}$ error for different numbers of initial and boundary training data $N_u$ and collocation points $N_f$, while keeping the 9-layer network architecture fixed. The general trend shows increased prediction accuracy as the total number of training data $N_u$ is increased, given a sufficient number of collocation points $N_f$. This observation highlights the key strength of {\em physics informed neural networks}: by encoding the structure of the underlying physical law through the collocation points $N_f$, one can obtain a more accurate and data-efficient learning algorithm.
\footnote{Note that the case $N_f = 0$ corresponds to a standard neural network model, i.e. a neural network that does not take into account the underlying governing equation.}
Finally, Table~\ref{tab:Burgers_CT_inference_2} shows the resulting relative $\mathbb{L}_{2}$ error for different numbers of hidden layers and neurons per layer, while the total number of training and collocation points is kept fixed to
$N_u = 100$ and $N_f=10000$, respectively. As expected, we observe that as the number of layers and neurons is increased (hence the capacity of the neural network to approximate more complex functions), the predictive accuracy is increased.
\begin{table}
\centering
\begin{tabular}{|l||cccccc|}
\hline
\diagbox{$N_u$}{$N_f$} & 2000 & 4000 & 6000 & 7000 & 8000 & 10000 \\ \hline\hline
20 & 2.9e-01 & 4.4e-01 & 8.9e-01 & 1.2e+00 & 9.9e-02 & 4.2e-02 \\
40 & 6.5e-02 & 1.1e-02 & 5.0e-01 & 9.6e-03 & 4.6e-01 & 7.5e-02 \\
60 & 3.6e-01 & 1.2e-02 & 1.7e-01 & 5.9e-03 & 1.9e-03 & 8.2e-03 \\
80 & 5.5e-03 & 1.0e-03 & 3.2e-03 & 7.8e-03 & 4.9e-02 & 4.5e-03 \\
100 & 6.6e-02 & 2.7e-01 & 7.2e-03 & 6.8e-04 & 2.2e-03 & 6.7e-04 \\
200 & 1.5e-01 & 2.3e-03 & 8.2e-04 & 8.9e-04 & 6.1e-04 & 4.9e-04 \\ \hline
\end{tabular}
\caption{{\em Burgers' equation:} Relative $\mathbb{L}_{2}$ error between the predicted and the exact solution $u(t,x)$ for different number of initial and boundary training data $N_u$, and different number of collocation points $N_f$. Here, the network architecture is fixed to 9 layers with 20 neurons per hidden layer.}\label{tab:Burgers_CT_inference_1}
\end{table}
\begin{table}
\centering
\begin{tabular}{|c||ccc|}
\hline
\diagbox{Layers}{Neurons} & 10 & 20 & 40 \\ \hline\hline
2 & 7.4e-02 & 5.3e-02 & 1.0e-01 \\
4 & 3.0e-03 & 9.4e-04 & 6.4e-04 \\
6 & 9.6e-03 & 1.3e-03 & 6.1e-04 \\
8 & 2.5e-03 & 9.6e-04 & 5.6e-04 \\ \hline
\end{tabular}
\caption{{\em Burgers' equation:} Relative $\mathbb{L}_{2}$ error between the predicted and the exact solution $u(t,x)$ for different number of hidden layers, and different number of neurons per layer. Here, the total number of training and collocation points is fixed to
$N_u = 100$ and $N_f=10000$, respectively.}\label{tab:Burgers_CT_inference_2}
\end{table}
\subsection{Discrete Time Models}
\subsubsection{Example (Burgers' Equation)}
To highlight the key features of the discrete time representation we revisit the problem of data-driven solution of the Burgers' equation. The nonlinear operator in equation \eqref{eq:RungeKutta_inference_rearranged} is given by
\[
\mathcal{N}[u^{n+c_j}] = u^{n+c_j} u^{n+c_j}_x - (0.01/\pi)u^{n+c_j}_{xx}
\]
and the shared parameters of the neural networks \eqref{eq:RungeKutta_PU_prior_inference} and \eqref{eq:RungeKutta_PI_prior_inference} can be learned by minimizing the sum of squared errors
\[
SSE = SSE_n + SSE_b
\]
where
\[
SSE_n = \sum_{j=1}^q \sum_{i=1}^{N_n} |u^n_j(x^{n,i}) - u^{n,i}|^2,
\]
and
\[
SSE_b = \sum_{i=1}^q \left(|u^{n+c_i}(-1)|^2 + |u^{n+c_i}(1)|^2\right) + |u^{n+1}(-1)|^2 + |u^{n+1}(1)|^2.
\]
Here, $\{x^{n,i}, u^{n,i}\}_{i=1}^{N_n}$ corresponds to the data at time $t^n$.
The Runge-Kutta scheme now allows us to infer the latent solution $u(t,x)$ in a sequential fashion. Starting from initial data $\{x^{n,i}, u^{n,i}\}_{i=1}^{N_n}$ at time $t^n$ and data at the domain boundaries $x = -1$ and $x = 1$ at time $t^{n+1}$, we can use the aforementioned loss functions to train the networks of \eqref{eq:RungeKutta_PU_prior_inference}, \eqref{eq:RungeKutta_PI_prior_inference}, and predict the solution at time $t^{n+1}$. A Runge-Kutta time-stepping scheme would then use this prediction as initial data for the next step and proceed to train again and predict $u(t^{n+2},x)$, $u(t^{n+3},x)$, etc., one step at a time.
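Schematically, the resulting time-marching procedure reads as follows, where \texttt{train\_step} and \texttt{predict} are our shorthand for training the stage networks on the current snapshot and evaluating the trained model; they are not functions defined elsewhere in this paper.
\begin{lstlisting}[language=Python]
import numpy as np

def march(x_grid, u0, dt, n_steps, train_step, predict):
    # sequential Runge-Kutta stepping: each step's prediction becomes
    # the initial data for the next step
    u_n, history = u0, [u0]
    for n in range(n_steps):
        model = train_step(x_grid, u_n, dt)   # fit the q-stage networks
        u_n = predict(model, x_grid)          # solution at t^{n+1}
        history.append(u_n)
    return np.stack(history)
\end{lstlisting}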
In classical numerical analysis, these steps are usually confined to be small due to stability constraints for explicit schemes or computational complexity constraints for implicit formulations \cite{iserles2009first}.
These constraints become more severe as the total number of Runge-Kutta stages $q$ is increased, and, for most problems of practical interest, one needs to take thousands to millions of such steps until the solution is resolved up to a desired final time. In sharp contrast to classical methods, here we can employ implicit Runge-Kutta schemes with an arbitrarily large number of stages at effectively no extra cost.
\footnote{To be precise, it is only the number of parameters in the last layer of the neural network that increases linearly with the total number of stages.}
This enables us to take very large time steps while retaining stability and high predictive accuracy, therefore allowing us to resolve the entire spatio-temporal solution in a single step.
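For completeness, the Butcher coefficients of a $q$-stage Gauss-Legendre scheme can be generated from the collocation conditions; the following sketch is adequate for moderate $q$, while the monomial-basis solve below becomes ill-conditioned for very large stage counts, where higher-precision arithmetic would be required.
\begin{lstlisting}[language=Python]
import numpy as np

def gauss_legendre_irk(q):
    x, w = np.polynomial.legendre.leggauss(q)  # nodes/weights on [-1,1]
    c = 0.5*(x + 1.0)                          # stage nodes on [0,1]
    b = 0.5*w                                  # quadrature weights
    K = np.arange(1, q + 1)
    V = c[:, None]**(K - 1)                    # V[j,k-1] = c_j^{k-1}
    R = c[:, None]**K / K                      # R[i,k-1] = c_i^k / k
    A = R @ np.linalg.inv(V)                   # sum_j a_ij c_j^{k-1} = c_i^k/k
    return A, b, c
\end{lstlisting}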
The result of applying this process to the Burgers' equation is presented in Figure~\ref{fig:Burgers_DT_inference}. For illustration purposes, we start with a set of $N_n=250$ initial data at $t = 0.1$, and employ a {\em physics informed neural network} induced by an implicit Runge-Kutta scheme with 500 stages to predict the solution at time $t=0.9$ in a single step. The theoretical error estimates for this scheme predict a temporal error accumulation of $\mathcal{O}(\Delta{t}^{2q})$ \cite{iserles2009first}, which in our case translates into an error way below machine precision, i.e., $\Delta{t}^{2q} = 0.8^{1000} \approx 10^{-97}$. To our knowledge, this is the first time that an implicit Runge-Kutta scheme of such high order has ever been used. Remarkably, starting from smooth initial data at $t=0.1$ we can predict the nearly discontinuous solution at $t=0.9$ in a single time-step with a relative $\mathbb{L}_{2}$ error of $6.7 \cdot 10^{-4}$. This error is two orders of magnitude lower than the one reported in \cite{raissi2017numerical}, and it is entirely attributed to the neural network's capacity to approximate $u(t,x)$, as well as to the degree that the sum of squared errors loss allows interpolation of the training data. The network architecture used here consists of 4 hidden layers with 50 neurons each.
\begin{figure}
\includegraphics[width = 1.0\textwidth]{Burgers_DT_inference.pdf}
\caption{{\em Burgers equation:} {\it Top:} Solution $u(t,x)$ along with the location of the initial training snapshot at $t=0.1$ and the final prediction snapshot at $t=0.9$. {\it Bottom:} Initial training data and final prediction at the snapshots depicted by the white vertical lines in the top panel. The relative $\mathbb{L}_{2}$ error for this case is $6.7 \cdot 10^{-4}$, with model training taking approximately 60 seconds on one NVIDIA Titan X GPU.}
\label{fig:Burgers_DT_inference}
\end{figure}
A detailed systematic study to quantify the effect of different network architectures is presented in Table~\ref{tab:Burgers_DT_inference_2}. By keeping the number of Runge-Kutta stages fixed to 500 and the time-step size to $\Delta{t}=0.8$, we have varied the number of hidden layers and the number of neurons per layer, and monitored the resulting relative $\mathbb{L}_{2}$ error for the predicted solution at time $t=0.9$. Evidently, as the neural network capacity is increased the predictive accuracy is enhanced.
\begin{table}
\centering
\begin{tabular}{|c||ccc|}
\hline
\diagbox{Layers}{Neurons} & 10 & 25 & 50 \\ \hline\hline
1 & 4.1e-02 & 4.1e-02 & 1.5e-01 \\
2 & 2.7e-03 & 5.0e-03 & 2.4e-03 \\
3 & 3.6e-03 & 1.9e-03 & 9.5e-04 \\ \hline
\end{tabular}
\caption{{\em Burgers' equation:} Relative final prediction error measured in the $\mathbb{L}_{2}$ norm for different number of hidden layers and neurons in each layer. Here, the number of Runge-Kutta stages is fixed to 500 and the time-step size to $\Delta{t}=0.8$.}\label{tab:Burgers_DT_inference_2}
\end{table}
The key parameters controlling the performance of our discrete time algorithm are the total number of Runge-Kutta stages $q$ and the time-step size $\Delta{t}$. In Table~\ref{tab:Burgers_DT_inference_1} we summarize the results of an extensive systematic study where we fix the network architecture to 4 hidden layers with 50 neurons per layer, and vary the number of Runge-Kutta stages $q$ and the time-step size $\Delta{t}$. Specifically, we see how cases with low numbers of stages fail to yield accurate results when the time-step size is large. For instance, the case $q=1$, corresponding to the implicit midpoint rule, and the case $q=2$, corresponding to the $4^{\text{th}}$-order Gauss-Legendre method, cannot retain their predictive accuracy for time-steps larger than 0.2, thus mandating a solution strategy with multiple time-steps of small size. On the other hand, the ability to push the number of Runge-Kutta stages to 32 and even higher allows us to take very large time steps, and effectively resolve the solution in a single step without sacrificing the accuracy of our predictions. Moreover, numerical stability is not sacrificed either, as implicit Runge-Kutta methods constitute the only family of time-stepping schemes that remains A-stable regardless of order, which makes them ideal for stiff problems \cite{iserles2009first}. These properties were previously unheard-of for an algorithm of such implementation simplicity, and illustrate one of the key highlights of our discrete time approach.
\begin{table}
\centering
\begin{tabular}{|l||cccc|}
\hline
\diagbox{$q$}{$\Delta{t}$} & 0.2 & 0.4 & 0.6 & 0.8 \\ \hline\hline
1 & 3.5e-02 & 1.1e-01 & 2.3e-01 & 3.8e-01 \\
2 & 5.4e-03 & 5.1e-02 & 9.3e-02 & 2.2e-01 \\
4 & 1.2e-03 & 1.5e-02 & 3.6e-02 & 5.4e-02 \\
8 & 6.7e-04 & 1.8e-03 & 8.7e-03 & 5.8e-02 \\
16 & 5.1e-04 & 7.6e-02 & 8.4e-04 & 1.1e-03 \\
32 & 7.4e-04 & 5.2e-04 & 4.2e-04 & 7.0e-04 \\
64 & 4.5e-04 & 4.8e-04 & 1.2e-03 & 7.8e-04 \\
100 & 5.1e-04 & 5.7e-04 & 1.8e-02 & 1.2e-03 \\
500 & 4.1e-04 & 3.8e-04 & 4.2e-04 & 8.2e-04 \\ \hline
\end{tabular}
\caption{{\em Burgers' equation:} Relative final prediction error measured in the $\mathbb{L}_{2}$ norm for different number of Runge-Kutta stages $q$ and time-step sizes $\Delta{t}$. Here, the network architecture is fixed to 4 hidden layers with 50 neurons in each layer.}\label{tab:Burgers_DT_inference_1}
\end{table}
\subsection{Continuous Time Models}
\subsubsection{Example (Burgers' Equation)}
As a first example, let us again consider the Burgers' equation. In one space dimension the equation reads as
\begin{eqnarray}
&& u_t + \lambda_1 u u_x - \lambda_2 u_{xx} = 0
\end{eqnarray}
Let us define $f(t,x)$ to be given by
\[
f := u_t + \lambda_1 u u_x - \lambda_2 u_{xx},
\]
and proceed by approximating $u(t,x)$ by a deep neural network. This will result in the \emph{physics informed neural network} $f(t,x)$. The shared parameters of the neural networks $u(t,x)$ and $f(t,x)$ along with the parameters $\lambda = (\lambda_1, \lambda_2)$ of the differential operator can be learned by minimizing the mean squared error loss
\begin{equation}\label{eq:MSE_Burgers_CT_identification}
MSE = MSE_u + MSE_f,
\end{equation}
where
\[
MSE_u = \frac{1}{N}\sum_{i=1}^{N} |u(t^i_u,x_u^i) - u^i|^2,
\]
and
\[
MSE_f = \frac{1}{N}\sum_{i=1}^{N}|f(t_f^i,x_f^i)|^2.
\]
Here, $\{t_u^i, x_u^i, u^i\}_{i=1}^{N}$ denote the training data on $u(t,x)$. The loss $MSE_u$ corresponds to the training data on $u(t,x)$, while $MSE_f$ enforces the structure imposed by equation \eqref{eq:Burgers} at a finite set of collocation points, whose number and locations are taken to coincide with those of the training data.
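In terms of implementation, the only change relative to the inference setting is that $\lambda_1$ and $\lambda_2$ become trainable TensorFlow variables entering the residual network; a minimal sketch (the zero initial guesses are arbitrary) reads
\begin{lstlisting}[language=Python]
lambda_1 = tf.Variable(0.0, dtype=tf.float32)
lambda_2 = tf.Variable(0.0, dtype=tf.float32)

def f(t, x):
    u_val = u(t, x)
    u_t = tf.gradients(u_val, t)[0]
    u_x = tf.gradients(u_val, x)[0]
    u_xx = tf.gradients(u_x, x)[0]
    return u_t + lambda_1*u_val*u_x - lambda_2*u_xx
\end{lstlisting}
so that minimizing \eqref{eq:MSE_Burgers_CT_identification} updates $\lambda_1,\lambda_2$ jointly with the network weights.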
To illustrate the effectiveness of our approach we have created a training data-set by randomly generating $N = 2000$ points across the entire spatio-temporal domain from the exact solution corresponding to $\lambda_1 = 1.0$ and $\lambda_2 = 0.01/\pi$. The locations of the training points are illustrated in the top panel of Figure~\ref{fig:Burgers_CT_identification}.
This data-set is then used to train a 9-layer deep neural network with 20 neurons per hidden layer by minimizing the mean squared error loss of \eqref{eq:MSE_Burgers_CT_identification} using the L-BFGS optimizer \cite{liu1989limited}. Upon training, the network's parameters are calibrated to predict the entire solution $u(t,x)$, as well as the unknown parameters $\lambda = (\lambda_1, \lambda_2)$ that define the underlying dynamics. A visual assessment of the predictive accuracy of the {\em physics informed neural network} is given in the middle and bottom panels of Figure~\ref{fig:Burgers_CT_identification}. The network is able to identify the underlying partial differential equation with remarkable accuracy, even in the case where the scattered training data are corrupted with 1\% uncorrelated noise.
\begin{figure}
\includegraphics[width = 1.0\textwidth]{Burgers_CT_identification.pdf}
\caption{{\em Burgers equation:} {\it Top:} Predicted solution $u(t,x)$ along with the training data. {\it Middle:} Comparison of the predicted and exact solutions corresponding to the three temporal snapshots depicted by the dashed vertical lines in the top panel. {\it Bottom:} Correct partial differential equation along with the identified one obtained by learning $\lambda_1, \lambda_2$.}
\label{fig:Burgers_CT_identification}
\end{figure}
To further scrutinize the performance of our algorithm we have performed a systematic study with respect to the total number of training data, the noise corruption levels, and the neural network architecture. The results are summarized in Tables~\ref{tab:Burgers_CT_identification_1} and~\ref{tab:Burgers_CT_identification_2}. The key observation here is that the proposed methodology appears to be very robust with respect to noise levels in the data, and yields a reasonable identification accuracy even for noise corruption up to 10\%. In this respect it seems to greatly outperform competing approaches using Gaussian process regression, as previously reported in \cite{raissi2017hidden}, as well as approaches relying on sparse regression that require relatively clean data for accurately computing numerical gradients \cite{brunton2016discovering}.
\begin{table}
\centering
\begin{tabular}{|l||cccc||cccc|} \hline
& \multicolumn{4}{c||}{\% error in $\lambda_1$} & \multicolumn{4}{c|} {\% error in $\lambda_2$} \\ \hline
\diagbox{$N_u$}{noise} & 0\% & 1\% & 5\% & 10\% & 0\% & 1\% & 5\% & 10\% \\ \hline\hline
500 & 0.131 & 0.518 & 0.118 & 1.319 & 13.885 & 0.483 & 1.708 & 4.058 \\
1000 & 0.186 & 0.533 & 0.157 & 1.869 & 3.719 & 8.262 & 3.481 & 14.544 \\
1500 & 0.432 & 0.033 & 0.706 & 0.725 & 3.093 & 1.423 & 0.502 & 3.156 \\
2000 & 0.096 & 0.039 & 0.190 & 0.101 & 0.469 & 0.008 & 6.216 & 6.391 \\ \hline
\end{tabular}
\caption{{\em Burgers' equation:} Percentage error in the identified parameters $\lambda_1$ and $\lambda_2$ for different number of training data $N_u$ corrupted by different noise levels. Here, the neural network architecture is kept fixed to 9 layers and 20 neurons per layer.}\label{tab:Burgers_CT_identification_1}
\end{table}
\begin{table}
\centering
\begin{tabular}{|c||ccc||ccc|} \hline
& \multicolumn{3}{c||}{\% error in $\lambda_1$} & \multicolumn{3}{c|} {\% error in $\lambda_2$} \\ \hline
\diagbox{Layers}{Neurons} & 10 & 20 & 40 & 10 & 20 & 40 \\ \hline\hline
2 & $11.696$ & $2.837$ & $1.679$ & $103.919$ & $67.055$ & $49.186$ \\
4 & $0.332$ & $0.109$ & $0.428$ & $4.721$ & $1.234$ & $6.170$ \\
6 & $0.668$ & $0.629$ & $0.118$ & $3.144$ & $3.123$ & $1.158$ \\
8 & $0.414$ & $0.141$ & $0.266$ & $8.459$ & $1.902$ & $1.552$ \\ \hline
\end{tabular}
\caption{{\em Burgers' equation:} Percentage error in the identified parameters $\lambda_1$ and $\lambda_2$ for different number of hidden layers and neurons per layer. Here, the training data is considered to be noise-free and fixed to $N = 2000$.}\label{tab:Burgers_CT_identification_2}
\end{table}
\subsection{Discrete Time Models}
\subsubsection{Example (Burgers' Equation)}
Let us again illustrate the key features of this method through the lens of the Burgers' equation. Recall the equation's form
\begin{equation}\label{eq:Burgers_DT_identification}
u_t + \lambda_1 u u_x - \lambda_2 u_{xx} = 0,
\end{equation}
and notice that the nonlinear operator in equation \eqref{eq:RungeKutta_identification_rearranged} is given by
\[
\mathcal{N}[u^{n+c_j}] = \lambda_1 u^{n+c_j} u^{n+c_j}_x - \lambda_2 u^{n+c_j}_{xx}.
\]
Given merely two training data snapshots, the shared parameters of the neural networks along with the parameters $\lambda = (\lambda_1, \lambda_2)$ can be learned by minimizing the sum of squared errors \eqref{eq:SSE_identification}. Here, we have created a training data-set consisting of $N_n=199$ and $N_{n+1}=201$ spatial points by randomly sampling the exact solution at time instants $t^n=0.1$ and $t^{n+1}=0.9$, respectively. The training data, along with the predictions of the trained network, are shown in the top and middle panels of Figure~\ref{fig:Burgers_DT_identification}. The neural network architecture used here includes 4 hidden layers with 50 neurons each, while the number of Runge-Kutta stages is empirically chosen to yield a temporal error accumulation of the order of machine precision $\epsilon$ by setting
\footnote{This is motivated by the theoretical error estimates for implicit Runge-Kutta schemes suggesting a truncation error of $\mathcal{O}(\Delta{t}^{2q})$ \cite{iserles2009first}.}
\begin{equation}\label{eq:Runge-Kutta_stages}
q = 0.5\log{\epsilon}/\log(\Delta{t}),
\end{equation}
where the time-step for this example is $\Delta{t}=0.8$; for instance, with double-precision $\epsilon \approx 2.2\cdot 10^{-16}$, equation \eqref{eq:Runge-Kutta_stages} gives $q \approx 81$. The bottom panel of Figure~\ref{fig:Burgers_DT_identification} summarizes the identified parameters $\lambda = (\lambda_1, \lambda_2)$ for the cases of noise-free data, as well as noisy data with 1\% of uncorrelated noise corruption. For both cases, the proposed algorithm is able to learn the correct parameter values $\lambda_1=1.0$ and $\lambda_2=0.01/\pi$ with remarkable accuracy, despite the fact that the two data snapshots used for training are very far apart, and potentially describe different regimes of the underlying dynamics.
\begin{figure}
\includegraphics[width = 1.0\textwidth]{Burgers_DT_identification.pdf}
\caption{{\em Burgers equation:} {\it Top:} Predicted solution $u(t,x)$ along with the temporal locations of the two training snapshots. {\it Middle:} Training data and exact solution corresponding to the two temporal snapshots depicted by the dashed vertical lines in the top panel. {\it Bottom:} Correct partial differential equation along with the identified one obtained by learning $\lambda_1, \lambda_2$.}
\label{fig:Burgers_DT_identification}
\end{figure}
A further sensitivity analysis is performed to quantify the accuracy of our predictions with respect to the gap between the training snapshots $\Delta{t}$, the noise levels in the training data, and the {\em physics informed neural network} architecture. As shown in Table~\ref{tab:Burgers_DT_identification_1}, the proposed algorithm is quite robust to both $\Delta{t}$ and the noise corruption levels, and it consistently returns reasonable estimates for the unknown parameters. This robustness is mainly attributed to the flexibility of the underlying implicit Runge-Kutta scheme to admit an arbitrarily high number of stages, allowing the data snapshots to be very far apart in time, while not compromising the accuracy with which the nonlinear dynamics of Eq.~\eqref{eq:Burgers_DT_identification} are resolved. This is the key highlight of our discrete time formulation for identification problems, setting it apart from competing approaches \cite{raissi2017hidden,brunton2016discovering}. Lastly, Table~\ref{tab:Burgers_DT_identification_2} presents the percentage error in the identified parameters, demonstrating the robustness of our estimates with respect to the underlying neural network architecture.
\begin{table}
\centering
\begin{tabular}{|l||cccc||cccc|} \hline
& \multicolumn{4}{c||}{\% error in $\lambda_1$} & \multicolumn{4}{c|} {\% error in $\lambda_2$} \\ \hline
\diagbox{$\Delta{t}$}{noise} & 0\% & 1\% & 5\% & 10\% & 0\% & 1\% & 5\% & 10\% \\ \hline\hline
0.2 & $0.002$ & $0.435$ & $6.073$ & $3.273$ & $0.151$ & $4.982$ & $59.314$ & $83.969$ \\
0.4 & $0.001$ & $0.119$ & $1.679$ & $2.985$ & $0.088$ & $2.816$ & $8.396$ & $8.377$ \\
0.6 & $0.002$ & $0.064$ & $2.096$ & $1.383$ & $0.090$ & $0.068$ & $3.493$ & $24.321$ \\
0.8 & $0.010$ & $0.221$ & $0.097$ & $1.233$ & $1.918$ & $3.215$ & $13.479$ & $1.621$ \\ \hline
\end{tabular}
\caption{{\em Burgers' equation:} Percentage error in the identified parameters $\lambda_1$ and $\lambda_2$ for different gap size $\Delta{t}$ between two different snapshots and for different noise levels.}\label{tab:Burgers_DT_identification_1}
\end{table}
\begin{table}
\centering
\begin{tabular}{|c||ccc||ccc|} \hline
& \multicolumn{3}{c||}{\% error in $\lambda_1$} & \multicolumn{3}{c|} {\% error in $\lambda_2$} \\ \hline
\diagbox{Layers}{Neurons} & 10 & 25 & 50 & 10 & 25 & 50 \\ \hline\hline
1 & $1.868$ & $4.868$ & $1.960$ & $180.373$ & $237.463$ & $123.539$ \\
2 & $0.443$ & $0.037$ & $0.015$ & $29.474$ & $2.676$ & $1.561$ \\
3 & $0.123$ & $0.012$ & $0.004$ & $7.991$ & $1.906$ & $0.586$ \\
4 & $0.012$ & $0.020$ & $0.011$ & $1.125$ & $4.448$ & $2.014$ \\ \hline
\end{tabular}
\caption{{\em Burgers' equation:} Percentage error in the identified parameters $\lambda_1$ and $\lambda_2$ for different number of hidden layers and neurons in each layer.}\label{tab:Burgers_DT_identification_2}
\end{table} | {"config": "arxiv", "file": "1711.10561/appendix.tex"} |
TITLE: Enumerating all partitions induced by Voronoi diagrams for clustering
QUESTION [3 upvotes]: A classical result by M. Inaba et al. in "Applications of Weighted Voronoi Diagrams and Randomization to Variance-Based k-Clustering" (Theorem 3) says
The number of Voronoi partitions of $n$ points by the Euclidean Voronoi diagram generated by $k$ points in $d$-dimensional space is $\mathcal{O}(n^{dk})$, and all the Voronoi partitions can be enumerated in $\mathcal{O}(n^{dk+1})$.
They basically divide the $d$-dimensional space into equivalence classes where two sets of centers $\mu^1$ and $\mu^2$ are equivalent if they lead to the same Voronoi diagram. Then they show that the arrangement of the $nk(k-1)/2$ surfaces
$$ \|x_i-\mu_j\|^2- \|x_i-\mu_{j'}\|^2 = 0 $$
for each point $x_i$ and two cluster center $\mu_j$ and $\mu_{j'}$ coincides with the equivalence relation from Voronoi partitions.
Next they argue that the combinatorial complexity of arrangements of $nk(k-1)/2$ constant-degree algebraic surfaces is bounded and that this implies an algorithm with running time $\mathcal{O}(n^{dk+1})$. Unfortunately, I cannot find the cited source (Evaluation of combinatorial complexity for hypothesis spaces in learning theory with application, Master's Thesis, Department of Information Science, University of Tokyo, 1994) anywhere. More precisely, I cannot see the following two things.
Where can I find a bound for the combinatorial complexity of the arrangement of $nk(k-1)/2$ constant degree algebraic surfaces and
How does this help me to compute the arrangement?
For 2. I found the Bentley–Ottmann algorithm; however, it only works for line segments and not for degree-2 polynomials. How can this algorithm be generalized?
Thanks so much!
REPLY [0 votes]: For results on the combinatorial complexity of arrangements (and other related results) a good reference is Chapter 28 of The Handbook of Discrete and Computational Geometry.
In particular Theorem 28.1.4 specifies that the combinatorial complexity of an arrangement of $n$ constant-degree algebraic surfaces of dimension $d$ is $O(n^d)$.
Within the context of the paper $k$ is considered fixed, and therefore
$nk(k-1)/2 = O(n)$.
Further, in the paper the dimension of the vector space in the proof of Theorem 3 is $dk$.
Thus, the complexity of the surface arrangement in the $dk$-dimensional vector space is $O(n^{dk})$.
Note that if $k$ were not considered fixed (but still $O(n)$), then the complexity would have been
$O((nk(k-1)/2)^{dk}) = O((n^3)^{dk}) = O(n^{3dk})$.
This answers your first question.
As for 2, the Bentley–Ottmann algorithm can be generalized to any
set of $x$-monotone curves (if the curves are not $x$-monotone, a splitting pre-process is required).
The main difference from the case of line segments is to allow for
more than a single intersection between curves and also for the possibility
of tangent intersections.
This means that when the sweep-line algorithm passes an intersection point, it needs to check whether the order of the curves in the sweep-line structure should be swapped (a non-tangential intersection) or maintained (a tangential intersection).
It also needs to check for the next intersection point between the adjacent curves, if it exists (since there can be more than a single intersection point).
CGAL, the Computational Geometry Algorithms Library, implements such a generic sweep-line algorithm in its arrangements package.
It uses its "Traits" mechanism to implement the above requirements with implementations on circle arcs, conic sections, polynomials of any degree, and more. See the CGAL package documentation for further reference. | {"set_name": "stack_exchange", "score": 3, "question_id": 321194} |
TITLE: how to find bases for subspace span of $\mathbb{R}^3$
QUESTION [0 upvotes]: I have searched the internet, but I found contradictory answers.
Must the number of vectors in a basis be exactly 3 if the space is $\mathbb{R}^3$? What if one vector in the spanning set is redundant? Will a basis then have only 2 vectors?
For example, for the subspace span{(1,2,3), (4,5,6), (7,8,9)} of $\mathbb{R}^3$, am I right to say a basis can be {(1,2,3), (4,5,6)} or perhaps {(1,2,3), (7,8,9)}?
But wouldn't that mean a basis has 2 vectors? Must a basis of $\mathbb{R}^3$ have 3 vectors?
REPLY [0 votes]: If we take the set of vectors you wrote $\mathcal A= \Bigg\{\begin{pmatrix}1\\2\\3 \end{pmatrix},\begin{pmatrix}4\\5\\6 \end{pmatrix},\begin{pmatrix} 7\\8\\9\end{pmatrix}\Bigg\}$ we can observe that
$$M=\begin{pmatrix}1&4&7\\2&5&8\\3&6&9 \end{pmatrix}\sim\begin{pmatrix}1&4&7\\0&-3&-6\\0&0&0 \end{pmatrix}\implies\text{ rk}(M)=2$$
and this shows that the three vectors are not linearly independent. They span a vector space of dimension $2$.
For example $\mathbb R^3\ni\begin{pmatrix} 0\\0\\1\end{pmatrix}\notin \langle \begin{pmatrix}1\\2\\3 \end{pmatrix},\begin{pmatrix}4\\5\\6 \end{pmatrix}\rangle$, so $\mathcal A$ is not a basis for $\mathbb R^3$.
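As a quick numerical cross-check of the rank computation above (a sketch, not needed for the argument):
import numpy as np
M = np.array([[1, 4, 7], [2, 5, 8], [3, 6, 9]])
print(np.linalg.matrix_rank(M))  # prints 2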
An example of a basis for $\mathbb R^3$ is
$$\mathcal C=\{e_1,e_2,e_3\}=\Bigg\{\begin{pmatrix}1\\0\\0 \end{pmatrix},\begin{pmatrix}0\\1\\0 \end{pmatrix},\begin{pmatrix}0\\0\\1 \end{pmatrix} \Bigg\}$$
By definition, a set of vectors $\{v_1,\dots,v_n\}$ is a basis for a vector space $V[\mathbb K]$ if:
$(i)$ $\{v_1,\dots,v_n\}$ is a system of generators of $V$;
$(ii)$ $v_1,\dots,v_n$ are linearly independent vectors. | {"set_name": "stack_exchange", "score": 0, "question_id": 4003329}
\usepackage{latexsym,amsfonts,amsmath,amssymb}
\newtheorem{theorem}{Theorem}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{sublemma}{Lemma}[theorem]
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{question}[theorem]{Question}
\newtheorem{observation}[theorem]{Observation}
\newtheorem{claim}[theorem]{Claim}
\newtheorem{subclaim}{Claim}[sublemma]
\newtheorem{conjecture}[theorem]{Conjecture}
\newtheorem{fact}[theorem]{Fact}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{remark}[theorem]{Remark}
\newtheorem{example}[theorem]{Example}
\newtheorem{exercise}{Exercise}[section]
\def\Theorem #1.#2 #3\par{\setbox1=\hbox{#1}\ifdim\wd1=0pt
\begin{theorem}{\rm #2} #3\end{theorem}\else
\newtheorem{#1}[theorem]{#1}\begin{#1}\label{#1}{\rm #2} #3\end{#1}\fi}
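% Illustrative usage of the \Theorem macro above (our example, kept as a
% comment so this preamble file produces no output); the statement is
% delimited by \par, i.e. by a blank line:
%   \Theorem MainTheorem.{(Cantor)} No set surjects onto its power set.
%
% creates a theorem environment named `MainTheorem', automatically labelled
% so that \ref{MainTheorem} recalls its number, while an empty name before
% the period, as in
%   \Theorem .{} No set surjects onto its power set.
%
% yields an ordinary, consecutively numbered theorem. The companion macros
% below (\Corollary, \Lemma, ...) follow the same calling convention.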
\def\Corollary #1.#2 #3\par{\setbox1=\hbox{#1}\ifdim\wd1=0pt
\begin{corollary}{\rm #2} #3\end{corollary}\else
\newtheorem{#1}[theorem]{#1}\begin{#1}\label{#1}{\rm #2} #3\end{#1}\fi}
\def\Lemma #1.#2 #3\par{\setbox1=\hbox{#1}\ifdim\wd1=0pt
\begin{lemma}{\rm #2} #3\end{lemma}\else
\newtheorem{#1}[theorem]{#1}\begin{#1}\label{#1}{\rm #2} #3\end{#1}\fi}
\def\SubLemma #1.#2 #3\par{\setbox1=\hbox{#1}\ifdim\wd1=0pt
\begin{sublemma}{\rm #2} #3\end{sublemma}\else
\newtheorem{#1}{#1}[theorem]\begin{#1}\label{#1}{\rm #2} #3\end{#1}\fi}
\def\Question #1.#2 #3\par{\setbox1=\hbox{#1}\ifdim\wd1=0pt
\begin{question}{\rm #2} #3\end{question}\else
\newtheorem{#1}[theorem]{#1}\begin{#1}\label{#1}{\rm #2} #3\end{#1}\fi}
\def\Observation #1.#2 #3\par{\setbox1=\hbox{#1}\ifdim\wd1=0pt
\begin{observation}{\rm #2} #3\end{observation}\else
\newtheorem{#1}[theorem]{#1}\begin{#1}\label{#1}{\rm #2} #3\end{#1}\fi}
\def\Claim #1.#2 #3\par{\setbox1=\hbox{#1}\ifdim\wd1=0pt
\begin{claim}{\rm #2} #3\end{claim}\else
\newtheorem{#1}[theorem]{#1}\begin{#1}\label{#1}{\rm #2} #3\end{#1}\fi}
\def\SubClaim #1.#2 #3\par{\setbox1=\hbox{#1}\ifdim\wd1=0pt
\begin{subclaim}{\rm #2} #3\end{subclaim}\else
\newtheorem{#1}{#1}[sublemma]\begin{#1}\label{#1}{\rm #2} #3\end{#1}\fi}
\def\Conjecture #1.#2 #3\par{\setbox1=\hbox{#1}\ifdim\wd1=0pt
\begin{conjecture}{\rm #2} #3\end{conjecture}\else
\newtheorem{#1}[theorem]{#1}\begin{#1}\label{#1}{\rm #2} #3\end{#1}\fi}
\def\Fact #1.#2 #3\par{\setbox1=\hbox{#1}\ifdim\wd1=0pt
\begin{fact}{\rm #2} #3\end{fact}\else
\newtheorem{#1}[theorem]{#1}\begin{#1}\label{#1}{\rm #2} #3\end{#1}\fi}
\def\Definition #1.#2 #3\par{\setbox1=\hbox{#1}\ifdim\wd1=0pt
\begin{definition}{\rm #2} {\rm #3}\end{definition}\else
\newtheorem{#1}[theorem]{#1}\begin{#1}\label{#1}{\rm #2} {\rm #3}\end{#1}\fi}
\def\Remark #1.#2 #3\par{\setbox1=\hbox{#1}\ifdim\wd1=0pt
\begin{remark}{\rm #2} {\rm #3}\end{remark}\else
\newtheorem{#1}[theorem]{#1}\begin{#1}\label{#1}{\rm #2} {\rm #3}\end{#1}\fi}
\def\Example #1.#2 #3\par{\setbox1=\hbox{#1}\ifdim\wd1=0pt
\begin{example}{\rm #2} #3\end{example}\else
\newtheorem{#1}[theorem]{#1}\begin{#1}\label{#1}{\rm #2} #3\end{#1}\fi}
\def\Exercise #1.#2 #3\par{\setbox1=\hbox{#1}\ifdim\wd1=0pt
{\footnotesize\begin{exercise}{\rm #2} {\rm #3}\end{exercise}}\else
\newtheorem{#1}[section]{#1}{\footnotesize\begin{#1}\label{#1}{\rm #2} {\rm #3}\end{#1}}\fi}
\def\QuietTheorem #1.#2 #3\par{\setbox1=\hbox{#1}\ifdim\wd1=0pt\proclaim{Theorem {\rm #2}}{#3}\else\proclaim{#1 {\rm #2}}{#3}\fi}
\newcommand{\proclaim}[2]{\smallskip\noindent{\bf #1} {\sl#2}\par\smallskip}
\def\Proclaim #1.#2 #3\par{\proclaim{#1 {\rm #2}}{#3}}
\newenvironment{proof}{\noindent}{\kern2pt\QEDbox\par\bigskip}
\def\Proof#1: {\setbox1=\hbox{#1}\ifdim\wd1=0pt\begin{proof}{\bf Proof: }\else\medskip\begin{proof}{\bf #1: }\fi}
\newcommand{\QED}{\end{proof}}
\def\BF#1.{{\bf #1.}}
\def\Abstract #1\par{\begin{quotation}{\singlespaced\footnotesize{\noindent{\bf Abstract.~}#1}}\end{quotation}}
\def\Title #1\par{\title{#1}\maketitle}
\def\Author #1\par{\author{#1}}
\def\Acknowledgement#1\par{\thanks{#1}}
\def\Chapter #1\par{\chapter{#1}}
\def\Section #1\par{\section{#1}}
\def\QuietSection #1\par{\section*{#1}}
\def\SubSection #1\par{\subsection{#1}}
\def\SubSubSection #1\par{\subsubsection{#1}}
\def\MidTitle #1\par{\bigskip\goodbreak\centerline{\small\bf #1}\bigskip\noindent}
\def\Margin #1\par{\marginpar{\tiny #1}}
\newcommand{\doublespaced}{\baselineskip=28pt}
\newcommand{\almostdoublespaced}{\baselineskip=23pt}
\newcommand{\singlespaced}{\baselineskip=15pt}
\def\bottomnote #1\par{{\renewcommand{\thefootnote}{}\footnotetext{#1}}}
\newcommand{\A}{{\mathbb A}}
\newcommand{\B}{{\mathbb B}}
\newcommand{\C}{{\mathbb C}}
\newcommand{\D}{{\mathbb D}}
\newcommand{\E}{{\mathbb E}}
\newcommand{\F}{{\mathbb F}}
\newcommand{\G}{{\mathbb G}}
\newcommand{\J}{{\mathbb J}}
\newcommand{\X}{{\mathbb X}}
\newcommand{\N}{{\mathbb N}}
\renewcommand{\P}{{\mathbb P}}
\newcommand{\Q}{{\mathbb Q}}
\def\R{{\mathbb R}}
\newcommand{\T}{{\mathbb T}}
\newcommand{\term}{{\!\scriptscriptstyle\rm term}}
\newcommand{\Dterm}{{D_{\!\scriptscriptstyle\rm term}}}
\newcommand{\Ftail}{{\F_{\!\scriptscriptstyle\rm tail}}}
\newcommand{\Fotail}{{\F^0_{\!\scriptscriptstyle\rm tail}}}
\newcommand{\ftail}{{f_{\!\scriptscriptstyle\rm tail}}}
\newcommand{\fotail}{{f^0_{\!\scriptscriptstyle\rm tail}}}
\newcommand{\Gtail}{{G_{\!\scriptscriptstyle\rm tail}}}
\newcommand{\Gotail}{{G^0_{\!\scriptscriptstyle\rm tail}}}
\newcommand{\Hterm}{{H_{\!\scriptscriptstyle\rm term}}}
\newcommand{\Ptail}{{\P_{\!\scriptscriptstyle\rm tail}}}
\newcommand{\Potail}{{\P^0_{\!\scriptscriptstyle\rm tail}}}
\newcommand{\Pterm}{{\P_{\!\scriptscriptstyle\rm term}}}
\newcommand{\Qterm}{{\Q_{\!\scriptscriptstyle\rm term}}}
\newcommand{\Gterm}{{G_{\!\scriptscriptstyle\rm term}}}
\newcommand{\Rterm}{{\R_{\!\scriptscriptstyle\rm term}}}
\newcommand{\hterm}{{h_{\!\scriptscriptstyle\rm term}}}
\newcommand{\Cbar}{{\overline{C}}}
\newcommand{\Dbar}{{\overline{D}}}
\newcommand{\Fbar}{{\overline{F}}}
\newcommand{\Mbar}{{\overline{M}}}
\newcommand{\Nbar}{{\overline{N}}}
\newcommand{\Vbar}{{\overline{V}}}
\newcommand{\Xbar}{{\overline{X}}}
\newcommand{\jbar}{{\bar j}}
\newcommand{\Ptilde}{{\tilde\P}}
\newcommand{\Gtilde}{{\tilde G}}
\newcommand{\Qtilde}{{\tilde\Q}}
\newcommand{\Adot}{{\dot A}}
\newcommand{\Bdot}{{\dot B}}
\renewcommand{\Ddot}{{\dot D}}
\newcommand{\Pdot}{{\dot\P}}
\newcommand{\Qdot}{{\dot\Q}}
\newcommand{\qdot}{{\dot q}}
\newcommand{\Rdot}{{\dot\R}}
\newcommand{\Xdot}{{\dot X}}
\newcommand{\Ydot}{{\dot Y}}
\newcommand{\mudot}{{\dot\mu}}
\newcommand{\hdot}{{\dot h}}
\newcommand{\rdot}{{\dot r}}
\newcommand{\sdot}{{\dot s}}
\newcommand{\xdot}{{\dot x}}
\newcommand{\I}[1]{\mathop{\hbox{\sc i}_#1}}
\newcommand{\id}{\mathop{\hbox{\small id}}}
\newcommand{\one}{\mathop{1\hskip-3pt {\rm l}}}
\newfont{\msam}{msam10 at 12pt}
\newcommand{\from}{\mathbin{\vbox{\baselineskip=3pt\lineskiplimit=0pt
\hbox{.}\hbox{.}\hbox{.}}}}
\newcommand{\of}{\subseteq}
\newcommand{\ofnoteq}{\subsetneq}
\newcommand{\fo}{\supseteq}
\newcommand{\Set}[1]{\left\{\,{#1}\,\right\}}
\newcommand{\set}[1]{\{\,{#1}\,\}}
\newcommand{\singleton}[1]{\left\{{#1}\right\}}
\newcommand{\compose}{\circ}
\newcommand{\curlyelesub}{\Undertilde\prec}
\newcommand{\elesub}{\prec}
\newcommand{\eleequiv}{\equiv}
\newcommand{\muchgt}{\gg}
\newcommand{\muchlt}{\ll}
\newcommand{\inverse}{{-1}}
\newcommand{\jump}{{\!\triangledown}}
\newcommand{\Jump}{{\!\blacktriangledown}}
\def\ilt{<_{\infty}}
\def\ileq{\leq_{\infty}}
\def\iequiv{\equiv_{\infty}}
\newcommand{\Tequiv}{\equiv_T}
\newcommand{\dom}{\mathop{\rm dom}}
\newcommand{\dirlim}{\mathop{\rm dirlim}}
\newcommand{\ran}{\mathop{\rm ran}}
\newcommand{\add}{\mathop{\rm add}}
\newcommand{\coll}{\mathop{\rm coll}}
\newcommand{\cof}{\mathop{\rm cof}}
\newcommand{\Cof}{\mathop{\rm Cof}}
\newcommand{\Add}{\mathop{\rm Add}}
\newcommand{\Aut}{\mathop{\rm Aut}}
\newcommand{\Inn}{\mathop{\rm Inn}}
\newcommand{\Coll}{\mathop{\rm Coll}}
\newcommand{\Ult}{\mathop{\rm Ult}}
\newcommand{\Th}{\mathop{\rm Th}}
\newcommand{\con}{\mathop{\hbox{\sc con}}}
\newcommand{\image}{\mathbin{\hbox{\tt\char'42}}}
\newcommand{\plus}{{+}}
\newcommand{\plusplus}{{{+}{+}}}
\newcommand{\plusplusplus}{{{+}{+}{+}}}
\newcommand{\restrict}{\upharpoonright}
\newcommand{\satisfies}{\models}
\newcommand{\forces}{\Vdash}
\newcommand{\proves}{\vdash}
\newcommand{\possible}{\mathop{\raisebox{-1pt}{$\Diamond$}}}
\newcommand{\necessary}{\mathop{\raisebox{-1pt}{$\Box$}}}
\newcommand{\cross}{\times}
\newcommand{\concat}{\mathbin{{}^\smallfrown}}
\newcommand{\converges}{\downarrow}
\newcommand{\diverges}{\uparrow}
\newcommand{\union}{\cup}
\newcommand{\Union}{\bigcup}
\newcommand{\intersect}{\cap}
\newcommand{\Intersect}{\bigcap}
\newcommand{\Pforces}{\forces_{\P}}
\newcommand{\into}{\hookrightarrow}
\newcommand{\trianglelt}{\lhd}
\newcommand{\nottrianglelt}{\ntriangleleft}
\newcommand{\tlt}{\triangle}
\newcommand{\LaverDiamond}{\mathop{\hbox{\line(0,1){10}\line(1,0){8}\line(-4,5){8}}\hskip 1pt}\nolimits}
\newcommand{\LD}{\LaverDiamond}
\newcommand{\LDLD}{\mathop{\hbox{\,\line(0,1){8}\!\line(0,1){10}\line(1,0){8}\line(-4,5){8}}\hskip 1pt}\nolimits}
\newcommand{\LDminus}{\mathop{\LaverDiamond^{\hbox{\!\!-}}}}
\newcommand{\LDwc}{\LD^{\hbox{\!\!\tiny wc}}}
\newcommand{\LDunf}{\LD^{\hbox{\!\!\tiny unf}}}
\newcommand{\LDind}{\LD^{\hbox{\!\!\tiny ind}}}
\newcommand{\LDthetaunf}{\LD^{\hbox{\!\!\tiny $\theta$-unf}}}
\newcommand{\LDsunf}{\LD^{\hbox{\!\!\tiny sunf}}}
\newcommand{\LDthetasunf}{\LD^{\hbox{\!\!\tiny $\theta$-sunf}}}
\newcommand{\LDmeas}{\LD^{\hbox{\!\!\tiny meas}}}
\newcommand{\LDstr}{\LD^{\hbox{\!\!\tiny strong}}}
\newcommand{\LDsuperstrong}{\LD^{\hbox{\!\!\tiny superstrong}}}
\newcommand{\LDram}{\LD^{\hbox{\!\!\tiny Ramsey}}}
\newcommand{\LDstrc}{\LD^{\hbox{\!\!\tiny str compact}}}
\newcommand{\LDsc}{\LD^{\hbox{\!\!\tiny sc}}}
\newcommand{\LDext}{\LD^{\hbox{\!\!\tiny ext}}}
\newcommand{\LDahuge}{\LD^{\hbox{\!\!\tiny ahuge}}}
\newcommand{\LDlambdaahuge}{\LD^{\hbox{\!\!\tiny $\lambda$-ahuge}}}
\newcommand{\LDsahuge}{\LD^{\hbox{\!\!\tiny super ahuge}}}
\newcommand{\LDhuge}{\LD^{\hbox{\!\!\tiny huge}}}
\newcommand{\LDlambdahuge}{\LD^{\hbox{\!\!\tiny $\lambda$-huge}}}
\newcommand{\LDshuge}{\LD^{\hbox{\!\!\tiny superhuge}}}
\newcommand{\LDnhuge}{\LD^{\hbox{\!\!\tiny $n$-huge}}}
\newcommand{\LDlambdanhuge}{\LD^{\hbox{\!\!\tiny $\lambda$ $n$-huge}}}
\newcommand{\LDsnhuge}{\LD^{\hbox{\!\!\tiny super $n$-huge}}}
\newcommand{\LDthetasc}{\LD^{\hbox{\!\!\tiny $\theta$-sc}}}
\newcommand{\LDkappasc}{\LD^{\hbox{\!\!\tiny $\kappa$-sc}}}
\newcommand{\LDthetastr}{\LD^{\hbox{\!\!\tiny $\theta$-strong}}}
\newcommand{\LDthetastrc}{\LD^{\hbox{\!\!\tiny $\theta$-str compact}}}
\newcommand{\LDstar}{\LD^{\hbox{\!\!\tiny$\star$}}}
\newcommand{\smalllt}{\mathrel{\mathchoice{\raise2pt\hbox{$\scriptstyle<$}}{\raise1pt\hbox{$\scriptstyle<$}}{\scriptscriptstyle<}{\scriptscriptstyle<}}}
\newcommand{\smallleq}{\mathrel{\mathchoice{\raise2pt\hbox{$\scriptstyle\leq$}}{\raise1pt\hbox{$\scriptstyle\leq$}}{\scriptscriptstyle\leq}{\scriptscriptstyle\leq}}}
\newcommand{\ltomega}{{{\smalllt}\omega}}
\newcommand{\leqomega}{{{\smallleq}\omega}}
\newcommand{\ltkappa}{{{\smalllt}\kappa}}
\newcommand{\leqkappa}{{{\smallleq}\kappa}}
\newcommand{\ltalpha}{{{\smalllt}\alpha}}
\newcommand{\leqalpha}{{{\smallleq}\alpha}}
\newcommand{\leqgamma}{{{\smallleq}\gamma}}
\newcommand{\leqlambda}{{{\smallleq}\lambda}}
\newcommand{\ltlambda}{{{\smalllt}\lambda}}
\newcommand{\ltgamma}{{{\smalllt}\gamma}}
\newcommand{\leqeta}{{{\smallleq}\eta}}
\newcommand{\lteta}{{{\smalllt}\eta}}
\newcommand{\leqxi}{{{\smallleq}\xi}}
\newcommand{\ltxi}{{{\smalllt}\xi}}
\newcommand{\leqzeta}{{{\smallleq}\zeta}}
\newcommand{\ltzeta}{{{\smalllt}\zeta}}
\newcommand{\leqtheta}{{{\smallleq}\theta}}
\newcommand{\lttheta}{{{\smalllt}\theta}}
\newcommand{\leqbeta}{{{\smallleq}\beta}}
\newcommand{\leqdelta}{{{\smallleq}\delta}}
\newcommand{\ltdelta}{{{\smalllt}\delta}}
\newcommand{\ltbeta}{{{\smalllt}\beta}}
\newcommand{\leqSigma}{{{\smallleq}\Sigma}}
\newcommand{\ltSigma}{{{\smalllt}\Sigma}}
\newcommand{\Card}[1]{{\left|#1\right|}}
\newcommand{\card}[1]{{|#1|}}
\newcommand{\boolval}[1]{\mathopen{\lbrack\!\lbrack}\,#1\,\mathclose{\rbrack\!\rbrack}}
\def\[#1]{\boolval{#1}}
\newcommand{\gcode}[1]{{}^\ulcorner#1{}^\urcorner}
\newcommand{\UnderTilde}[1]{{\setbox1=\hbox{$#1$}\baselineskip=0pt\vtop{\hbox{$#1$}\hbox to\wd1{\hfil$\sim$\hfil}}}{}}
\newcommand{\Undertilde}[1]{{\setbox1=\hbox{$#1$}\baselineskip=0pt\vtop{\hbox{$#1$}\hbox to\wd1{\hfil$\scriptstyle\sim$\hfil}}}{}}
\newcommand{\undertilde}[1]{{\setbox1=\hbox{$#1$}\baselineskip=0pt\vtop{\hbox{$#1$}\hbox to\wd1{\hfil$\scriptscriptstyle\sim$\hfil}}}{}}
\newcommand{\UnderdTilde}[1]{{\setbox1=\hbox{$#1$}\baselineskip=0pt\vtop{\hbox{$#1$}\hbox to\wd1{\hfil$\approx$\hfil}}}{}}
\newcommand{\Underdtilde}[1]{{\setbox1=\hbox{$#1$}\baselineskip=0pt\vtop{\hbox{$#1$}\hbox to\wd1{\hfil\scriptsize$\approx$\hfil}}}{}}
\newcommand{\underdtilde}[1]{{\baselineskip=0pt\vtop{\hbox{$#1$}\hbox{\hfil$\scriptscriptstyle\approx$\hfil}}}{}}
\newcommand{\st}{\mid}
\renewcommand{\th}{{\hbox{\scriptsize th}}}
\newcommand{\Iff}{\mathrel{\leftrightarrow}}
\newcommand{\minus}{\setminus}
\newcommand{\iso}{\cong}
\def\<#1>{\langle#1\rangle}
\newcommand{\ot}{\mathop{\rm ot}\nolimits}
\newcommand{\QEDbox}{\fbox{}}
\newcommand{\cp}{\mathop{\rm cp}}
\newcommand{\TC}{\mathop{\hbox{\sc tc}}}
\newcommand{\ORD}{\mathop{\hbox{\sc ord}}}
\newcommand{\REG}{\mathop{\hbox{\sc reg}}}
\newcommand{\COF}{\mathop{\hbox{\sc cof}}}
\newcommand{\INACC}{\mathop{\hbox{\sc inacc}}}
\newcommand{\CCC}{{\hbox{\sc ccc}}}
\newcommand{\WO}{\mathop{\hbox{\sc wo}}}
\newcommand{\ZFC}{\hbox{\sc zfc}}
\newcommand{\ZF}{\hbox{\sc zf}}
\newcommand{\CH}{\hbox{\sc ch}}
\newcommand{\SH}{\hbox{\sc sh}}
\newcommand{\GCH}{\hbox{\sc gch}}
\newcommand{\SCH}{\hbox{\sc sch}}
\newcommand{\AC}{\hbox{\sc ac}}
\newcommand{\AD}{\hbox{\sc ad}}
\newcommand{\NP}{\mathop{\hbox{\it NP}}\nolimits}
\newcommand{\coNP}{\mathop{\hbox{\rm co-\!\it NP}}\nolimits}
\newcommand{\PD}{\hbox{\sc pd}}
\newcommand{\MA}{\hbox{\sc ma}}
\newcommand{\WA}{\hbox{\sc wa}}
\newcommand{\MP}{\hbox{\sc mp}}
\newcommand{\HOD}{\hbox{\sc hod}}
\newcommand{\MPtilde}{\UnderTilde{\MP}}
\newcommand{\MPccc}{{\MP_{\scriptsize\!\CCC}}}
\newcommand{\ccc}{{\rm\!ccc}}
\newcommand{\PA}{\hbox{\sc pa}}
\newcommand{\inacc}{\hbox{\sc inacc}}
\newcommand{\omegaCK}{{\omega_1^{\rm ck}}}
\def\col#1#2#3{\hbox{\vbox{\baselineskip=0pt\parskip=0pt\cell#1\cell#2\cell#3}}}
\newcommand{\cell}[1]{\boxit{\hbox to 17pt{\strut\hfil$#1$\hfil}}}
\newcommand{\head}[2]{\lower2pt\vbox{\hbox{\strut\footnotesize\it\hskip3pt#2}\boxit{\cell#1}}}
\newcommand{\boxit}[1]{\setbox4=\hbox{\kern2pt#1\kern2pt}\hbox{\vrule\vbox{\hrule\kern2pt\box4\kern2pt\hrule}\vrule}}
\newcommand{\Col}[3]{\hbox{\vbox{\baselineskip=0pt\parskip=0pt\cell#1\cell#2\cell#3}}}
\newcommand{\tapenames}{\raise 5pt\vbox to .7in{\hbox to .8in{\it\hfill input: \strut}\vfill\hbox to
.8in{\it\hfill scratch: \strut}\vfill\hbox to .8in{\it\hfill output: \strut}}}
\newcommand{\Head}[4]{\lower2pt\vbox{\hbox to25pt{\strut\footnotesize\it\hfill#4\hfill}\boxit{\Col#1#2#3}}}
\newcommand{\Dots}{\raise 5pt\vbox to .7in{\hbox{\ $\cdots$\strut}\vfill\hbox{\ $\cdots$\strut}\vfill\hbox{\
$\cdots$\strut}}}
\renewcommand{\dots}{\raise5pt\hbox{\ $\cdots$}}
\newcommand{\factordiagramup}[6]{$$\begin{array}{ccc}
#1&\raise3pt\vbox{\hbox to60pt{\hfill$\scriptstyle
#2$\hfill}\vskip-6pt\hbox{$\vector(4,0){60}$}}\\ \vbox
to30pt{}&\raise22pt\vtop{\hbox{$\vector(4,-3){60}$}\vskip-22pt\hbox
to60pt{\hfill$\scriptstyle #4\qquad$\hfill}}
&\ \ \lower22pt\hbox{$\vector(0,3){45}$}\ {\scriptstyle #5}\\
\vbox to15pt{}&\\
\end{array}$$}
\newcommand{\factordiagram}[6]{$$\begin{array}{ccc}
#1&&\\ \ \ \raise22pt\hbox{$\vector(0,-3){45}$}\ {\scriptstyle #2}
&\raise22pt\hbox{$\vector(2,-1){90}$}\raise5pt\llap{$\scriptstyle#3$\qquad\quad}&\vbox
to25pt{}\\ #4&\raise3pt\vbox{\hbox to90pt{\hfill$\scriptstyle
#5$\hfill}\vskip-6pt\hbox{$\vector(4,0){90}$}}\\
\end{array}$$}
\newcommand{\df}{\it}
\hyphenation{su-per-com-pact-ness}\hyphenation{La-ver} | {"config": "arxiv", "file": "math0307229/LaTeXMacros.tex"} |
\begin{document}
\begin{abstract}
We study preferential attachment mechanisms in random graphs that are parameterized by (i) a constant bias affecting the degree-biased distribution on the vertex set and (ii) the distribution of times at which new vertices are created by the model. The class of random graphs so defined admits a representation theorem reminiscent of residual allocation, or ``stick-breaking'' schemes. We characterize how the vertex arrival times affect the asymptotic degree distribution, and relate the latter to neutral-to-the-left processes. Our random graphs generate edges ``one end at a time'', which sets up a one-to-one correspondence between random graphs and random partitions of natural numbers; via this map, our representation induces a result on (not necessarily exchangeable) random partitions that generalizes a theorem of Griffiths and Span\`o. A number of examples clarify how the class intersects with several known random graph models.
\end{abstract}
\begin{frontmatter}
\title{Preferential attachment and vertex arrival times}
\author{\fnms{Benjamin }\snm{Bloem-Reddy}\ead[label=e1]{benjamin.bloem-reddy@stats.ox.ac.uk}}
\and
\author{\fnms{Peter\ }\snm{Orbanz}\corref{}\ead[label=e2]{porbanz@stat.columbia.edu}}
\affiliation{University of Oxford and Columbia University}
\address{Department of Statistics\\
24--29 St. Giles'\\
Oxford OX1 3LB, UK\\
\printead{e1}
}
\address{Department of Statistics\\
1255 Amsterdam Avenue\\
New York, NY 10027, USA\\
\printead{e2}
}
\end{frontmatter}
\def\kword#1{\textbf{#1}}
\def\xspace{\mathbf{X}}
\def\borel{\mathcal{B}}
\def\mean{\mathbb{E}}
\def\condind{{\perp\!\!\!\perp}}
\def\ie{i.e.\ }
\def\eg{e.g.\ }
\def\equdist{\stackrel{\text{\rm\tiny d}}{=}}
\def\equas{=_{\text{\rm\tiny a.s.}}}
\def\braces#1{{\lbrace #1 \rbrace}}
\def\bigbraces#1{{\bigl\lbrace #1 \bigr\rbrace}}
\def\Bigbraces#1{{\Bigl\lbrace #1 \Bigr\rbrace}}
\def\simiid{\sim_{\mbox{\tiny iid}}}
\def\Law{\mathcal{L}}
\def\iid{i.i.d.\ }
\def\ind#1{\text{\tiny #1}}
\newcommand{\argdot}{{\,\vcenter{\hbox{\tiny$\bullet$}}\,}}
\renewcommand\labelitemi{\raisebox{0.35ex}{\tiny$\bullet$}}
\def\Teven{\mathcal{T}_{\text{\tiny\rm 2}}}
\def\map{\Phi}
\def\DB{\text{\rm DB}}
\def\G{\mathbb{G}}
\def\T{\mathbb{T}}
\def\P{\mathbb{P}}
\def\R{\mathbb{R}}
\def\Barabasi{Barab\'asi}
\def\bbE{\mathbb{E}}
\def\indicator{\mathds{1}}
\def\Bernoulli{\text{\rm Bernoulli}}
\def\PoissonPlus{\text{\rm Poisson}_{+}}
\def\Poisson{\text{\rm Poisson}}
\def\Uniform{\text{\rm Uniform}}
\def\GammaDist{\text{\rm Gamma}}
\def\BetaDist{\text{\rm Beta}}
\def\NBPlus{\text{\rm NB}_{+}}
\def\NB{\text{\rm NB}}
\def\Geom{\text{\rm Geom}}
\def\CRP{\text{\rm CRP}}
\def\EGP{\text{\rm EGP}}
\def\MittagLeffler{\text{\rm ML}}
\def\GGdist{\text{\rm GGa}}
\def\gvar{\mathcal{G}}
\def\bvar{\mathcal{B}}
\def\mlvar{\mathcal{M}}
\def\egc{e.g.}
\def\ct{D}
\def\randperm{\tilde{\sigma}}
\def\randPerm{\tilde{\Sigma}}
\def\ordct{\ct^{\downarrow}}
\def\iatime{\delta}
\def\iaTime{\Delta}
\def\iaInf{\iatime_{1:\infty}}
\def\IaInf{\iaTime_{1:\infty}}
\def\atime{t}
\def\aTime{T}
\def\atimeInf{\atime_{1:\infty}}
\def\aTimeInf{\aTime_{1:\infty}}
\def\aDist{\Lambda}
\def\aDistiid{\tau}
\def\on{\mid}
\def\bbN{\mathbb{N}}
\def\bbP{\mathbb{P}}
\def\sparsity{\varepsilon}
\newcommand{\widesim}[1][1.5]{
\mathrel{\scalebox{#1}[1]{$\sim$}}
}
\newcommand{\limscale}[2]{\overset{\scriptscriptstyle{#1 \uparrow #2}}{\widesim[1.25]}}
The term \emph{preferential attachment} describes generative mechanisms for random graph models
that select the terminal vertices of a new edge with probability biased by the vertex degrees.
These models come in many shapes and guises
\citep[\egc][]{Barabasi:Albert:1999,Berger:etal:2014,Hofstad:2016,Pekoz:Rollin:Ross:2017}, and are
often motivated by their ability to generate (and hence explain) power law distributions.
Degree-biased selection is a form of size bias \citep{Arratia:Goldstein:Kochman:2013:1},
and this interplay between size-biasing and power laws is not confined to random graph models, but
also encountered in random partitions, which are used in population genetics,
machine learning, and other fields \citep[\egc][]{Pitman:2006,deBlasi:etal:2015,Broderick:Jordan:Pitman:2012}.
In partition models, power laws arise as heavy-tailed distributions of block sizes.
Size-biased sampling as such, however, need not result in a power law:
The most basic form of
size-biased sampling from a countable number of categories is a type of P\'olya urn with
an unbounded number of colors, or, equivalently, the one-parameter Chinese
restaurant process \citep{Pitman:2006}. It does not generate a power law.
To obtain power laws, plain size-biased sampling can be modified in two ways:
\begin{enumerate}
\renewcommand\labelenumi{(\roman{enumi})}
\item By biasing the size-biased probability of each category downward.
\item By forcing new categories to arrive at a faster rate than that induced by plain size-biased sampling.
\end{enumerate}
An example is the two-parameter Chinese restaurant process with parameter $(\alpha,\theta)$,
which modifies the Chinese restaurant process with a single parameter $\theta$---a model that corresponds
to plain size-biased sampling---by effectively (i) damping the size bias by a constant offset $\alpha$, and (ii) increasing the rate at which new categories arrive.
An example of (ii) is the \Barabasi--Albert random graph model, in which vertices
arrive at fixed, constant time intervals; if these times were instead determined at random by size-biased sampling, intervals would grow over time.
The premise of this work is to study preferential attachment mechanisms in random graph models
by explicitly controlling these two effects:
\begin{enumerate}
\renewcommand\labelenumi{(\roman{enumi})}
\item The attachment probability, proportional to the degree $\deg_k$ of each vertex $k$, is biased
by a constant offset as ${\deg_k-\,\alpha}$.
\item Vertex arrival times are taken into account, by explicitly conditioning the generation process on a (random or non-random) sequence of given times.
\end{enumerate}
The result is a class of random graphs parametrized by the offset $\alpha$ and a sequence $t$ of vertex
arrival times. Each such $(\alpha,t)$-graph can be generalized by randomizing the arrival times,
\ie to an $(\alpha,T)$-graph for a random sequence $T$.
Preferential attachment models that bias the attachment probability by a constant offset
have been thoroughly studied \citep{Mori:2005,Hofstad:2016}. We consider the range ${\alpha\in(-\infty,1)}$,
and the case ${\alpha\in[0,1)}$ turns out to be of particular interest.
The effects (i) and (ii) are not independent, and in models with a suitable exchangeability
property, the effect of $\alpha$ can equivalently be induced by controlling the law of the arrival times.
In this sense, (ii) can provide more control over the model than (i).
\cref{sec:representation} characterizes $(\alpha,T)$-graphs by a representation in terms of independent beta random variables, reminiscent of stick-breaking
constructions of random partitions.
\cref{sec:graphs:urns} considers implications for random partitions and urns.
${(\alpha,T)}$-graphs generate edges ``one end at a time'', updating the vertex degrees after each step. Although such a scheme differs from the usual preferential attachment model, it is similar to so-called ``sequential'' versions considered by \cite{Berger:etal:2014,Pekoz:Rollin:Ross:2017}. This sets up a one-to-one correspondence between multigraphs and partitions of natural numbers:
There is a bijection $\Phi$ such that
\begin{equation*}
\Phi(\text{partition})=\text{graph}\;,
\end{equation*}
which translates our results on graphs into statements about partitions.
If $G$ is an $(\alpha,T)$-graph, the random partition $\Phi^{-1}(G)$ may or may not be
exchangeable. The subclass of such partitions that are exchangeable are precisely
the exchangeable Gibbs partitions \citep{Gnedin:Pitman:2006,Pitman:2006}. Arrival times in such partitions,
known as \emph{record indices}, have been studied by \citet{Griffiths:Spano:2007}.
Broadly speaking, our results recover those of
Griffiths and Span\`o if $\Phi^{-1}(G)$ is exchangeable, but
show there
is a larger class of random partitions---either of Gibbs type, or not exchangeable---for
which similar results hold. Non-exchangeable examples include partitions defined
by the Yule--Simon process \citep{Yule:1925,Simon:1955}. Additionally,
our representation result for graphs yields an analogous representation for this class
of partitions; it also relates the
work of \citet{Griffiths:Spano:2007} to that of \citet*{Berger:etal:2014}
on Benjamini--Schramm limits of certain random preferential attachment graphs.
\cref{sec:degree:asymptotics} studies degree asymptotics of $(\alpha,T)$-graphs.
Properly scaled degree sequences of such graphs converge. The limiting degrees
are neutral-to-the-left sequences of random variables that satisfy a number of distributional identities.
We characterize cases in which power laws emerge, and
relate the behavior of the degree distribution to sparsity.
The range of power laws achievable is constrained by whether or not the average degree is bounded.
\cref{sec:examples} discusses examples, and shows how the class of $(\alpha,T)$-graphs overlaps
with several known models, such as
the \Barabasi--Albert model \citep{Barabasi:Albert:1999}, edge exchangeable graphs \citep{Crane:Dempsey:2016,Cai:etal:2016,Janson:2017aa},
and the preferential attachment model of \citet*{Aiello:Chung:Lu:2001,Aiello:Chung:Lu:2002}.
We obtain new results for some of these submodels. For preferential attachment graphs, for example,
limiting degree sequences are known to satisfy various
distributional identities \cite{Pekoz:Rollin:Ross:2017,James:2015aa,Janson:2006}. These results assume
fixed intervals between vertex arrival times.
We show that similar results hold if the arrival times are random.
Perhaps most closely related is the work of \citet*{Pekoz:Rollin:Ross:2017aa}
on random immigration times in a two-color P\'{o}lya urn, which corresponds to a certain
$(\alpha,T)$-model with \iid interarrival times.
We use this correspondence to answer a question posed in \citep{Pekoz:Rollin:Ross:2017aa} about
interarrival times with geometric distribution.
\section{Preferential attachment and arrival times}
\label{sec:representation}
Consider an undirected multigraph $g$, possibly with self-loops, with a countably infinite number of edges. The graph models considered
in the following insert edges into the graph one at a time. It is hence convenient to represent $g$ as
a sequence of edges
\begin{equation}
\label{eq:graph:1}
g=\bigl( (l_1,l_2),(l_3,l_4),\ldots \bigr)
\qquad\text{ where }l_j\in\mathbb{N}\text{ for all }j\in\mathbb{N}\;.
\end{equation}
Each pair ${(l_{2n-1},l_{2n})}$ represents an undirected edge connecting the vertices $l_{2n-1}$ and $l_{2n}$.
The vertex set of $g$ is ${\mathbf{V}(g):=\braces{l^*_1,l^*_2,\ldots}}$, the set of all distinct values
occurring in the sequence. We assume vertices are enumerated in order of appearance, to wit
\begin{equation}
\label{eq:graph:2}
l_1=1 \qquad\text{ and }\qquad l_{j+1}\leq \max\braces{l_1,\ldots,l_j}+1\quad\text{ for all }j\in\mathbb{N}\;.
\end{equation}
Consequently, $\mathbf{V}(g)$ is either a consecutive finite set $\braces{1,\ldots,m}$, or the entire set
$\mathbb{N}$. Let $\G$ be the set of multigraphs so defined, equipped with the
topology inherited from the product space $\mathbb{N}^{\infty}$, which makes it a standard
Borel space. For our purposes, a \kword{random graph} is a random element
\begin{equation*}
G=\bigl((L_1,L_2),(L_3,L_4),\ldots)
\end{equation*}
of $\G$, for $\mathbb{N}$-valued random variables $L_n$. Note that the same setup can be used to model directed multigraphs.
For a graph $g$, let ${g_n:=(l_j)_{j\leq 2n}}$ denote the subgraph given by the first
$n$ edges, and $\deg_k(n)$ the degree of vertex $k$ in $g_n$.
The \kword{arrival time} of vertex $k$ is
\begin{equation*}
t_k:=\min\braces{j\in\mathbb{N}\,\vert\,l_j=k}\;,
\end{equation*}
with ${t_k=\infty}$ if $g$ has fewer than $k$ vertices. The set of possible arrival time
sequences is
${\T:=\braces{(1=t_1<t_2<\ldots\leq\infty)}}$.
Having arrival times in ${\Teven:=\braces{t\in\T\,\vert\,t_k\text{ even for }k>1}}$
is a sufficient, though not necessary, condition for $g$ to be a connected graph; it is necessary and sufficient for each $g_n$ to be connected.
If ${T=(T_1,T_2,\ldots)}$ is a random sequence of arrival times, the
interarrival times
\begin{equation*}
\Delta_k:=T_k-T_{k-1} \qquad\text{ where }T_0:=0
\end{equation*}
are random variables with values in ${\mathbb{N}\cup\braces{\infty}}$.
\subsection{Degree-biased random graphs}
The term preferential attachment describes a degree bias: Recall that the
\kword{degree-biased} distribution on the vertices of a graph
$g_n$ with $n$ edges is ${P(k\,;g_n):=\deg_k(n)/2n}$.
We embed $P$ into a one-parameter family of
laws
\begin{equation*}
P_{\alpha}(k\,;g_n)
:=
\frac{\deg_k(n)-\alpha}{2n-\alpha|\mathbf{V}(g_n)|}
\qquad
\text{ for }\alpha\in(-\infty,1)\;,
\end{equation*}
the \kword{$\alpha$-degree biased} distributions.
Both $P$ and $P_{\alpha}$ are defined on a graph $g_n$, in which each edge
is either completely present or completely absent. To permit edges to be
generated ``one end at a time'', we observe $P_{\alpha}$ can be rewritten as
\begin{equation} \label{eq:p:alpha}
P_\alpha(k\,;l_1,\ldots,l_j)=
\frac{|\braces{i\leq j\,\vert\,l_i=k}|-\alpha}{j-\alpha\max\braces{l_1,\ldots,l_j}}
\qquad\text{ for }k\leq \max\braces{l_1,\ldots,l_j}\;,
\end{equation}
which is well-defined even if $j$ is odd.
\begin{graphscheme} Given are ${\alpha\in(-\infty,1)}$ and ${t\in\T}$. Generate
${L_1,L_2,\ldots}$ as
\vspace{-.4em}
\begin{equation*}
L_n:=k \quad\text{ if } n=t_k
\qquad\text{ and }\qquad
L_n\sim P_{\alpha}(\argdot\,;L_1,\ldots,L_{n-1})\quad\text{ otherwise}\;.
\end{equation*}
\end{graphscheme}
Then ${G:=((L_1,L_2),(L_3,L_4),\ldots)}$ is a random graph, whose law we denote ${\DB(\alpha,t)}$.
The sequence $t$ may additionally be randomized: We call $G$
an $(\alpha,T)$-graph if its law is $\DB(\alpha,T)$, for some
random element $T$ of $\T$. Examples of multigraphs generated using different distributions for $T$ are shown in \cref{fig:examples}.
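For concreteness, the scheme is straightforward to simulate. The following is a minimal sketch, illustrative only and not part of the formal development: it assumes Python with \texttt{numpy}, treats arrival times beyond those listed as infinite, and all names are ours.
\begin{verbatim}
# Illustrative sampler for the (alpha, t) graph scheme (assumes numpy).
# Arrival times not listed in t are treated as infinite.
import numpy as np

def sample_labels(alpha, t, n, rng=np.random.default_rng()):
    """Generate L_1, ..., L_n for alpha < 1 and arrival times t, t[0] == 1."""
    arrivals = {tk: k + 1 for k, tk in enumerate(t)}  # time -> vertex label
    L, deg = [], {}
    for m in range(1, n + 1):
        if m in arrivals:                  # a new vertex arrives at time m
            k = arrivals[m]
        else:                              # alpha-degree-biased choice
            labels = list(deg)
            w = np.array([deg[j] - alpha for j in labels])
            k = labels[rng.choice(len(labels), p=w / w.sum())]
        L.append(k)
        deg[k] = deg.get(k, 0) + 1
    return L

L = sample_labels(0.5, [1, 2, 4, 8], 100)
edges = list(zip(L[0::2], L[1::2]))        # consecutive ends form edges
\end{verbatim}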
As a consequence of the family of laws \eqref{eq:p:alpha}, the finite-dimensional distributions of $(\alpha,t)$-graphs have a simple product form.
\begin{proposition} \label{prop:exact:probability}
Let $G_{n/2}$ be the subgraph of an $(\alpha,t)$-graph given by its first $n/2$ edges, that is, by the edge ends $L_1,\ldots,L_n$, and suppose it has $k$ vertices. Then
\begin{align} \label{eq:exact:probability}
\bbP_{\alpha,t}[G_{n/2} & = ((L_1,L_2),\dotsc,(L_{n-1},L_{n}))] \nonumber \\
& = \frac{1}{\Gamma(n-k\alpha)} \prod_{j=1}^{k} \frac{\Gamma(t_j - j\alpha) \Gamma(\#\{i \leq n \mid L_i=j\} - \alpha)}{\Gamma(t_j - 1 - (j-1)\alpha + \delta_1(j)) \Gamma(1 - \alpha)} \;.
\end{align}
\end{proposition}
\subsection{Representation result}
Fix ${\alpha\in(-\infty,1)}$ and ${t\in\T}$.
Let ${\Psi_1,\Psi_2,\ldots}$ be independent random variables with $\Psi_1 = 1$,
\begin{equation}
\label{eq:sb:1}
\Psi_j\sim\text{Beta}\bigl(1-\alpha,t_j-1-(j-1)\alpha\bigr) \quad \text{for} \quad j\geq 2 \;,
\end{equation}
and define
\begin{equation}
\label{eq:sb:2}
W_{j,k}:=
\sum_{i = 1}^{j}
\Psi_i\prod_{\ell = i+1}^k (1-\Psi_{\ell})
\quad\text{ and }\quad
I_{j,k}:=[W_{j-1,k},W_{j,k}) \text{ with }W_{0,k}=0\;.
\end{equation}
Note that ${W_{j,k}=\prod_{\ell=j+1}^k (1-\Psi_{\ell})}$ (by induction on $j$, using ${\Psi_1=1}$) and ${W_{k,k}=1}$. Hence, ${\cup_{j=1}^k I_{j,k}=[0,1)}$. Generate a random sequence ${U_1,U_2,\ldots\simiid\Uniform[0,1)}$. For each $n$, let $t_{k(n)}$ be the preceding arrival time,
\ie the largest $t_k$ with ${t_k\leq n}$, and set
\begin{equation}
\label{eq:sb:3}
L_n :=\begin{cases}
k(n) & \text{ if } n=t_{k(n)}\\
j \text{ such that } U_n\in I_{j,k(n)} & \text{ otherwise}
\end{cases}\;.
\end{equation}
Then ${H(\alpha,t):=((L_1,L_2),\ldots)}$ is a random element of $\G$.
\begin{theorem}
\label{theorem:sb}
A random graph $G$ is an $(\alpha,T)$-graph
if and only if
${G\equdist H(\alpha,T)}$ for some ${\alpha\in(-\infty,1)}$
and a random element $T$ of $\T$.
\end{theorem}
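\cref{theorem:sb} yields a two-stage sampler: draw the weights $\Psi_j$ up front, then assign each non-arrival edge end independently given those weights. Again a minimal, illustrative sketch assuming \texttt{numpy}:
\begin{verbatim}
# Illustrative sampler for H(alpha, t): draw Psi_2, Psi_3, ... first,
# then place each non-arrival end by a uniform landing in I_{j,k(n)}.
import numpy as np

def sample_H(alpha, t, n, rng=np.random.default_rng()):
    K = len(t)                     # arrivals beyond t are taken infinite
    psi = np.array([1.0] + [rng.beta(1 - alpha, t[j] - 1 - j * alpha)
                            for j in range(1, K)])   # psi[j-1] = Psi_j
    L = []
    for m in range(1, n + 1):
        k = sum(tk <= m for tk in t)                 # vertices so far
        if m in t:
            L.append(k)
            continue
        q = 1.0 - psi[1:k]                           # 1 - Psi_2, ..., 1 - Psi_k
        tails = np.append(np.cumprod(q[::-1])[::-1], 1.0)
        w = psi[:k] * tails                          # lengths of I_{1,k},...,I_{k,k}
        L.append(1 + rng.choice(k, p=w / w.sum()))
    return L
\end{verbatim}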
Products of the form \eqref{eq:sb:2}, for the same sequence of beta variables $\Psi_j$, have previously
appeared in two separate contexts:
\citet{Griffiths:Spano:2007} identify $\Psi_j W_{j,\infty}$ as the limiting relative block sizes in
exchangeable Gibbs partitions, conditioned on the block arrival times. This corresponds to the special
case of \cref{theorem:sb} where the random variables ${(L_1,L_2,\ldots)}$ define an exchangeable Gibbs
partition (see \cref{sec:graphs:urns,sec:exchangeable:partitions}). In work of \citet*{Berger:etal:2014}, a version of \eqref{eq:sb:1}--\eqref{eq:sb:3} arises
as the representation of the Benjamini--Schramm limit of certain preferential attachment graphs
(in which case all interarrival times are fixed to a single constant).
That two problems so distinct lead to the same
(and arguably not entirely obvious) distribution
raises the question whether \eqref{eq:sb:1} can be understood in a more
conceptual way. One such way is by regarding the graph as a recursive sequence of P\'{o}lya urns: Conditionally on an edge end attaching to one of the first $k$ vertices, it attaches to vertex $k$ with probability $\Psi_k$ and to one of the first $k-1$ vertices with probability $1-\Psi_k$, and so on for $k-1,\dotsc,2$. A related interpretation is in terms of the special properties of beta and gamma random variables.
Let $\gvar_a$ and $\bvar_{a,b}$ generically denote a gamma random variable with parameters $(a,1)$ and a beta
variable with parameters $(a,b)$.
Beta and gamma random variables satisfy a set of relationships sometimes referred to collectively
as the \emph{beta-gamma algebra} \citep[e.g.][]{Revuz:Yor:1999}. These relationships revolve around the fact that, if $\gvar_a$ and
$\gvar_b$ are independent, then
\begin{equation}
\label{eq:beta:gamma:algebra}
\bigl(\gvar_{a+b},\bvar_{a,b}\bigr)
\equdist
\Bigl(\gvar_a+\gvar_b,\frac{\gvar_a}{\gvar_a+\gvar_b}\Bigr)\;,
\end{equation}
where the pair on the left is independent, and so is the pair on the right.
In the context of $(\alpha,T)$-graphs, conditionally on the sequence ${\Delta_1,\Delta_2,\ldots}$ of interarrival times,
generate two sequences of gamma variables
\begin{equation*}
\gvar^{(1)},\gvar^{(2)},\ldots\simiid\GammaDist(1-\alpha,1)
\qquad\text{ and }\qquad
\gvar_{\Delta_2-1},\gvar_{\Delta_3-1},\ldots\;,
\end{equation*}
all mutually independent given ${(\Delta_k)}$.
The variables $\Psi_k$ can then be represented as
\begin{equation}
\label{eq:recursion}
\Psi_j\quad\equdist\quad\frac{\gvar^{(j)}}{\sum_{i\leq j}\gvar^{(i)}+\sum_{i<j}\gvar_{\Delta_{i+1}-1}}\quad=:\quad\Psi_j'\;.
\end{equation}
Such recursive fractions are not generally independent, but as a consequence of \eqref{eq:beta:gamma:algebra},
equality in law holds even jointly, ${(\Psi_1,\Psi_2,\ldots)\equdist(\Psi_1',\Psi_2',\ldots)}$,
recovering the variables in \cref{theorem:sb}.
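As a check on the parameters, note that the total shape in the denominator of \eqref{eq:recursion} is
\begin{equation*}
j(1-\alpha)+\sum_{i<j}(\Delta_{i+1}-1) \;=\; j(1-\alpha)+t_j-j \;=\; t_j-j\alpha\;,
\end{equation*}
using ${t_1=1}$; the beta-gamma algebra \eqref{eq:beta:gamma:algebra} then gives
${\Psi_j'\sim\BetaDist\bigl(1-\alpha,\,t_j-j\alpha-(1-\alpha)\bigr)=\BetaDist\bigl(1-\alpha,\,t_j-1-(j-1)\alpha\bigr)}$, in agreement with \eqref{eq:sb:1}.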
Identity \eqref{eq:beta:gamma:algebra} further implies
${\bvar_{a,b+c}\equdist \bvar_{a,b}\bvar_{a+b,c}}$, again with independence on the right.
Abbreviate
\begin{equation*}
\tau_j:=t_j-1-\alpha(j-1) \qquad\text{ such that \eqref{eq:sb:1} becomes }\qquad \Psi_j\equdist \bvar_{1-\alpha,\tau_j}\;.
\end{equation*}
The recursion \eqref{eq:recursion} then implies
\begin{equation*}
\bvar_{1-\alpha,\tau_j}\equdist \bvar_{1-\alpha+\tau_{j-1},\Delta_j-\alpha}\bvar_{1-\alpha,\tau_{j-1}}
\quad\text{ hence }\quad
\Psi_j|\Psi_{j-1}\equdist\Psi_{j-1}\bvar_{1-\alpha+\tau_{j-1},\Delta_j-\alpha}\;,
\end{equation*}
with independence on the right of both identities.
Informally, one may think of ${\gvar^{(k)}}$ as an (unnormalized) propensity of vertex $k$ to attract edges, of those edges attaching to one of the first $k$ vertices. The requisite
normalization in \eqref{eq:recursion}
depends on propensities of previously created vertices
(represented by the variables ${\gvar^{(1)},\ldots,\gvar^{(k-1)}}$), and contributions of the ``head start'' given to previously created vertices
(represented by the variables $\gvar_{\Delta_j - 1}$).
\section{Graphs and urns}
\label{sec:graphs:urns}
Any graph in $\G$ defines a partition of $\mathbb{N}$, and vice versa.
This fact is used below to classify $\alpha$-degree biased graphs according to the
properties of the random partition they define. More precisely,
a \kword{partition} of $\mathbb{N}$ is a sequence
${\pi=(b_1,b_2,\ldots)}$ of subsets ${b_i\subset\mathbb{N}}$, called \kword{blocks},
such that each ${n\in\mathbb{N}}$ belongs to one and only one block.
The set of all partitions is denoted ${\mathcal{P}(\mathbb{N})}$, and
inherits the topology of ${\mathbb{N}^{\infty}}$.
A partition can equivalently
be represented as a sequence ${\pi=(l_1,l_2,\ldots)}$ of block labels, where
${l_j=k}$ means ${j\in b_k}$. There is hence a bijective map
\begin{equation*}
\map:\mathcal{P}(\mathbb{N})\rightarrow\G
\qquad\text{ given by }\qquad
(l_1,l_2,\ldots)\mapsto\bigl((l_1,l_2),(l_3,l_4),\ldots\bigr)\;,
\end{equation*}
which is a homeomorphism of $\mathcal{P}(\mathbb{N})$ and $\G$.
It identifies blocks of $\pi$ with vertices of ${g=\map(\pi)}$.
In population genetics, the smallest element of the $k$th block of a partition $\pi$ is known
as a \emph{record index} \cite{Griffiths:Spano:2007}.
Thus, the $k$th arrival time in $g$ is precisely the $k$th
record index of $\pi$.
The generative process of a random partition $\Pi$ can be thought of as an urn: Start with an empty urn, and add consecutively
numbered balls one at a time, each colored with a randomly chosen color. Colors may reoccur, and are
enumerated in order of first appearance. Let $B_k(n)$ be the set of all balls sharing
the $k$th color after $n$ balls have been added. For ${n\rightarrow\infty}$, one obtains
a random partition ${\Pi=(B_1,B_2,\ldots)}$ of $\mathbb{N}$, with blocks ${B_k:=\cup_n B_k(n)}$.
In analogy to the $(\alpha,t)$-graphs above, we define:
\begin{urnscheme} Given are ${\alpha\in(-\infty,1)}$ and ${t\in\T}$.
\vspace{-.4em}
\begin{itemize}
\item If ${n=t_k}$ for some $k$, add a single ball of a new, distinct color to the urn.
\item Otherwise, add a ball of a color already present in the urn, where
the $j$th color is chosen with probability proportional to
${|B_j(n)|-\alpha}$.
\end{itemize}
\end{urnscheme}
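Since the urn differs from the graph scheme only in how the output is read, it can be simulated through $\Phi^{-1}$; a sketch, reusing the hypothetical helper \texttt{sample\_labels} from \cref{sec:representation}:
\begin{verbatim}
# The (alpha, t)-urn is the label sequence underlying the graph scheme;
# the blocks of the partition are the preimages of the labels.
def sample_urn(alpha, t, n_balls):
    L = sample_labels(alpha, t, n_balls)   # sketch from Section 2
    blocks = {}
    for ball, color in enumerate(L, start=1):
        blocks.setdefault(color, []).append(ball)
    return blocks                          # color -> block of ball indices
\end{verbatim}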
A familiar special case of such an urn is the P\'olya urn with $m$ colors, obtained for
${\alpha=0}$ and ${t=(1,2,\ldots,m,\infty,\infty,\ldots)}$. Another is the two-parameter
Chinese restaurant process \citep{Pitman:2006}, also known as the Blackwell--MacQueen urn \citep{Blackwell:MacQueen:1973,Pitman:1996aa}: If $t$ is randomized by
generating ${(T_1=1,T_2,T_3,\ldots)}$ according to
\begin{equation} \label{eq:crp:arrivals}
\mathbb{P}[T_{k+1}=T_{k}+t \on T_k]
=
(\theta + \alpha k)
\frac{ \Gamma(\theta + T_k) \Gamma(T_k + t - 1 - \alpha k) }{ \Gamma(\theta + T_k + t) \Gamma(T_k - \alpha k) }\;,
\end{equation}
for some ${\theta>-\alpha}$,
the partition has law ${\text{CRP}(\alpha,\theta)}$.
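As a sanity check, for ${\alpha=0}$ the right-hand side of \eqref{eq:crp:arrivals} telescopes into
\begin{equation*}
\mathbb{P}[T_{k+1}=T_{k}+t \on T_k]
=
\Bigl(\prod_{n=T_k}^{T_k+t-2}\frac{n}{\theta+n}\Bigr)\frac{\theta}{\theta+T_k+t-1}\;,
\end{equation*}
the probability that, in the one-parameter Chinese restaurant process, the next ${t-1}$ customers join occupied tables and the following customer opens a new one.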
In general, $(\alpha,t)$-urns define a class of random partitions $\Pi$ that are \emph{coherent}, in the sense that
\begin{align*}
\bbP( \Pi_{n-1} = \{B_1,\dotsc,B_k\} ) = \sum_{j=1}^{k+1} \bbP( \Pi_{n} = \mathcal{A}_{n\to j}(\Pi_{n-1}) ) \;,
\end{align*}
where $\mathcal{A}_{n\to j}(\Pi_{n-1})$ denotes the operation of appending $n$ to block $B_j$ in $\Pi_{n-1}$. Partitions for which these probabilities depend only on the sizes of the blocks, and which are therefore invariant under permutations of the elements, are exchangeable random partitions \cite{Pitman:2006}. That is, there is an \emph{exchangeable partition probability function} (EPPF) $p(\cdot)$, symmetric in its arguments, such that
\begin{align*}
p(|B_1|,\dotsc,|B_k|) = \bbP(\Pi_n = \{ B_1,\dotsc, B_k \}) \;,
\end{align*}
which is invariant under the natural action of the symmetric group. A special subclass are the exchangeable partitions of \emph{Gibbs type}, for which the EPPF has the unique product form \cite{Gnedin:Pitman:2006}
\begin{align} \label{eq:egp:eppf}
p( |B_1|,\dotsc,|B_k| ) =
V_{n,k} \prod_{j=1}^k \frac{\Gamma(|B_j| - \alpha)}{\Gamma(1 - \alpha)} \;,
\end{align}
for a suitable sequence of coefficients $V_{n,k}$ satisfying the recursion
\begin{align} \label{eq:egp:recursion}
V_{n,k} = (n - \alpha k) V_{n+1,k} + V_{n+1,k+1} \;.
\end{align}
The distribution of the arrival times can be deduced from \eqref{eq:egp:eppf} and \eqref{eq:egp:recursion} as
\begin{align} \label{eq:egp:arrivals}
\bbP[T_{k+1} = T_k + t \on T_k] = \frac{\Gamma(T_{k} + t - 1 - \alpha k)}{\Gamma(T_k - \alpha k)} \frac{V_{T_{k} + t,k+1}}{V_{T_k,k}} \;,
\end{align}
of which \eqref{eq:crp:arrivals} for the CRP is a special case. Denote the law of $T_1,\dotsc,T_k$ generated by \eqref{eq:egp:arrivals} as $P_{\alpha,V}(T_1,\dots,T_k)$.
Alternatively, consider the $(\alpha,T)$-urn counterpart of the EPPF, given in \cref{prop:exact:probability},
\begin{align} \label{eq:cppf}
p_{\alpha,T}( |B_1|,\dotsc,|B_k|; & \ T_1,\dots,T_k ) = \bbP[\Pi_n = \{B_1,\dotsc,B_k\} \mid T_1,\dots,T_k ] \\
& = \frac{1}{\Gamma(n - k\alpha)} \prod_{j=1}^{k} \frac{\Gamma(T_j - j\alpha) \Gamma(|B_j| - \alpha)}{\Gamma(T_j - 1 - (j-1)\alpha + \delta_1(j)) \Gamma(1 - \alpha)} \nonumber \;.
\end{align}
Define
\begin{align} \label{eq:cond:gibbs:v}
V_{n,k}^{\alpha,T} := \frac{1}{\Gamma(n - k\alpha)} \prod_{j=1}^{k} \frac{\Gamma(T_j - j\alpha)}{\Gamma(T_j - 1 - (j-1)\alpha + \delta_1(j))} \;,
\end{align}
in which case \eqref{eq:cppf} takes on the Gibbs-like form
\begin{align*}
p_{\alpha,T}( |B_1|,\dotsc,|B_k|; & \ T_1,\dots,T_k ) = V_{n,k}^{\alpha,T} \prod_{j=1}^k \frac{\Gamma(|B_j| - \alpha)}{\Gamma(1 - \alpha)} \;.
\end{align*}
This general formula holds for all $(\alpha,T)$-urns. In the case that $\Pi$ is exchangeable, these relationships imply a further characterization of exchangeable Gibbs partitions: \eqref{eq:egp:eppf} is obtained by marginalizing the arrival times from \eqref{eq:cppf} according to $P_{\alpha,V}$.
\begin{proposition}
Let $\Pi$ be a random partition generated by an $(\alpha,T)$-urn, with finite-dimensional conditional distributions given by \eqref{eq:cppf}. Then $\Pi$ is exchangeable if and only if there exists some sequence of coefficients $V = (V_{n,k})$ satisfying
\begin{align*}
V_{n,k} & = \sum_{\substack{T_1,\dotsc,T_k \\ T_k \leq n}} \frac{P_{\alpha,V}(T_1,\dots,T_k) }{\Gamma(n - k\alpha)} \prod_{j=1}^{k} \frac{\Gamma(T_j - j\alpha)}{\Gamma(T_j - 1 - (j-1)\alpha + \delta_1(j))}
= \bbE[V_{n,k}^{\alpha,T}] \;,
\end{align*}
for all $k\leq n$, in which case \eqref{eq:egp:eppf} holds and $\Pi$ is an exchangeable Gibbs partition.
\end{proposition}
It is straightforward to verify that $\Phi(\Pi)$ is an $(\alpha,t)$-graph if and only if
$\Pi$ is an $(\alpha,t)$-urn. This correspondence is used in \cref{sec:examples} to
classify some ${(\alpha,t)}$-graphs according to the urns they define.
It also allows us to translate properties of random graphs into properties of random partitions,
and vice versa. \cref{theorem:sb} implies the following result on partitions,
which gives a representation of exchangeable Gibbs partitions.
\begin{corollary}
\label{corollary:G:S}
A random partition $\Pi$ is an ${(\alpha,t)}$-urn if and only if
it is distributed as ${\Pi\equdist (L_1,L_2,\ldots)}$,
for variables $L_n$ generated according to \eqref{eq:sb:1}--\eqref{eq:sb:3}.
\end{corollary}
\section{Degree asymptotics}
\label{sec:degree:asymptotics}
Let $G$ be an $(\alpha,t)$-graph, and $G_n$ the subgraph given by its first $n$ edges. The
\kword{degree sequence} of $G_n$ is the vector $\mathbf{D}(n)=(\deg_k(n))_{k\geq 1}$, where vertices are ordered by appearance. Denote by $m_d(n)$ the number of vertices in $G_n$ with degree $d$. The \kword{empirical degree distribution}
\begin{align*}
(p_d(n))_{d\geq 1} := |\mathbf{V}(G_n)|^{-1}(m_d(n))_{d\geq 1}
\end{align*}
is the probability that a vertex sampled uniformly at random from $G_n$ has degree $d$. The degree sequence and the degree distribution as $G_n$ grows large are characterized by the scaling behavior induced by $\alpha$ and $t$, which yields
power laws and related properties.
\subsection{Linear and sub-linear regimes}
\label{sec:linear:sublinear}
As will become clear in the next section, the scaling behavior of $(\alpha,t)$-graphs is the result of products of the form
\begin{align} \label{eq:tail:product}
W_{j,k} = \prod_{i=j+1}^k (1-\Psi_i) \quad \text{as} \quad k\to\infty \;,
\end{align}
where $(\Psi_j)_{j>1}$ are as in \eqref{eq:sb:1}. In particular, two regimes of distinct limiting behavior emerge. To which of the two regimes an $(\alpha,t)$-graph belongs is determined by whether or not $W_{j,k}$ converges to a non-zero value as $k\to\infty$.
We consider $(\alpha,t)$-graphs that satisfy the following assumption:
\begin{align} \label{eq:vertex:arrival:rate}
|\mathbf{V}(G_n)|/n^{\sigma} \xrightarrow[n\to\infty]{\text{\small a.s.}} \mu_{\sigma}^{-\sigma} \quad \text{ for some } \quad 0 < \sigma \leq 1 \quad \text{ and } \quad 0 < \mu_{\sigma} < \infty \;.
\end{align}
Slower vertex arrival rates (\egc, logarithmic) result in graphs that are almost surely dense (see \cref{sec:degree:distributions}), and as such exhibit less interesting structural properties. For example,
in order to generate power law distributions in $(\alpha,t)$-graphs, the asymptotic arrival rate must be super-logarithmic, which follows from work on exchangeable random partitions and can be read from \cite[Chapter 3]{Pitman:2006}.
For a growing graph sequence satisfying the assumption \eqref{eq:vertex:arrival:rate}, consider the limiting average degree,
\begin{align*}
\lim_{n\to\infty} \bar{d}_n = \lim_{n\to\infty} \frac{2n}{\mu^{-\sigma}_{\sigma}n^{\sigma}} = \lim_{n\to\infty} \frac{2}{\mu^{-\sigma}_{\sigma}} n^{1-\sigma} \;.
\end{align*}
The average degree is almost surely bounded if $\sigma=1$, which we call the \kword{linear} regime; for $\sigma\in(0,1)$, the \kword{sub-linear} regime, it diverges. This is a consequence of \cref{prop:tail:product:convergence} below: For a graph $G_n$ on $k(n)$ vertices, the probability that the next edge end attaches to vertex $j$ equals $\Psi_j W_{j,k(n)}$; vertex $j$ therefore participates in a constant proportion of edges if and only if $W_{j,k(n)}$ is bounded away from zero as $n$ grows large.
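For instance, if \eqref{eq:vertex:arrival:rate} holds with ${\sigma=1}$ and ${\mu_1=2}$, then ${|\mathbf{V}(G_n)|\approx n/2}$ and the average degree converges to ${2\mu_1=4}$; if it holds with ${\sigma=\tfrac{1}{2}}$ and ${\mu_{1/2}=1}$, then ${|\mathbf{V}(G_n)|\approx n^{1/2}}$ and ${\bar{d}_n\approx 2n^{1/2}}$ diverges.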
\begin{proposition} \label{prop:tail:product:convergence}
For fixed $\alpha\in(-\infty,1)$ and $t\in\T$ such that \eqref{eq:vertex:arrival:rate} is satisfied for some $\sigma\in(0,1]$, let $W_{j,k}$ be as in \eqref{eq:tail:product}. Then for each $j\geq 1$, $W_{j,k}$ converges almost surely as $k\to\infty$ to some random variable $W_{j,\infty}$, which is non-zero if and only if $\sigma<1$.
\end{proposition}
\begin{remark}
For slower vertex arrivals (\egc logarithmic) or when the limiting number of vertices is finite, $W_{j,k(n)}$ also converges to a non-zero value.
\end{remark}
\subsection{Limiting joint distributions of degree sequences for given arrival times}
\label{sec:deg:dist:fixed}
The previous section suggests that the limit of the scaled degrees should depend on the random variables $(\Psi_j)_{j>1}$. Indeed, for any $(\alpha,t)$-graph $G$, it can be shown that for any $r\in\bbN_+$,
\begin{align} \label{eq:limit:seq:full}
\bigl( n^{-1} \deg_j(n) \bigr)_{1\leq j \leq r} \xrightarrow[n\to\infty]{\text{\small a.s.}} \bigl( \xi_j \bigr)_{1\leq j \leq r} \quad \text{where} \quad \xi_j \equdist \Psi_j \prod_{i=j+1}^{\infty} (1-\Psi_i) \;,
\end{align}
and $(\Psi_j)_{j>1}$ are as in \eqref{eq:sb:1}. \Citet{Griffiths:Spano:2007} showed that relative degrees with such a limit uniquely characterize exchangeable Gibbs partitions among all exchangeable partitions; if the random partition ${\Phi^{-1}(G)}$ is exchangeable, that result applies to $G$ (see \cref{sec:exchangeable:partitions}). For a general ${(\alpha,t)}$-graph, ${\Phi^{-1}(G)}$ need not be exchangeable, and indeed there are examples for which ${n^{-1}\mathbf{D}(n)}$ converges to zero, in which case $W_{j,k(n)}$ does, as well.
In such cases, one may ask more generally whether a finite, non-zero limit
\begin{align*}
\mathbf{D}_{\infty} := \lim_{n\to\infty} n^{-1/\gamma} \mathbf{D}(n) \;,
\end{align*}
exists for an appropriate scaling exponent $\gamma$. \cref{thm:limiting:degree:sequence} establishes that this is true for $(\alpha,t)$-graphs.
\begin{theorem} \label{thm:limiting:degree:sequence}
Let $G$ be an $(\alpha,t)$-graph for some $\alpha\in(-\infty,1)$ and $t\in \T$. Then \eqref{eq:limit:seq:full} holds. If $t$ is such that \eqref{eq:vertex:arrival:rate} holds with $\sigma=1$, assume
\begin{align*}
\lim_{j\to \infty} \frac{t_j}{j} = \mu \in (1,\infty) \;.
\end{align*}
Then for every $r\in \bbN_+$, there exist a positive, finite constant $M_r(t)$ and positive random variables $\zeta_1,\dotsc,\zeta_r$ such that
\begin{align*}
M_r(t) n^{-r/\gamma} \deg_1(n)\dotsm \deg_r(n) \xrightarrow[n\to\infty]{\text{\small a.s.}} \zeta_1 \dotsm \zeta_r \quad \text{where} \quad \gamma = \frac{\mu - \alpha}{\mu - 1} \;.
\end{align*}
The mixed moments also converge: For any $p_1,\dotsc,p_r > -(1-\alpha)/2$ with $\bar{p}=\sum_{j=1}^r p_j$, there exists some $M_{\bar{p}}(t)\in(0,\infty)$ such that
\begin{align} \label{eq:limiting:degree:sequence}
M_{\bar{p}}(t) \mathbb{E}\bigl[ \lim_{n\to\infty} n^{-\bar{p}/\gamma} \deg_1(n)^{p_1}\dotsm \deg_r(n)^{p_r}\bigr] = \mathbb{E}\bigl[\zeta_1^{p_1}\dotsm \zeta_r^{p_r}\bigr]\;.
\end{align}
Furthermore,
\begin{align} \label{eq:degree:sequence:g}
\bigl(\zeta_j \bigr)_{1\leq j \leq r} \equdist \bigl( \Psi_j \prod_{i=j+1}^r (1 - \Psi_i) \bigr)_{1\leq j \leq r} \;,
\end{align}
where $\Psi_1=1$ and $(\Psi_j)_{j>1}$ are as in \eqref{eq:sb:1}.
\end{theorem}
In the sub-linear regime, \eqref{eq:limit:seq:full} agrees with and generalizes the result of \cite{Griffiths:Spano:2007} for exchangeable Gibbs partitions (though the proof uses different methods). In the linear regime, the mixed moments of the scaled degrees also converge to those of products of independent beta random variables.
However, the result does \emph{not} completely describe the joint distributions, due to the presence of
the unknown scaling terms $M_{\bar{p}}(t)$. These terms depend on the moments $p_1,\dotsc,p_r,$ and on $t$, and express the randomness remaining, for large $k(n)$, in $W_{j,k(n)}$ after the part that scales with $n$ is removed; in particular, they result from early fluctuations of the process. \cref{sec:examples} provides stronger results in several cases for which these terms are
well-behaved.
\subsection{Neutrality}
It was noted in \cref{sec:graphs:urns} that the map $\Phi^{-1}$ from graphs to partitions translates results on graphs into results on partitions. Conversely, one can transfer properties from partitions to graphs. A sequence ${(X_1,X_2,\ldots)}$ of random variables is \kword{neutral-to-the-left} (NTL) if the relative increments
\begin{align*}
X_1,\frac{X_2}{X_1+X_2},\ldots,\frac{X_j}{\sum_{i=1}^j X_{i}},\ldots
\end{align*}
are independent random variables in (0,1) \cite{Doksum:1974,Griffiths:Spano:2007}. If $\Pi$ is an exchangeable partition, \citet{Griffiths:Spano:2007} show that the limiting relative block sizes of $\Pi$ are NTL if and only if $\Pi$ is an exchangeable Gibbs partition. If so, the random graph ${\Phi(\Pi)}$ has a limiting degree sequence $\mathbf{D}_{\infty}$ that is NTL. Due to the representation in \cref{theorem:sb}, this property generalizes
beyond the exchangeable case:
\begin{corollary}
The limiting degree sequence $\mathbf{D}_{\infty}$ of an $(\alpha,t)$-graph is NTL.
\end{corollary}
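This can be verified directly from the representation: with ${\xi_j = \Psi_j\prod_{i=j+1}^{\infty}(1-\Psi_i)}$ as in \eqref{eq:limit:seq:full}, the identity noted after \eqref{eq:sb:2} gives ${\sum_{i=1}^{j}\xi_i=\prod_{\ell=j+1}^{\infty}(1-\Psi_{\ell})}$, hence
\begin{equation*}
\frac{\xi_j}{\sum_{i=1}^{j}\xi_i}=\Psi_j\;,
\end{equation*}
and the relative increments of the limiting degrees are the independent beta variables $\Psi_j$.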
\subsection{Sparsity and power law degrees}
\label{sec:degree:distributions}
Suppose $G_n$ is the subgraph of an $(\alpha,T)$-graph $G$, given by the first $n$ edges. Since $G_n$ is finite,
one can sample a vertex uniformly at random from its vertex set
and report its degree $D_n$. One can then ask whether the sequence of random degrees $D_n$ converges in distribution
to some limiting variable $D$. We show in this section that this is indeed the case for $(\alpha,T)$-graphs, under some regularity conditions. We also show how the degree distribution is related to the sparsity, or, equivalently, the edge density, of $(G_n)$.
The sequence ${(G_n)}$ is defined to be $\sparsity$-\kword{dense} if
\begin{align}
\label{epsilon:density}
\underset{n\to\infty}{\lim\sup} \frac{n}{|\mathbf{V}(G_n)|^{\sparsity}} = c_{\sparsity} > 0 \quad \text{for some} \quad
\sparsity\geq 1 \;.
\end{align}
If $\sparsity < 2$, the graph sequence is typically called \emph{sparse}; when $\sparsity \geq 2$, the sequence is \emph{dense}. Note that ${\sparsity > 2}$ is only possible for multigraphs.
The level of sparsity follows from $\sigma$: Graph models in the linear regime
correspond to $\sparsity = 1$ \cite{Bollobas:Janson:Riordan:2007,Berger:etal:2014,Aiello:Chung:Lu:2002}; graph models in the sub-linear regime with $\sigma > \frac{1}{2}$ have appeared in the literature \cite{Caron:Fox:2017,Veitch:Roy:2015,Crane:Dempsey:2016,Cai:etal:2016}, with $1 < \sparsity < 2$. See \cref{sec:examples} for examples.
For functions $a$ and $b$, we use the notation
\begin{align*}
a(n)\limscale{n}{\infty} b(n)
\quad:\Leftrightarrow\quad
\lim_{n\to\infty} a(n)/b(n) \to 1 \;.
\end{align*}
The sequence $(G_n)$ has \kword{power law degree distribution} with exponent $\eta > 1$ if
\begin{align}
p_d(n) = \frac{m_d(n)}{|\mathbf{V}(G_n)|} \xrightarrow[n\to\infty]{} p_d \limscale{d}{\infty} L(d)d^{-\eta} \quad \text{for all large $d$} \;,
\end{align}
for some slowly varying function $L(d)$, that is, ${\lim_{x\to\infty} L(rx)/L(x) = 1}$ for all ${r>0}$ \cite{Bingham:1989,Feller:1971}.
In the sub-linear regime, the degree distribution follows from results due to Pitman and Hansen \cite[Lemma 3.11]{Pitman:2006}, see also \cite{Gnedin:Hansen:Pitman:2007}, on the limiting block sizes of exchangeable random partitions (see \cref{sec:exchangeable:partitions} for more details). In particular, if \eqref{eq:vertex:arrival:rate} is satisfied by an $(\alpha,t)$-graph $G^{\alpha}=\Phi(\Pi^{\alpha})$ for $\sigma=\alpha\in(0,1)$, then there exist an exchangeable random partition $\Pi$ and a positive, finite random variable $S$ such that ${|\mathbf{V}(\Phi(\Pi_{2n}))|/n^{\alpha} \xrightarrow[n\to\infty]{\text{\small a.s.}} S}$, with $\Pi=\Pi^{\alpha}$ and $S=\mu_{\alpha}^{-\alpha}$. The limiting degree distribution is
\begin{equation}
\label{eq:degree:distn:sublinear}
p^{\alpha}_d
\quad=\quad
\alpha \frac{\Gamma(d - \alpha)}{\Gamma(d+1) \Gamma(1 - \alpha)}
\quad\limscale{d}{\infty}\quad
\frac{\alpha}{\Gamma(1-\alpha)} d^{-(1+\alpha)} \;.
\end{equation}
In the linear regime, $\sigma=1$, with limiting mean interarrival time $\mu_1$. We show (see \cref{sec:proof:degree:distributions}) that the resulting limiting degree distribution is a generalization of the classical Yule--Simon distribution (which corresponds to $\alpha=0$) \cite{Yule:1925,Simon:1955,Durrett:2006},
\begin{equation}
\label{eq:degree:distn:linear}
p^{\gamma}_d
\quad=\quad
\gamma \frac{ \Gamma(d - \alpha) \Gamma(1 - \alpha + \gamma) }{ \Gamma(d + 1 - \alpha + \gamma) \Gamma(1 - \alpha) }
\quad
\limscale{d}{\infty}
\quad
\gamma \frac{ \Gamma(1 - \alpha + \gamma) }{ \Gamma(1 - \alpha) }d^{-(1+ \gamma)} \;,
\end{equation}
where ${\gamma := \frac{\mu_{1} - \alpha}{\mu_{1} - 1}}$, as in \eqref{eq:limiting:degree:sequence}.
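As a check, the \Barabasi--Albert tree of \cref{sec:preferential:attachment:trees} below corresponds to ${\alpha=0}$ with constant interarrival time $2$, so ${\mu_1=2}$ and ${\gamma=2}$, and \eqref{eq:degree:distn:linear} recovers the well-known power law exponent ${1+\gamma=3}$ of that model.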
The tail behaviors of the two distributions \eqref{eq:degree:distn:sublinear} and \eqref{eq:degree:distn:linear} partition the range of possible values of the power law exponent, as summarized by the following theorem.
\begin{theorem}
\label{theorem:degree:distn}
Let $G$ be a random $(\alpha,T)$-graph for some $\alpha \in (-\infty,1)$ and $T\in\T$. If
\begin{align*}
T_j j^{-1/\sigma} \xrightarrow[j\to\infty]{\text{\small a.s.}} \mu_{\sigma} \quad \text{ for some } \quad \sigma \in (0,1] \quad \text{ and } \quad 1 < \mu_{\sigma} < \infty \;,
\end{align*}
then $G$ has $\sparsity$-density with $\sparsity=1/\sigma$. If $\sigma=1$, assume that $\bbE[\Delta_j]=\mu_1$, ${\text{Var}(\Delta_j) < \infty}$ for all $j\in\bbN_+$, and $\lvert\text{Cov}(\Delta_i,\Delta_j)\rvert \leq C^2_{\Delta}\lvert i-j\rvert^{-\ell_{\Delta}}$ for all $i,j > 1$, some $C^2_{\Delta}\geq 0$, and some $\ell_{\Delta}>0$.
Then the degree distribution converges asymptotically,
\begin{align*}
\frac{m_d(n)}{|\mathbf{V}(G_n)|} \xrightarrow[n\to\infty]{\text{\small p}}
\begin{cases}
p^{\alpha}_d & \text{ if } \sigma = \alpha \in (0,1) \\
p^{\gamma}_d & \text{ if } \sigma = 1
\end{cases} \;,
\end{align*}
which for large $d$ follows a power law with exponent
\begin{align*}
\eta =
\begin{cases}
1 + \alpha & \in (1,2) \quad\;\; \text{ if } \sigma = \alpha \in (0,1) \\
1 + \gamma & \in (2,\infty) \quad \text{ if } \sigma = 1
\end{cases} \;.
\end{align*}
\end{theorem}
The distributions \eqref{eq:degree:distn:sublinear}, \eqref{eq:degree:distn:linear} have the following representation, which is useful for generating realizations from those distributions.
\begin{corollary} \label{corollary:rep:1}
Let $G$ be a random $(\alpha,T)$-graph for some $\alpha \in (-\infty,1)$ and $T\in\T$ satisfying the conditions of \cref{theorem:degree:distn}. Then the degree $D_n$ of a vertex sampled uniformly at random from $G_n$ converges in distribution to $D'$, where $D'$ is sampled as
\begin{align*}
& D' \sim \Geom(\bvar)
\quad\text{ for }\quad
\bvar \sim
\begin{cases}
\BetaDist(\alpha,1 - \alpha) & \text{ if } \sigma = \alpha \in (0,1) \\
\BetaDist(\gamma,1-\alpha) & \text{ if } \sigma = 1
\end{cases} \;.
\end{align*}
\end{corollary}
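A quick check that the mixture matches \eqref{eq:degree:distn:sublinear}: for ${\bvar \sim \BetaDist(\alpha,1-\alpha)}$ one has ${\mathbb{E}[\bvar(1-\bvar)^{d-1}] = \mathrm{B}(\alpha+1,\,d-\alpha)/\mathrm{B}(\alpha,\,1-\alpha) = p^{\alpha}_d}$, and the linear case is analogous. A sampler for $D'$ is then immediate (an illustrative sketch, assuming \texttt{numpy}):
\begin{verbatim}
# Limiting degree D' as in the corollary above (assumes numpy).
import numpy as np

def sample_limit_degree(alpha, gamma=None, size=1,
                        rng=np.random.default_rng()):
    # sub-linear regime (gamma is None): B ~ Beta(alpha, 1 - alpha);
    # linear regime: B ~ Beta(gamma, 1 - alpha).
    a = alpha if gamma is None else gamma
    B = rng.beta(a, 1.0 - alpha, size=size)
    return rng.geometric(B)        # D' | B ~ Geom(B), support {1, 2, ...}
\end{verbatim}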
This representation can be refined further: The proof of \cref{theorem:degree:distn} shows, by extending techniques introduced by \citet*{Berger:etal:2014}, that
the neighborhood of a random vertex can be coupled to a Poisson point process on the unit interval.
That yields the following representation:
\begin{corollary} \label{corollary:rep:2}
Let $G$ be a random $(\alpha,T)$-graph for some $\alpha \in (-\infty,1)$ and $T\in\T$ satisfying the conditions of \cref{theorem:degree:distn}. Then the degree $D_n$ of a vertex sampled uniformly at random from $G_n$ converges in distribution to $D'$, where $D'$ is sampled as
\begin{equation*}
D' \sim \Poisson\left(\frac{1 - \bvar}{\bvar} \ \gvar_{1-\alpha} \right)
\quad\text{ for }\quad
\bvar \sim
\begin{cases}
\BetaDist(\alpha,1) & \text{ if } \sigma = \alpha \in (0,1) \\
\BetaDist(\gamma,1) & \text{ if } \sigma = 1
\end{cases} \;.
\end{equation*}
\end{corollary}
\begin{remark}
Based on the fact that $(1-\bvar)/\bvar$ is distributed as a so-called beta prime random variable, additional distributional identities may be deduced. To give one, let $\gvar_1$, $\gvar_{\alpha}$, and $\gvar_{\gamma}$ be independent Gamma random variables. Then one can replace $(1-\bvar)/\bvar$ above by $\gvar_1/\gvar_{\alpha}$ (for ${\sigma=\alpha < 1}$) or $\gvar_1/\gvar_{\gamma}$ (for ${\sigma=1}$).
\end{remark}
\subsection{A note on almost surely connected graphs}
\label{sec:connected:graphs}
Suppose one requires each graph in the evolving sequence $(G_n)$ drawn from an $(\alpha,T)$-graph to be
almost surely connected. That holds if and only if ${T\in\Teven}$, \ie if each arrival time after ${T_1=1}$ is even.
A simple way to generate $T\in\Teven$ is to sample $\Delta_2,\Delta_3,\dotsc$ as in the generation of general $T\in\T$, and to set
\begin{align} \label{eq:teven}
T_2 = 2\Delta_2, \quad T_k = T_{k-1} + 2\Delta_k \quad\text{for}\quad k > 2 \;.
\end{align}
In the sub-linear regime, doubling the interarrival times
does not affect the degree asymptotics.
In the linear regime, the change has a noticeable effect. For example, suppose the variables $\Delta_k$ above are drawn
\iid from some probability distribution on $\bbN_+$ with mean $\mu$.
Then by \cref{theorem:degree:distn}, the limiting degree distribution has power law exponent $\eta_{\text{\tiny\rm 2}} = 1 + \frac{2\mu - \alpha}{2\mu - 1}$. For ${T\not\in\Teven}$, the upper limit of $\eta$ is $\infty$, no matter the value
of $\alpha$; for $T\in\Teven$, one has ${\eta_{\text{\tiny\rm 2}} < 3 - \alpha}$. Hence, if $\alpha>0$, then $\eta_{\text{\tiny\rm 2}} \in (2,3)$, implying that the limiting degree distribution has a finite mean but infinite variance, for any $\mu$.
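For instance, \iid interarrival times with mean ${\mu=2}$ and ${\alpha=\tfrac{1}{2}}$ yield ${\eta_{\text{\tiny\rm 2}}=1+\frac{4-1/2}{4-1}=\frac{13}{6}\approx 2.17}$, whereas the same interarrival law without the doubling in \eqref{eq:teven} yields exponent ${1+\frac{2-1/2}{2-1}=\frac{5}{2}}$.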
\section{Examples}
\label{sec:examples}
\begin{figure}
\makebox[\textwidth][c]{
\resizebox{\textwidth}{!}{
\begin{tikzpicture}
\begin{scope}
\begin{scope}[xshift=-5cm]
\node (g11) {
\includegraphics[width=4cm,angle=0]{PY_a_0p7_t_5_4_multi.pdf}
};
\end{scope}
\begin{scope}[xshift=-0.45cm]
\node (g12) {
\includegraphics[width=4.15cm,angle=0]{Poisson_a_0p1_l_1_2_multi.pdf}
};
\end{scope}
\begin{scope}[xshift=4cm]
\node (g13) {
\includegraphics[width=4cm,angle=0]{Geometric_a_0p1_b_0p5_2_multi.pdf}
};
\end{scope}
\end{scope}
\end{tikzpicture}
}}
\caption{Examples of ${(\alpha,T)}$-graphs generated using different arrival time distributions. Each graph has $500$ edges. Left: arrival times generated by $\CRP(\alpha,\theta)$, with $\alpha=0.1$, $\theta=5$. Middle: interarrival times are \iid $\Poisson_+(2)$, with $\alpha=0.1$. Right: interarrival times are \iid $\Geom(0.25)$, with $\alpha= 0.5$.}
\label{fig:examples}
\end{figure}
We next discuss several subclasses of $(\alpha,T)$-graphs. One is obtained by fixing all interarrival
times to the same, constant value (\cref{sec:preferential:attachment:trees}). This includes the \Barabasi--Albert
random tree as a special case. Other subclasses can be obtained by imposing exchangeability assumptions.
One is that the vertex assignment variables $L_n$ form an exchangeable sequence, and hence that the
induced random partition ${\Phi^{-1}(G)}$ is exchangeable (\cref{sec:exchangeable:partitions}). This subclass
overlaps with the class of ``edge exchangeable'' graphs \citep{Crane:Dempsey:2016,Cai:etal:2016,Janson:2017aa}.
If the interarrival
times ${\Delta_k}$ are exchangeable (\cref{sec:examples:exch:interarrivals}),
the induced partition is in general not exchangeable. This case
includes a version of the random graph model of Aiello, Chung, and Lu \citep{Aiello:Chung:Lu:2001,Aiello:Chung:Lu:2002,Chung:Lu:2006}.
\subsection{\Barabasi--Albert trees and graphs with constant interarrival time}
\label{sec:preferential:attachment:trees}
The basic \kword{preferential attachment} model popularized by \cite{Barabasi:Albert:1999}
generates a random graph as follows: With parameter ${d\in\mathbb{N}_+}$,
start with any finite connected graph.
At each step, select $d$ vertices in the current graph independently from $P_0$ in \eqref{eq:p:alpha}, add a new
vertex, and connect it to the $d$ selected vertices (multiple connections are allowed).
The \Barabasi--Albert model can be expressed in terms of a sequence $(L_1,L_2,\ldots)$ with given arrival times
as follows:
Start, say, with a graph consisting
of a single vertex and $d$ self-loops. Thus, ${t_1=1}$ and ${L_1=\ldots=L_{2d}=1}$. Each new
vertex requires $2d$ stubs, so ${t_{k+1}=t_k+2d}$. At time ${t_{k+1}-1=(2d)k}$,
just before the ${(k+1)}$-st vertex arrives, the graph
${G_{kd}=(L_n)_{n\leq 2kd}}$ has $k$ vertices and $kd$ edges.
For ${t_k\leq n < t_{k+1}}$, we then set
\begin{equation*}
L_n=k \quad\text{ if } n \text{ odd }
\qquad\text{ and }\qquad
L_n\sim P_0(\argdot;G_{(k-1)d})\quad\text{ if } n \text{ even.}
\end{equation*}
The single vertex with self-loops is chosen as a seed graph here only to keep notation
reasonable. More generally, any graph with $k$ vertices and $n$ edges can be encoded
in the variables ${L_1,\ldots,L_{2n}}$ and the first $k$ arrival times.
When $d=1$, the result is a tree (with a loop at the root). When $d\geq 2$, the above sampling scheme does not produce an $(\alpha,t)$-graph.
However, the following modified sampling scheme produces an $(\alpha,t)$-graph with $\Delta_j = 2d$ for all $j>1$. Start as before with a graph consisting of a single vertex and $d$ self-loops. When the $k$-th vertex arrives at time $t_k=2(k-1)d+1$, set $L_{t_k}=k$ and for $t_k + 1 \leq n < t_{k+1}$, set
\begin{equation*}
L_n\sim P_{\alpha}(\argdot;(L_i)_{i < n}) \;.
\end{equation*}
The modified sampling scheme differs from the basic preferential attachment model in that it updates the degrees after each step, allows loops, and does not require that each vertex begin with $d$ edges. Although the connectivity properties of the resulting graph may be substantially different from the \Barabasi--Albert model, the degree properties are similar to modifications that have been considered by \cite{Berger:etal:2014,Pekoz:Rollin:Ross:2017}. In this case, the results of \cref{sec:deg:dist:fixed} can be strengthened, showing that the scaled degrees converge to random variables that satisfy distributional relationships generalizing those of the beta-gamma algebra \eqref{eq:beta:gamma:algebra}, as discussed by \citet{Dufresne:2010}. (See also \cite{Janson:2010}.) These relationships emerge due to the behavior of $W_{j,k}$ as $k\to\infty$, which separates into two pieces: A deterministic scaling factor, and the random variables that appear below.
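To make the modified scheme concrete, the following sketch (Python with \texttt{numpy}; all names are ours) simulates it, under the assumption that $P_{\alpha}$ assigns an existing vertex $j$ probability proportional to ${\deg_j(n) - \alpha}$, the natural Pitman--Yor-type weight; see \eqref{eq:p:alpha} for the exact definition used in the text:
\begin{verbatim}
# Modified preferential attachment: an (alpha,t)-graph with
# constant interarrival times Delta_j = 2d (see text above).
import numpy as np

def sample_pa_graph(alpha, d, K, seed=0):
    rng = np.random.default_rng(seed)
    L = [1] * (2 * d)          # seed graph: one vertex, d self-loops
    deg = [2 * d]              # stub counts per vertex
    for k in range(2, K + 1):  # vertex k arrives at t_k = 2(k-1)d + 1
        deg.append(1)
        L.append(k)            # L_{t_k} = k
        for _ in range(2 * d - 1):
            w = np.asarray(deg, dtype=float) - alpha
            j = int(rng.choice(len(deg), p=w / w.sum()))
            deg[j] += 1
            L.append(j + 1)    # vertices are labeled 1,...,k
    return L, deg              # consecutive entries of L form the edges
\end{verbatim}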
\begin{proposition} \label{prop:pa:limits}
Let $G$ be an $(\alpha,t)$-graph with $\alpha\in(-\infty,1)$ such that for some $d\in\bbN_+$, $t_j = 2d(j-1) + 1$ for all $j\geq 1$. Then for every $r\in\bbN_+$,
\begin{align*}
\biggl(\frac{n}{2m}\biggr)^{-\frac{2d-1}{2d-\alpha}} (\deg_j(n))_{1 \leq j \leq r} \xrightarrow[n\to\infty]{\text{\small a.s.}} (\xi_j)_{1 \leq j \leq r} \;.
\end{align*}
The vector of random variables $(\xi_j)_{1\leq j \leq r}$ satisfies the following distributional identities:
Denote $\bar{\alpha}:= 2d-\alpha$, and define
\begin{align*}
Z_r = \prod_{i=1}^{2d-1} \gvar^{1/\bar{\alpha}}_{r+1 - i/\bar{\alpha}} \quad \text{and} \quad Z'_r = \prod_{i=1}^{2d-1} \gvar^{1/\bar{\alpha}}_{1 - i/\bar{\alpha}} \quad \text{and} \quad Z''_r = Z'_r \prod_{k=1}^{r-1} \bvar_{k\bar{\alpha},1-\alpha} \;,
\end{align*}
where all of the random variables are independent of each other and of $(\xi_j)_j$.
Then with $\Psi_1=1$, and ${(\Psi_j)_{j>1}\sim\BetaDist(1-\alpha,(j-1)(2d-\alpha))}$, the following distributional identities hold:
\begin{gather} \label{eq:pa:identity}
Z_r \cdot \bigl( \xi_j \bigr)_{1\leq j \leq r}
\equdist
\gvar_{r\bar{\alpha}} \cdot \bigl( \Psi_j \prod_{i=j+1}^r (1 - \Psi_i) \bigr)_{1\leq j \leq r}
\\
Z'_r \cdot \bigl( \xi_j \bigr)_{1\leq j \leq r}
\equdist
\gvar_{r\bar{\alpha}} \prod_{k=1}^{r} \bvar_{k\bar{\alpha} - 2d + 1,2d-1} \cdot \bigl( \Psi_j \prod_{i=j+1}^r (1 - \Psi_i) \bigr)_{1\leq j \leq r}
\\
Z'_r \cdot \bigl( \xi_j \bigr)_{1\leq j \leq r}
\equdist
\gvar_{r\bar{\alpha} - 2d + 1} \prod_{k=1}^{r-1} \bvar_{k\bar{\alpha} - 2d + 1,2d-1} \cdot \bigl( \Psi_j \prod_{i=j+1}^r (1 - \Psi_i) \bigr)_{1\leq j \leq r}
\\
Z''_r \cdot \bigl( \xi_j \bigr)_{1\leq j \leq r}
\equdist
\gvar_{1-\alpha} \cdot \bigl( \Psi_j \prod_{i=j+1}^r (1 - \Psi_i) \bigr)_{1\leq j \leq r} \;.
\end{gather}
\end{proposition}
\begin{remark}
For a gamma random variable $\gvar_a$, $\gvar_a^{b}$ has the so-called generalized gamma distribution, denoted $\GGdist(a/b,1/b)$. Hence, $Z_r$ above is equal to the product of $\GGdist((r+1)\bar{\alpha}-i,\bar{\alpha})$ random variables, and similarly for $Z'_r$. Generalized gamma random variables also appear in the limits of the preferential attachment models in \cite{Pekoz:Rollin:Ross:2017,Pitman:Racz:2015}, and arise in a range of other applications \cite{Pekoz:Rollin:Ross:2016}.
\end{remark}
Results on power law degree distributions in preferential attachment models are numerous \citep[\egc][]{Barabasi:Albert:1999,Aiello:Chung:Lu:2001,Aiello:Chung:Lu:2002,Berger:etal:2014}. It is well-known that the degree distribution of the \Barabasi--Albert tree exhibits a power law with exponent $\eta=3$ \cite{Barabasi:Albert:1999,Bollobas:etal:2001}, which agrees with the following implication of \cref{theorem:degree:distn}.
\begin{corollary}
For the constant interarrival time model considered above, the degree distribution converges to \eqref{eq:degree:distn:linear} with $\gamma = (2d-\alpha)/(2d - 1)$. In particular, the $\alpha$-weighted \Barabasi--Albert tree has power law exponent $\eta = 3 - \alpha$.
\end{corollary}
\subsection{Graphs with exchangeable vertex assignments}
\label{sec:exchangeable:partitions}
Suppose $G$ is a random graph such that the random partition ${\Pi=\Phi^{-1}(G)}$ is exchangeable
(see \cite{Pitman:2006} for more on exchangeable partitions). Equivalently,
the vertex assignments ${L_1,L_2,\ldots}$ are exchangeable, and there is hence
a random probability measure $\mu$ on $\mathbb{N}$ such that
\begin{equation} \label{eq:cond:iid}
L_1,L_2,\ldots\mid\mu\simiid\mu\;.
\end{equation}
We first note an implication of \Cref{theorem:degree:distn}. Recall that a random graph is sparse
if its density \eqref{epsilon:density} is ${\sparsity<2}$.
\begin{corollary}
If a graph generated by an exchangeable partition is sparse
and has a power law degree distribution, then ${\sigma > 1/2}$, and hence ${\eta\in (3/2,2)}$.
\end{corollary}
Recall from \cref{sec:graphs:urns} that an exchangeable Gibbs partition is an exchangeable random partition $\Pi$ such that the probability of any finite restriction $\Pi_n$ can be written as
\begin{align*}
\mathbb{P}(\Pi_n = \{ \deg_1(n),\dots,\deg_k(n) \}) =
V_{n,k} \prod_{j=1}^k \frac{\Gamma(\deg_j(n) - \alpha)}{\Gamma(1 - \alpha)} \;,
\end{align*}
for a suitable sequence of weights $V_{n,k}$.
\citet{Griffiths:Spano:2007} studied the block arrival times (called, in that context, the record indices) of exchangeable Gibbs partitions.
For the random graph induced by such partitions, their results show that
\begin{align} \label{eq:degree:proportions}
\frac{1}{n} \deg_j(n) \xrightarrow[n\to\infty]{\text{\small a.s.}} P_j \equdist \Psi_j \prod_{i = j+1}^{\infty} (1 - \Psi_i) \;,
\end{align}
where $\Psi_j$ is distributed as in \eqref{eq:sb:1}. (This result is also contained in \cref{thm:limiting:degree:sequence}.) They prove that an exchangeable random partition is of Gibbs form if and only if the sequence $(P_j)_{j\geq 1}$ is NTL conditioned on $(T_j)_{j\geq 1}$; this result has implications for some recent network models.
\citet{Crane:Dempsey:2016} and \citet*{Cai:etal:2016}
call a random graph ${((L_1,L_2),\ldots)}$ \emph{edge exchangeable}
if there is some random probability measure $\nu$ on ${\mathbb{N}^2}$ such that
\begin{equation*}
(L_1,L_2),(L_3,L_4),\ldots\mid\nu\simiid\nu\;.
\end{equation*}
\citet{Janson:2017aa} refers to such a graph as being \emph{rank one} if ${\nu=\mu\otimes\mu}$ for some random probability measure on $\mathbb{N}$, which is just
\eqref{eq:cond:iid}. Thus, rank one edge exchangeable graphs are precisely those corresponding to
exchangeable random partitions via $\Phi$. The intersection of edge exchangeable graphs and $(\alpha,T)$-graphs consists precisely of those $(\alpha,T)$-graphs that
have exchangeable vertex assignments, in which case $\Pi$ is
an exchangeable Gibbs partition. That includes the case ${\Pi\sim\CRP(\alpha,\theta)}$ above, for which \cite{Crane:Dempsey:2016} call $G=\Phi(\Pi)$ the \emph{Hollywood model}.
\begin{proposition}
Let $G$ be a rank one edge exchangeable graph, and let $\mathbf{D}_{\infty}$ be the limiting degree proportions $n^{-1}(\deg_1(n),\deg_2(n),\dotsc)$. Then $\mathbf{D}_{\infty}$ is NTL if and only if $G$ is distributed as an $(\alpha,T)$-graph, where $T$ is distributed as in \eqref{eq:egp:arrivals}, in which case $\mathbf{D}_{\infty}$ is distributed as in \eqref{eq:degree:proportions}.
\end{proposition}
The results of \cref{sec:deg:dist:fixed} specialize for $G=\Phi(\Pi)$ where $\Pi$ has law $\CRP(\alpha,\theta)$. In particular, consider conditioning on the first $r$ arrival times, rather than all arrival times. As \cref{prop:crp:limits} shows, the scaled degrees have the same basic structure as in \cref{sec:deg:dist:fixed}, but $\bbE[W_{r,\infty}]$ is captured by a single beta random variable.
\begin{proposition} \label{prop:crp:limits}
Let $G$ be an $(\alpha,T)$-graph for fixed $\alpha\in(0,1)$ such that $T$ are the arrival times induced by a $\CRP(\alpha,\theta)$ partition process \eqref{eq:crp:arrivals}. Then for every $r\in \bbN_+$, conditioned on $T_1,\dotsc,T_r$,
\begin{align*}
n^{-1} (\deg_j(n))_{1\leq j \leq r} \mid T_1,\dotsc,T_r \xrightarrow[n\to\infty]{\text{\small a.s.}} (\xi_1,\dotsc,\xi_r) \;,
\end{align*}
where
\begin{align}
\bigl(\xi_j \bigr)_{1\leq j \leq r} \equdist \bvar_{T_r - r\alpha,\theta + r\alpha} \cdot \bigl(\Psi_j\prod_{i=j+1}^r (1 - \Psi_i)\bigr)_{1\leq j \leq r} \;,
\end{align}
with $\Psi_1=1$, $\Psi_j \sim \BetaDist(1-\alpha,T_j - 1 - (j-1)\alpha)$ and $\bvar_{T_r - r\alpha, \theta + r\alpha}$ mutually independent random variables for $j \geq 1$. This implies the joint distributional identity
\begin{align}
\gvar_{T_r + \theta } \cdot \bigl(\xi_j \bigr)_{1\leq j \leq r} \equdist \gvar_{T_r - r\alpha} \cdot \bigl( \Psi_j\prod_{i=j+1}^r (1 - \Psi_i)\bigr)_{1\leq j \leq r} \;.
\end{align}
and the marginal identities for all $j>1$
\begin{gather*}
\xi_j \equdist \bvar_{1-\alpha,T_j - 1 + \theta + \alpha} \\
\xi_{j+1} \mid \xi_j, \Delta_{j+1} \equdist \xi_j \bvar_{T_j + \theta, \Delta_{j+1}} \;.
\end{gather*}
\end{proposition}
Given $T_j$, the distribution of the random variable $\xi_j$ does not otherwise depend on $j$.
Among all $(\alpha,T)$-graphs derived from exchangeable Gibbs partitions, this property characterizes those derived from $\CRP(\alpha,\theta)$ partitions, and stems from the arrival time distribution (see \cref{sec:arrival:classification}).
\begin{proposition} \label{prop:crp:marginal}
Let $G$ be an $(\alpha,T)$-graph such that $T$ are the arrival times induced by an exchangeable Gibbs partition $\Pi$. For any $j\geq 1$, the marginal distribution of $\xi_j$ conditioned on the $j$th arrival time depends only on $T_j$ if and only if $\Pi$ has law $\CRP(\alpha,\theta)$.
\end{proposition}
\subsection{Graphs with exchangeable interarrival times}
\label{sec:examples:exch:interarrivals}
We next consider $(\alpha,T)$-graphs for which the interarrival times ${\Delta_j = T_j - T_{j-1}}$ are exchangeable.
An immediate consequence of exchangeability is that ${k^{-1}\sum_{j\leq k}\Delta_j\rightarrow\mu}$ almost surely
for some constant ${\mu}$ in ${[1,\infty]}$. \cref{thm:limiting:degree:sequence} implies:
\begin{corollary}
If ${\mu}$ is finite, the limiting degrees scale as $n^{1/\gamma}$ in \eqref{eq:limiting:degree:sequence}, where ${\gamma = (\mu - \alpha)/(\mu - 1)}$. If $\text{Var}(\Delta_j)$ is finite for all $j$, then the degree distribution converges to \eqref{eq:degree:distn:linear}.
\end{corollary}
Stronger results hold when the interarrivals are \iid geometric variables, corresponding to the Yule--Simon
model \cite{Yule:1925,Simon:1955}. Recall that a positive random variable $\mlvar_{\sigma}$ is said to have the \emph{Mittag--Leffler distribution} with parameter $\sigma\in(0,1)$ if $\mlvar_{\sigma}=\mathcal{Z}_{\sigma}^{-\sigma}$, where $\mathcal{Z}_{\sigma}$ is a positive $\sigma$-stable random variable, characterized by the Laplace transform $\mathbb{E}[e^{-\lambda\mathcal{Z}_{\sigma}}]=e^{-\lambda^{\sigma}}$ and density $f_{\sigma}(z)$. See \cite{Pitman:2006,James:2015aa} for details. Define $\mathcal{Z}_{\sigma,\theta}$ for $\theta > -\sigma$ as a random variable with the polynomially tilted density ${f_{\sigma,\theta}\propto z^{-\theta}f_{\sigma}(z)}$, and let $\mlvar_{\sigma,\theta}=\mathcal{Z}_{\sigma,\theta}^{-\sigma}$. We denote the law of $\mlvar_{\sigma,\theta}$ by $\MittagLeffler(\sigma,\theta)$, which is known as the generalized Mittag--Leffler distribution \citep{Pitman:2003aa,James:2015aa}.
\begin{proposition} \label{prop:ys:limit}
Let $G$ be an $(\alpha,T)$-graph with $\alpha=0$, and $T$ constructed from \iid $\Geom(\beta)$ interarrival times, for $\beta\in(0,1)$. Then for every $r\in\bbN_+$, conditioned on $T_1,\dotsc,T_r$,
\begin{align*}
n^{-(1-\beta)} (\deg_j(n))_{1\leq j \leq r} \mid T_1,\dotsc,T_r
\quad\xrightarrow[n\to\infty]{\text{\small a.s.}}\quad
(\xi_1,\dotsc,\xi_r) \;,
\end{align*}
where
\begin{align}
\bigl(\xi_j \bigr)_{1 \leq j \leq r} \equdist \mlvar_{1-\beta,T_r-1} \bvar_{T_r,(T_r-1)\frac{\beta}{1-\beta}} \cdot \bigl( \Psi_j \prod_{i=j+1}^r (1-\Psi_i) \bigr)_{1 \leq j \leq r} \;,
\end{align}
with $\mlvar_{1-\beta,T_r-1}$, $\bvar_{T_r,(T_r-1)\frac{\beta}{1-\beta}}$, $\Psi_1=1$ and $\Psi_j\sim\BetaDist(1,T_j-1)$ mutually independent random variables for $j \geq 1$. This implies the joint distributional identity
\begin{align*}
\gvar_{T_r}^{1-\beta} \cdot \bigl(\xi_j \bigr)_{1 \leq j \leq r} \equdist
\gvar_{T_r} \cdot \bigl( \Psi_j \prod_{i=j+1}^r (1-\Psi_i) \bigr)_{1 \leq j \leq r} \;,
\end{align*}
and the marginal identities for $j>1$
\begin{gather}
\xi_j \equdist \mlvar_{1-\beta} \bvar_{1,T_j-1}^{1-\beta} \\
\xi_{j+1} \mid \xi_j, \Delta_{j+1} \equdist \xi_j \bvar^{1-\beta}_{T_j,\Delta_{j+1}} \\
\xi_j \equdist \mlvar_{1-\beta,T_j-1} \bvar_{T_j,(T_j-1)\frac{\beta}{1-\beta}} \Psi_j \equdist
\mlvar_{1-\beta,T_j-1} \bvar_{1,\frac{T_j - 1}{1-\beta}} \equdist
\mlvar_{1-\beta, T_j} \bvar_{1,\frac{T_j - 1 + \beta}{1-\beta}} \\
\xi_{j+1} \mid \xi_j, \Delta_{j+1} \equdist \xi_j \bvar_{\frac{T_j}{1-\beta},\frac{\Delta_{j+1}}{1-\beta}} \prod_{i=1}^{\Delta_{j+1}} \bvar_{\frac{T_j - 1 + i - \beta}{1-\beta},\frac{\beta}{1-\beta}} \\
\xi_j \gvar_{T_j}^{1-\beta} \equdist \gvar_{1} \;.
\end{gather}
\end{proposition}
\Citet*{Pekoz:Rollin:Ross:2017aa} consider the following two-color P\'{o}lya urn: Let $\Delta_1,\Delta_2,\dotsc$ be drawn \iid from some distribution $P_{\Delta}$, and define $T_j = \sum_{i=1}^j \Delta_i$. Starting with $w$ white balls and $b$ black balls, at each step $n\neq T_j$, a ball is drawn and replaced along with another of the same color. On steps $n=T_j$, a black ball is added to the urn. Of interest is the distribution of the number of white balls in the urn after $n$ steps.
In the language of $(\alpha,T)$-graphs, consider a seed graph $G_{w+b}$ with $k_{w+b} < w + b$ vertices and $w + b$ edges arranged arbitrarily, the only constraint being that there exists a bipartition $\mathbf{V}_w \cup \mathbf{V}_b=\mathbf{V}(G_{w+b})$ so that the total degree of the vertices in $\mathbf{V}_w$ is $D_{w}(w+b) = w$, and of those in $\mathbf{V}_b$ is $D_{b}(w+b)=b$. For $T$ constructed from \iid interarrivals, $D_{w}(n)$ corresponds to the number of white balls after $n$ steps. For interarrivals drawn \iid from the geometric distribution, the following result characterizes the limiting distribution of $D_w(n)$, which was left as an open question by \citet*{Pekoz:Rollin:Ross:2017aa}.
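Before stating the result, here is a minimal simulation sketch of this urn (Python with \texttt{numpy}; the parameter values are illustrative), which can be used to check the scaling $n^{-(1-\beta)}D_w(n)$ numerically:
\begin{verbatim}
# Polya urn with iid Geom(beta) immigration times (support {1,2,...}).
import numpy as np

def urn_white_balls(w, b, beta, n_steps, seed=0):
    rng = np.random.default_rng(seed)
    white, black = w, b
    t_next = rng.geometric(beta)           # first immigration time T_1
    for n in range(1, n_steps + 1):
        if n == t_next:
            black += 1                     # immigration: add a black ball
            t_next += rng.geometric(beta)  # T_{j+1} = T_j + Delta_{j+1}
        elif rng.random() < white / (white + black):
            white += 1                     # drew white: replace and reinforce
        else:
            black += 1                     # drew black: replace and reinforce
    return white

beta, w, b, n = 0.3, 2, 3, 200000
print(urn_white_balls(w, b, beta, n) / n ** (1 - beta))
\end{verbatim}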
\begin{proposition} \label{prop:immigration:urn}
Let $D_{w}(n)$ be the number of white balls in the P\'{o}lya urn with immigration from {\rm\citep{Pekoz:Rollin:Ross:2017aa}} starting with $w$ white balls and $b$ black balls, where the immigration times have \iid $\Geom(\beta)$ distribution. Then
\begin{align*}
n^{-(1-\beta)}D_{w}(n)
\quad\xrightarrow[n\to\infty]{\text{\small a.s.}}\quad
\xi_{w,w+b}
\;\equdist\;
\bvar_{w,b} \bvar_{w+b,(w+b-1)\frac{\beta}{1-\beta}} \mlvar_{1-\beta,w+b-1} \;,
\end{align*}
which implies the distributional identities
\begin{gather}
\xi_{w,w+b}
\;\equdist\;
\bvar_{w,\frac{(w-1)\beta + b}{1-\beta}} \mlvar_{1-\beta,w+b-1} \\
\xi_{w,w+b}
\;\equdist\;
\bvar_{w,\frac{w\beta + b}{1-\beta}} \mlvar_{1-\beta,w+b} \\
\xi_{w,w+b} \gvar_{w+b}^{1-\beta}
\;\equdist\;
\gvar_w \;.
\end{gather}
\end{proposition}
\begin{table}
\begin{center}
\resizebox{\textwidth}{!}{
\begin{tabular}{ccccc}
${\mathbb{P}\braces{n+1\text{ is arrival time}|G_{n}}}$
&
$\Phi^{-1}(G)$ is
&
&
&
\\
depends on
&
$(\alpha,T)$-urn
&
$\mathcal{L}(\Phi^{-1}(G))$
&
$\mathcal{L}(G)$
&
$\mathcal{L}(\Delta_k)$\\
\midrule
$n$
&
yes
&
$\CRP(\theta)$
&
Hollywood model
&
\eqref{eq:crp:arrivals}\\
$n,\#\text{vertices in }G_{n}$
&
yes
&
Gibbs partition
&
$\subset$ rank one edge exch.
&
\eqref{eq:egp:arrivals}\\
$n$, $\#\text{vertices in }G_{n}$, degrees
&
no
&
exch. partition
&
rank one edge exch.
&
-- \\
deterministic
&
yes
&
--
&
PA tree
&
$\delta_2$ \\
independent
&
yes
&
Yule--Simon process
&
ACL \citep{Aiello:Chung:Lu:2001,Aiello:Chung:Lu:2002}
&
$\Geom(\beta)$\\
$n+1 - T_{k(n)}$
&
yes
&
$(\alpha,T)$-urn
&
$(\alpha,T)$-graph
&
\iid\\
\bottomrule
\end{tabular}
}
\end{center}
\caption{Classification of different models according to which statistics of $G_n$ determine the
probability that a new vertex is observed at time $n+1$.}
\label{tab:igor}
\end{table}
\subsection{Classification by arrival time probabilities}
\label{sec:arrival:classification}
\citet*{deBlasi:etal:2015} classify exchangeable partitions according to the quantities on which the probability
of observing a new block in draw ${n+1}$ depends \citep[][Proposition 1]{deBlasi:etal:2015}, conditionally
on the partition observed up to time $n$.
This classification can be translated to random graphs via the induced partition ${\Phi^{-1}(G)}$, and
can be extended further since partitions induced by $(\alpha,T)$-graphs need not be exchangeable:
See \cref{tab:igor}.
One might also consider a sequence of interarrival distributions indexed by the number of vertices, yielding a bespoke generalization of the last row, where the probability of a new vertex depends on $n+1 - T_{k(n)}$ and the number of vertices.
\section*{Acknowledgments} We are grateful to Nathan Ross for helpful comments on the manuscript. BBR is supported by the European Research Council under the European Union's Seventh Framework Programme (FP7/2007--2013) / ERC grant agreement no. 617071. PO was supported in part by grant FA9550-15-1-0074 of AFOSR.
\bibliography{references}
\bibliographystyle{abbrvnat}
\begin{appendices}
\crefalias{section}{appsec}
\crefalias{subsection}{appsec}
\input{proofs}
\end{appendices}
\end{document} | {"config": "arxiv", "file": "1710.02159/ms.tex"} |
TITLE: Find $\int \frac{e^{2x} dx}{1+e^x} $
QUESTION [0 upvotes]: My attempt. Rewriting it as
$$
\frac{e^x e^x}{1+e^x} \text{ then } u = e^x \text{ and } du = e^x dx
$$
Substitute to get $u/(1+u)$ and now integrate + Plug in
$$
\int \frac{e^{2x}dx}{1+e^x} = \frac{e^{2x}}{2} + x
$$
REPLY [1 votes]: You made a mistake somewhere, possibly by integrating with respect to $x$ rather than $u$; in any case, $$\int\dfrac{u}{1+u}\,\mathrm{d}u=\int\dfrac{u+1}{1+u}\,\mathrm{d}u+\int\dfrac{-1}{1+u}\,\mathrm{d}u=u-\ln(1+u)+\mathrm{C}.$$ Substituting back $u=e^x$ gives $$\int \frac{e^{2x}\,dx}{1+e^x} = e^x-\ln(1+e^x)+\mathrm{C}.$$ | {"set_name": "stack_exchange", "score": 0, "question_id": 1707902}
TITLE: Limit proof for a function
QUESTION [1 upvotes]: Regarding: If there exists an $L\in\mathbb{R}$ such that $\lim_{x\to a}f(x)=L$ for every $a\in\mathbb{R}$, then $f(x)=L$ for every $x\in\mathbb{R}$
I would like to construct a counterexample to this claim.
I chose a function, and I want to show that it has this limit:
$f(x) = \begin{cases}L&x\ne a\\L/2 &x=a\end{cases}$
I started with:
Let $\epsilon>0$. We choose $\delta= $
Let $x$ be such that $0<|x-a|<\delta$
So $|f(x)-L|< $
I don't know how to choose $\delta$.
I am looking for something like $|a-x|/2$, but I am a bit confused...
Any help will be awesome!
REPLY [2 votes]: Since $|x-a|>0$ you know $x\neq a$, hence $f(x)=L$. Thus $|f(x)-L|=0<\epsilon$.
In summary, any choice of $\delta$ is sufficient.
Edit: As Bungo pointed out, this is just the first part of proving the counterexample. You need also to prove $\displaystyle\lim_{x\to b}f(x)=L$ for every $b\in\mathbb R$, not just the special case $b=a$. I believe $\delta=|b-a|$ should work. | {"set_name": "stack_exchange", "score": 1, "question_id": 3934902} |
TITLE: damping factor from eigenvalues
QUESTION [0 upvotes]: I was reading lecture notes about state-space representation, and at one point it was mentioned that the eigenvalues of the state matrix, in that specific case the complex conjugate pair $\lambda_{1/2}= -0.68 \pm 1.63j$, have a damping factor of about $0.4$.
How can you get the damping factor from a (complex) eigenvalue?
REPLY [2 votes]: I assume you are talking about continuous systems.
To get the damping, draw a line from the eigenvalue to the origin. The cosine of the angle between that line and the negative real axis is the damping factor.
Or, as a formula: given the eigenvalues $\lambda_i = a_i + j b_i$, the damping factors are
$$
D_i = \frac{-a_i}{\sqrt{a_i^2 + b_i^2}} \tag{1}
$$
In your case: $D_1 = \frac{-(-0.68)}{\sqrt{(-0.68)^2 + 1.63^2}} = D_2 = \frac{-(-0.68)}{\sqrt{(-0.68)^2 + (-1.63)^2}} = \frac{0.68}{\sqrt{0.68^2 + 1.63^2}} = 0.385 \approx 0.4$.
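If you want to check this numerically, formula $(1)$ is a one-liner in Python/numpy:

    import numpy as np

    lam = np.array([-0.68 + 1.63j, -0.68 - 1.63j])  # eigenvalues
    D = -lam.real / np.abs(lam)                     # formula (1)
    print(D)                                        # approx. [0.385 0.385]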
If you have a discrete system with sample time $T$, you can first convert the discrete eigenvalues to a continuous version from which you can get the damping factors.
The continuous counterparts of the discrete eigenvalues $\tilde{\lambda}_i = \tilde{a}_i + j \tilde{b}_i$ are
$$
\lambda_i = a_i + j b_i = \frac{\log(\tilde{\lambda}_i)}{T} = \frac{\log(\tilde{a}_i + j \tilde{b}_i)}{T} \tag{2}
$$
with $\log$ the natural logarithm. Then you can use the same formula $(1)$ as before, just with $(2)$. | {"set_name": "stack_exchange", "score": 0, "question_id": 3458542} |
\section{Special Functions}\label{specialfuctions}
The techniques introduced in this paper to evaluate Schur indices in closed form heavily rely on various families of elliptic functions, in particular Jacobi theta functions, the Eisenstein series, and the family of Weierstrass functions. In this appendix we first collect their definitions and basic properties, and then we list several useful identities in the last subsection.
\subsection{The Weierstrass family}
An elliptic function with respect to the complex structure $\tau$ can be viewed as a meromorphic function on $\mathbb{C}$ with double periodicity
\begin{align}
f(z) = f(z + \tau) = f(z + 1) \ ,
\end{align}
where $\tau \in \mathbb{C}$ with positive imaginary part. One may therefore restrict the domain to be the \emph{fundamental parallelogram} in $\mathbb{C}$ with vertices $0$, $1$, $\tau$, $1 + \tau$. Alternatively, one may view an elliptic function as a meromorphic function on the torus $T^2_\tau$ with complex structure $\tau$. In this appendix and in the main text we often omit the specification of the complex structure $\tau$ in our notations.
One may visualize or construct basic elliptic functions by starting with functions on $\mathbb{C}$ of the form $f(z) \equiv z^{-k}$ and subsequently try to enforce periodicity by summing over all shifts by the periods $1$ and $\tau$, schematically, $P_k(z) \equiv \sum_{m, n} (z - m - n \tau)^{-k}$. After subtracting divergences, one arrives at the following set of (almost) elliptic functions.
\begin{itemize}
\item The Weierstrass $\zeta$-function is defined by
\begin{align}\label{weierstrasszetadef}
\zeta(z) \colonequals \frac{1}{z} + \sum'_{\substack{(m, n) \in \mathbb{Z}^2\\(m, n) \ne (0, 0)}}
\left[\frac{1}{z - m - n \tau} + \frac{1}{m + n \tau} + \frac{z }{(m + n \tau)^2} \right]\ .
\end{align}
In the following and in the main text we will often abbreviate
\begin{align}
\sum'_{\substack{(m, n) \in \mathbb{Z}^2\\(m, n) \ne (0, 0)}} \to \ \ \sum'_{m, n} \ , \qquad \sum_{\substack{m \in \mathbb{Z}\\m \ne 0}} \to \sum_m '\ .
\end{align}
The $\zeta$ function is not quite elliptic, but instead it satisfies
\begin{align}\label{shift-formula-zeta}
\zeta(z + 1 | \tau) - \zeta(z| \tau) = & \ 2\eta_1(\tau)\\
\zeta(z + \tau |\tau) - \zeta(z|\tau) = & \ 2 \eta_2(\tau) \equiv 2\tau \eta_1(\tau) - 2\pi i\ ,
\end{align}
where $\eta_1$ and $\eta_2$ are independent of $z$ and are both related to the Eisenstein series $E_2$. We will come back to this in Appendix \ref{app:usefulidentities}. Note that $\zeta$ has a simple pole at each lattice point $m + n \tau$ with unit residue. The fact that $\zeta$ fails to be fully elliptic is tied to the fact that meromorphic functions on $T^2$ with a single simple pole don't exist. In this sense $\zeta(z)$ is the best one can do in terms of double periodicity.
\item The Weierstrass $\wp$-function
\begin{align}
\wp(z) \colonequals & \ \frac{1}{z^2} + \sum_{(m,n) \ne (0,0)} \left[\frac{1}{(z - m - n \tau)^2} - \frac{1}{(m + n \tau)^2}\right] \ .
\end{align}
This function is elliptic,
\begin{align}
\wp(z) = \wp(z + 1) = \wp(z + \tau) \ .
\end{align}
Following from the simple fact that $\partial_z z^{-1} = - z^{-2}$, one has
\begin{align}
\wp(z) = - \partial_z \zeta(z)\ .
\end{align}
By definition, $\wp$ has only one double pole on $T^2_\tau$.
\item The descendants $\partial_z^n \wp(z)$ are all elliptic functions, all with a single $n + 2$-th order pole on $T^2_\tau$.
\end{itemize}
\subsection{Jacobi theta functions}
The standard Jacobi theta functions are defined as
\begin{align}
\vartheta_1(\mathfrak{z}|\tau) \colonequals & \ -i \sum_{r \in \mathbb{Z} + \frac{1}{2}} (-1)^{r-\frac{1}{2}} e^{2\pi i r \mathfrak{z}} q^{\frac{r^2}{2}} ,
& \vartheta_2(\mathfrak{z}|\tau) \colonequals & \sum_{r \in \mathbb{Z} + \frac{1}{2}} e^{2\pi i r \mathfrak{z}} q^{\frac{r^2}{2}} \ ,\\
\vartheta_3(\mathfrak{z}|\tau) \colonequals & \ \sum_{n \in \mathbb{Z}} e^{2\pi i n \mathfrak{z}} q^{\frac{n^2}{2}},
& \vartheta_4(\mathfrak{z}|\tau) \colonequals & \sum_{n \in \mathbb{Z}} (-1)^n e^{2\pi i n \mathfrak{z}} q^{\frac{n^2}{2}} \ .
\end{align}
In the main text and these appendices we will often omit $|\tau$ in the notation. It is well-known that the Jacobi-theta functions can be rewritten as triple product of the $q$-Pochhammer symbol, for example,
\begin{align}\label{theta1-product-formula}
\vartheta_1(\mathfrak{z}) = - i z^{\frac{1}{2}}q^{\frac{1}{8}}(q;q)(zq;q)(z^{-1};q) \ ,\qquad (z;q) \colonequals \prod_{k = 0}^{+\infty}(1 - zq^k) \ .
\end{align}
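The agreement between the series and product representations is straightforward to verify numerically; the following sketch (Python with plain \texttt{cmath}; truncation orders and the sample point are ours, chosen so that the principal branches of $z^{1/2}$ and $q^{1/8}$ coincide with $e^{\pi i \mathfrak{z}}$ and $e^{\frac{\pi i \tau}{4}}$) evaluates both sides of \eqref{theta1-product-formula}:
\begin{verbatim}
# Numerical check of the triple product formula for theta_1.
import cmath

def theta1_series(zf, tau, N=40):
    q = cmath.exp(2j * cmath.pi * tau)
    s = 0
    for k in range(-N, N):
        r = k + 0.5
        s += (-1) ** k * cmath.exp(2j * cmath.pi * r * zf) * q ** (r * r / 2)
    return -1j * s

def theta1_product(zf, tau, N=200):
    q = cmath.exp(2j * cmath.pi * tau)
    z = cmath.exp(2j * cmath.pi * zf)
    p = 1
    for a in (q, z * q, 1 / z):          # (q;q), (zq;q), (z^{-1};q)
        for k in range(N):
            p *= 1 - a * q ** k
    return -1j * z ** 0.5 * q ** 0.125 * p

print(theta1_series(0.31, 0.2 + 1j))     # the two outputs should agree
print(theta1_product(0.31, 0.2 + 1j))
\end{verbatim}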
The functions $\vartheta_i(z)$ behave nicely under full-period shifts,
\begin{align}
\vartheta_{1,2}(\mathfrak{z} + 1) = & - \vartheta_{1,2}(\mathfrak{z}) , &
\vartheta_{3,4}(\mathfrak{z} + 1) = & + \vartheta_{3,4}(\mathfrak{z}) , & \\
\vartheta_{1,4}(\mathfrak{z} + \tau) = & - \lambda \vartheta_{1,4}(\mathfrak{z}), &
\vartheta_{2,3}(\mathfrak{z} + \tau) = & + \lambda \vartheta_{2,3}(\mathfrak{z}) , &
\end{align}
where $\lambda \equiv e^{-2\pi i \mathfrak{z}}e^{- \pi i \tau}$. In particular, one can derive
\begin{align}
\vartheta_1(\mathfrak{z} + m \tau + n) = (-1)^{m + n} e^{-2\pi i m \mathfrak{z}} q^{ - \frac{1}{2}m^2}\vartheta_1(\mathfrak{z})\ .
\end{align}
Moreover, the four Jacobi theta functions are related by half-period shifts which can be summarized as in the following diagram,
\begin{center}
\includegraphics[height=100pt]{figures/theta-half-shifts.pdf}
\end{center}
where $\mu = e^{- \pi i \mathfrak{z}} e^{- \frac{\pi i}{4}}$, and $f \xrightarrow{a} g$ means
\begin{align}
\text{either}\qquad f\left(\mathfrak{z} + \frac{1}{2}\right) = a g(\mathfrak{z}) \qquad \text{or} \qquad
f\left(\mathfrak{z} + \frac{\tau}{2}\right) = a g(\mathfrak{z}) \ ,
\end{align}
depending on whether the arrow is horizontal or (slanted) vertical respectively.
The functions $\vartheta_i(z | \tau)$ transform nicely under the modular $S$ and $T$ transformations, which act, as usual, on the nome and flavor fugacity as $(\frac{\mathfrak{z}}{\tau}, - \frac{1}{\tau})\xleftarrow{~~S~~}(\mathfrak{z}, \tau) \xrightarrow{~~T~~} (\mathfrak{z}, \tau + 1).$ In summary
\begin{center}
\includegraphics[height=0.2\textheight]{figures/STtheta.pdf}
\end{center}
where $\alpha = \sqrt{-i \tau}e^{\frac{\pi i z^2}{\tau}}$.
The $\tau$-derivative of the Jacobi theta functions is related to the double $z$-derivative as
\begin{align}
4\pi i \partial_\tau \vartheta_i(z|\tau) = \vartheta''_i(z|\tau)\ .
\end{align}
Finally, we will frequently encounter residues of the $\vartheta$ functions. In particular,
\begin{align}\label{theta-function-residue}
\mathop{\operatorname{Res}}\limits_{a \to b^{\frac{1}{n}}q^{\frac{k}{n} + \frac{1}{2n}}e^{2\pi i \frac{\ell}{n}}} \frac{1}{a} \frac{1}{\vartheta_4(n\mathfrak{a} - \mathfrak{b})} = & \ \frac{1}{n} \frac{1}{(q;q)^3} (-1)^k q^{\frac{1}{2} k (k + 1)} \ , \\
\mathop{\operatorname{Res}}\limits_{a \to b^{\frac{1}{n}}q^{\frac{k}{n}}e^{2\pi i \frac{\ell}{n}}} \frac{1}{a} \frac{1}{\vartheta_1(n\mathfrak{a} - \mathfrak{b})} = & \ \frac{1}{n} \frac{i }{\eta(\tau)^3} (-1)^{k + \ell} q^{\frac{1}{2}k^2}\ .
\end{align}
Note that the $(-1)^\ell$ in the second line is related to the presence of a branch point at $z = 0$ according to (\ref{theta1-product-formula}). Let us quickly derive the second formula,
\begin{align}
\mathop{\operatorname{Res}}\limits_{a \to b^{\frac{1}{n}}q^{\frac{k}{n}}e^{2\pi i \frac{\ell}{n}}} \frac{1}{a} \frac{1}{\vartheta_1(n\mathfrak{a} - \mathfrak{b})} \nonumber
\colonequals & \ \oint_{b^{\frac{1}{n}}q^{\frac{k}{n}}e^{2\pi i \frac{\ell}{n}}} \frac{da}{2\pi i a} \frac{1}{\vartheta_1(n \mathfrak{a} - \mathfrak{b})}
= \oint_{1} \frac{dz}{2\pi i z} \frac{1}{\vartheta_1(n \mathfrak{z} + k \tau + \ell)}\nonumber\\
= & \ \oint_{1} \frac{dz}{2\pi i z} \frac{(-1)^{k + \ell}z^{nk} q^{\frac{1}{2}k^2}}{\vartheta_1(n \mathfrak{z})}\nonumber\\
= & \ \oint_{1} \frac{dz}{2\pi i z} \frac{(-1)^{k + \ell}z^{nk} q^{\frac{1}{2}k^2}}{-i q^{\frac{1}{8}} z^{\frac{n}{2}} (q;q) (z^nq;q)(z^{-n}q;q)(1 - z^{-n})} \nonumber\\
= & \ \frac{1}{n}\frac{i}{\eta(\tau)^3} (-1)^{k + \ell}q^{\frac{k^2}{2}}\ .
\end{align}
Here we used the shift property of $\vartheta_1$ and \eqref{theta1-product-formula}.
\subsection{Eisenstein series}
The twisted Eisenstein series are defined as
\begin{align}
E_{k \ge 1}\left[\begin{matrix}
\phi \\ \theta
\end{matrix}\right] \colonequals & \ - \frac{B_k(\lambda)}{k!} \\
& \ + \frac{1}{(k-1)!}\sum_{r \ge 0}' \frac{(r + \lambda)^{k - 1}\theta^{-1} q^{r + \lambda}}{1 - \theta^{-1}q^{r + \lambda}}
+ \frac{(-1)^k}{(k-1)!}\sum_{r \ge 1} \frac{(r - \lambda)^{k - 1}\theta q^{r - \lambda}}{1 - \theta q^{r - \lambda}} \ ,
\end{align}
where $\phi \equiv e^{2\pi i \lambda}$ with $0 \le \lambda < 1$, $B_k(x)$ denotes the $k$-th Bernoulli polynomial, and the prime in the sum indicates that the $r = 0$ should be omitted when $\phi = \theta = 1$. Additionally, we also define
\begin{align}
E_0\left[\begin{matrix}
\phi \\ \theta
\end{matrix}\right] = -1 \ .
\end{align}
When $k = 2n$ is even, the $\theta = \phi = 1$ limit reproduces the usual Eisenstein series $E_{2n}$, while when $k$ is odd, $\theta = \phi = 1$ is a vanishing limit except for $k = 1$ where it is singular,\footnote{See appendix \ref{app:usefulidentities}.}
\begin{align}
E_{2n}\left[\begin{matrix}
+1 \\ +1
\end{matrix}\right] = E_{2n} \ , \qquad E_1\left[\begin{matrix}
+ 1 \\ z
\end{matrix}\right] = \frac{1}{2\pi i }\frac{\vartheta'_1(\mathfrak{z})}{\vartheta_1(\mathfrak{z})}, \qquad
E_{2n + 1 \ge 3}\left[\begin{matrix}
+1 \\ +1
\end{matrix}\right] = 0 \ .
\end{align}
As a result, among all the $E_k\big[\substack{\pm 1 \\ z}\big]$, only $E_1\big[{\substack{\pm 1 \\ z}}\big]$ has a pole at $z = 1$.
A closely related property is the symmetry of the Eisenstein series
\begin{align}\label{Eisenstein-symmetry}
E_k\left[\begin{matrix}
\pm 1 \\ z^{-1}
\end{matrix}\right] = (-1)^k E_k\left[\begin{matrix}
\pm 1 \\ z
\end{matrix}\right] \ .
\end{align}
The twisted Eisenstein series of neighboring weights are related by
\begin{align}\label{EisensteinDerivative}
q \partial_q E_k\left[\begin{matrix}
\phi \\ b
\end{matrix}
\right] = (- k) b \partial_b E_{k + 1}\left[\begin{matrix}
\phi \\ b
\end{matrix}
\right]\ .
\end{align}
When shifting the argument $\mathfrak{z}$ of the Eisenstein series by several half or full periods $\tau$, one has for any non-zero $n \in \mathbb{Z}$
\begin{align}\label{Eisenstein-shift}
E_k\left[\begin{matrix}
\pm 1\\ z q^{\frac{n}{2}}
\end{matrix}\right]
=
\sum_{\ell = 0}^{k} \left(\frac{n}{2}\right)^\ell \frac{1}{\ell !}
E_{k - \ell}\left[\begin{matrix}
(-1)^n(\pm 1) \\ z
\end{matrix}\right] \ .
\end{align}
To prove these equalities recursively, one can start with the identification (\ref{EisensteinToTheta}) between Eisenstein series and the Jacobi-theta functions, where the periodicity of the latter is clear, and then apply (\ref{EisensteinDerivative}). A similar discussion can also be found in \cite{2012arXiv1209.5628O,Krauel:2013lra}. A natural consequence is that\footnote{In fact, these equalities remain true even after replacing $1$ by $e^{2\pi i \lambda}$ and $- 1$ by $e^{2\pi i (\lambda + \frac{1}{2})}$.}
\begin{align}\label{Eisenstein-shift-1}
\Delta_k \left[\begin{matrix}
\pm 1 \\ z
\end{matrix}\right]
\equiv E_k\left[\begin{matrix}
\pm 1 \\ zq^{\frac{1}{2}}
\end{matrix}\right]
- E_k\left[\begin{matrix}
\pm 1 \\ zq^{ - \frac{1}{2}}
\end{matrix}\right]
= & \ \sum_{m = 0}^{\floor{\frac{k - 1}{2}}} \frac{1}{2^{2m}(2m+1)!}E_{k - 1 - 2m}\left[\begin{matrix}
\mp 1\\z
\end{matrix}\right] \ ,
\end{align}
or more generally
\begin{align}
E_k\left[\begin{matrix}
\pm 1 \\ zq^{\frac{1}{2} + n}
\end{matrix}\right]
- E_k\left[\begin{matrix}
\pm 1 \\ zq^{ - \frac{1}{2} - n}
\end{matrix}\right]
= & \ 2\sum_{m = 0}^{\floor{\frac{k - 1}{2}}} \left(\frac{2n+1}{2}\right)^{2m + 1}\frac{1}{(2m+1)!}E_{k - 1 - 2m}\left[\begin{matrix}
\mp 1\\z
\end{matrix}\right] \ .
\end{align}
The Eisenstein series are often reorganized into twisted Elliptic-$P$ functions, generalizing the Weierstrass $\wp$-family. In particular \cite{Mason:2008zzb},
\begin{align}\label{P1}
P_{k = 1}\left[\begin{matrix}
\phi \\ \theta
\end{matrix}\right](y) \colonequals - \frac{1}{y}\sum_{m \ge 0}E_m\left[\begin{matrix}
\phi \\ \theta
\end{matrix}\right] y^m \ ,
\end{align}
while the remaining twisted-$P_k$ with higher $k$ are obtained by taking $y$-derivatives. In particular, we will later use
\begin{align}\label{P2}
P_2(y) \colonequals - \sum_{n = 1}^{\infty} \frac{1}{2n} E_{2n}(\tau)y^{2n} \ ,
\end{align}
whose derivative reproduces $P_1\big[\substack{+ 1 \\ + 1}\big](y)$ up to a $y^{-1}$ term.
With $P$, the difference equations can be further reorganized into the more compact formula
\begin{align}\label{Delta-Eisenstein}
\Delta_k \left[\begin{matrix}
\pm 1 \\ z
\end{matrix}\right]
= - 2\oint_0 \frac{dy}{2\pi i} \frac{1}{y^k} \sinh \left(\frac{y}{2}\right) P_1\left[\begin{matrix}
\mp 1 \\ z
\end{matrix}\right](y) \ .
\end{align}
where the $y$-contour goes around the origin. Conversely, the individual twisted Eisenstein series can be rewritten in terms of the above differences $\Delta_k$. Let us define $\mathcal{S}_\ell$ by
\begin{align}\label{S2k}
\frac{1}{2}\frac{y}{\sinh \frac{y}{2}}
\equiv \sum_{\ell \ge 0} \mathcal{S}_\ell\, y^\ell .
\end{align}
It is straightforward to show that
\begin{align}\label{Eisenstein-from-Delta}
E_k\left[\begin{matrix}
\pm 1 \\ z
\end{matrix}\right]
= \sum_{\ell = 0}^{k} \mathcal{S}_\ell\, \Delta_{k - \ell + 1}\left[\begin{matrix}
\mp 1 \\ z
\end{matrix}\right]\ .
\end{align}
\subsubsection{Constant terms}
The constant terms in $z$ of the Eisenstein series play an important role in the main text when writing down the integration formulas. These numbers are given by
\begin{align}\label{const-terms}
& \ \text{const. term of }E_{2n + 1}\left[\begin{matrix}
\pm 1 \\ z
\end{matrix}\right] = 0\ , \quad \text{ except for } \quad \text{const. term of } E_1\left[\begin{matrix}
+ 1 \\ z
\end{matrix}\right] = - \frac{1}{2} \ , \nonumber\\
& \ \text{const. term of }E_{2n}\left[\begin{matrix}
+ 1 \\ z
\end{matrix}\right] = - \frac{B_{2n}}{(2n)!} = - \bigg[\frac{y}{2}\coth \frac{y}{2}\bigg]_{2n} \ , \\
& \ \text{const. term of }E_{2n}\left[\begin{matrix}
- 1 \\ z
\end{matrix}\right] = - \mathcal{S}_{2n} = - \bigg[\frac{y}{2}\frac{1}{\sinh \frac{y}{2}}\bigg]_{2n}\ , \nonumber
\end{align}
and their differences are given by
\begin{align}\label{D2k}
\mathcal{D}_{2n} \equiv \mathcal{S}_{2n} - \frac{B_{2n}}{(2n)!} = \text{const. term of } \left(E_{2n}\left[\begin{matrix}
+ 1 \\ z
\end{matrix}\right]
- E_{2n}\left[\begin{matrix}
- 1 \\ z
\end{matrix}\right]
\right) = \left[- \frac{y}{2} \tanh \frac{y}{4}\right]_{2n} \ .
\end{align}
In the above, $[f(y)]_k$ denotes the $k$-th coefficient of the Taylor expansion in $y$ around $y = 0$, and $B_{2n}$ are simply the Bernoulli numbers. For the reader's convenience, we collect here the first few values of these numbers,
\begin{center}
\begin{tabular}{c|c|c|c|c|c|c}
$n=$ & $1$ & $2$ & $3$ & $4$ & $5$ & $6$\\
\hline
$\frac{1}{(2n)!}B_{2n}$ & $\frac{1}{12}$ & $- \frac{1}{720}$ & $\frac{1}{30240}$ & $- \frac{1}{1209600}$ & $\frac{1}{47900160}$ & $ - \frac{691}{1307674368000}$ \\
$\mathcal{S}_{2n}$ & $-\frac{1}{24}$ & $\frac{7}{5760}$ & $- \frac{31}{967680}$ & $\frac{127}{154828800}$ & $- \frac{73}{3503554560}$ & $\frac{1414477}{2678117105664000}$\\
$\mathcal{D}_{2n}$ & $- \frac{1}{8}$ & $\frac{1}{384}$ & $- \frac{1}{15360}$ & $\frac{17}{10321920}$ & $ - \frac{31}{743178240}$ & $\frac{691}{653996851200}$
\end{tabular}
\end{center}
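The entries of this table are reproduced by a few lines of computer algebra, for instance as follows (Python with \texttt{sympy}; the helper name is ours):
\begin{verbatim}
# Regenerate B_{2n}/(2n)!, S_{2n}, D_{2n} and check D = S - B/(2n)!.
import sympy as sp

y = sp.symbols('y')

def taylor_coeff(expr, k):
    return sp.series(expr, y, 0, k + 2).removeO().coeff(y, k)

for n in range(1, 7):
    b = sp.bernoulli(2 * n) / sp.factorial(2 * n)
    S = taylor_coeff(y / (2 * sp.sinh(y / 2)), 2 * n)
    D = taylor_coeff(-(y / 2) * sp.tanh(y / 4), 2 * n)
    print(n, b, S, D, sp.simplify(D - (S - b)) == 0)
\end{verbatim}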
\subsection{Useful identities}\label{app:usefulidentities}
The Jacobi theta functions satisfy a collection of \emph{duplication formulas}, for example,
\begin{align}\label{duplication}
\vartheta_1(2 \mathfrak{z})\vartheta_1'(0)
= & \ 2\pi\prod_{i = 1}^{4}\vartheta_i(\mathfrak{z})
= \pi \vartheta_1(2 \mathfrak{z}) \prod_{i = 2}^{4}\vartheta_i(0) \ ,\\
\vartheta_4(2 \mathfrak{z}) \vartheta_4(0)^3
= & \ \vartheta_4(\mathfrak{z})^4 - \vartheta_1(\mathfrak{z})^4
= \vartheta_3(\mathfrak{z})^4 - \vartheta_2(\mathfrak{z})^4 \ .
\end{align}
The $\mathfrak{z} \to 0$ limit of the first line gives the well-known identity $\vartheta'_1(0) = \pi \vartheta_2(0)\vartheta_3(0)\vartheta_4(0)$. The derivatives of $\vartheta_i$ satisfy, among a few other relations,
\begin{align}\label{theta-derivative-formula}
\frac{d}{d \mathfrak{z}} \left[\frac{\vartheta_1(\mathfrak{z})}{\vartheta_4(\mathfrak{z})}\right] = \pi \vartheta_4(0)^2 \frac{\vartheta_2(\mathfrak{z})\vartheta_3(\mathfrak{z})}{\vartheta_4(\mathfrak{z})^2} \quad \Rightarrow
\quad
\frac{\vartheta'_4(\mathfrak{z})}{\vartheta_4(\mathfrak{z})}
- \frac{\vartheta'_1(\mathfrak{z})}{\vartheta_1(\mathfrak{z})}
= - \pi \vartheta_4(0)^2 \frac{\vartheta_2(\mathfrak{z})\vartheta_3(\mathfrak{z})}{\vartheta_1(\mathfrak{z})\vartheta_4(\mathfrak{z})}\ .
\end{align}
One can express both the Weierstrass family and the Eisenstein series in terms of the Jacobi theta functions. For example,
\begin{align}\label{zeta-thetap}
\zeta(\mathfrak{z}) = \frac{\vartheta'_1(\mathfrak{z})}{\vartheta_1(\mathfrak{z})} - 4\pi^2 \mathfrak{z} E_2 \ .
\end{align}
The quasi-periodicity of $\zeta$ now follows and one can express the $\eta_i(\tau)$ in \eqref{shift-formula-zeta} as
\begin{align}
\eta_1(\tau) = - 2\pi^2 E_2, \qquad \eta_2(\tau) = \tau \eta_1(\tau) - \pi i \ .
\end{align}
The schematic relation between the Eisenstein series and the Jacobi theta functions can be summarized in the diagram
\begin{center}
\includegraphics[width=0.7\textwidth]{figures/eisenstein-theta.pdf}
\end{center}
In more details, the Eisenstein series can be rewritten in terms of ratios of $\vartheta$ functions and their derivatives,
\begin{align}\label{EisensteinToTheta}
E_k\left[\begin{matrix}
+ 1 \\ z
\end{matrix}\right] = - \left[e^{ - \frac{y}{2\pi i }\mathcal{D}_\mathfrak{z} - P_2(y) }\right]_k \vartheta_1(\mathfrak{z})
\end{align}
where $P_2$ is a Weierstrass elliptic-$P$ function (\ref{P2}), $[f(y)]_k$ denotes the $k$-th coefficient of the Taylor series of $f(y)$ around $y=0$, and we define abstract differential operators $\mathcal{D}_\mathfrak{z}^n$ by
\begin{align}
\underbrace{\mathcal{D}_\mathfrak{z} \ldots \mathcal{D}_\mathfrak{z}}_{n \text{ copies}} \vartheta_i(\mathfrak{z}) = \mathcal{D}_\mathfrak{z}^n \vartheta_i(\mathfrak{z}) \equiv \frac{\vartheta^{(n)}_i(\mathfrak{z})}{\vartheta_i(\mathfrak{z})} \ .
\end{align}
More explicitly, we have
\begin{align}\label{EisensteinToTheta-2}
E_k\left[\begin{matrix}
+ 1 \\ z
\end{matrix}\right] = \sum_{\ell = 0}^{\floor{k/2}} \frac{(-1)^{k + 1}}{(k - 2\ell)!}\left(\frac{1}{2\pi i}\right)^{k - 2\ell} \mathbb{E}_{2\ell} \frac{\vartheta_1^{(k - 2\ell)}(\mathfrak{z})}{\vartheta_1(\mathfrak{z})} \ ,
\end{align}
where we define
\begin{align}\label{Ebold}
& \mathbb{E}_{2} \colonequals E_2, \qquad \mathbb{E}_4 \colonequals E_4 + \frac{1}{2}(E_2)^2, \qquad
\mathbb{E}_6 \colonequals E_6 + \frac{3}{4}E_4E_2 + \frac{1}{8}(E_2)^3 \ , \qquad \ldots\\
& \mathbb{E}_{2\ell} \colonequals \sum_{\substack{\{n_p\} \\ \sum_{p \ge 1} (2p)n_p = 2\ell}} \prod_{p\ge 1} \frac{1}{n_p !} \left(\frac{1}{2p}E_{2p}\right)^{n_p}\ .
\end{align}
The conversion from $E_k\left[\substack{- 1 \\ \pm z}\right]$ can be obtained by replacing $\vartheta_1$ with $\vartheta_{2,3,4}$ according to the previous diagram. One can show these relations by observing that both sides satisfy the same difference equations. (Those of the Eisenstein series have been discussed in the previous subsection.) For the reader's convenience we list the first few conversions here.
\begin{align}\label{Ek-thetap}
E_1\left[\begin{matrix}
+1 \\ z
\end{matrix}
\right] = & \ \frac{1}{2\pi i} \frac{\vartheta'_1(\mathfrak{z})}{\vartheta_1(\mathfrak{z})}\ , \\
E_2\left[\begin{matrix}
+1 \\ z
\end{matrix}
\right] = & \ \frac{1}{8\pi^2}\frac{\vartheta_1''(\mathfrak{z})}{\vartheta_1(\mathfrak{z})} - \frac{1}{2} E_2 \ , \\
E_3\left[\begin{matrix}
+1 \\ z
\end{matrix}
\right] = & \ \frac{i}{48\pi^3} \frac{\vartheta'''_1(\mathfrak{z})}{\vartheta_1(\mathfrak{z})}
- \frac{i}{4\pi}\frac{\vartheta'_1(\mathfrak{z})}{\vartheta_1(\mathfrak{z})} E_2, \\
E_4\left[\begin{matrix}
+1 \\ z
\end{matrix}\right] = & \ - \frac{1}{384\pi^4} \frac{\vartheta''''_1(\mathfrak{z})}{\vartheta_1(\mathfrak{z})} + \frac{1}{16\pi^2}E_2 \frac{\vartheta''_1(\mathfrak{z})}{\vartheta_1(\mathfrak{z})} - \frac{1}{4} \left(E_4 + \frac{1}{2}(E_2)^2\right) \\
E_5\left[\begin{matrix}
+1 \\ z
\end{matrix}\right]
= & \ - \frac{i}{3840 \pi^5} \frac{\vartheta^{(5)}_1(\mathfrak{z})}{\vartheta_1(\mathfrak{z})} + \frac{i}{96\pi^3}E_2 \frac{\vartheta_1^{(3)}(\mathfrak{z})}{\vartheta_1(\mathfrak{z})} - \frac{i}{8\pi}\left(E_4 + \frac{1}{2}(E_2)^2\right)\frac{\vartheta_1'(\mathfrak{z})}{\vartheta_1(\mathfrak{z})} \\
E_6\left[\begin{matrix}
+ 1 \\ z
\end{matrix}\right]
= & \ \frac{1}{46080\pi^6} \frac{\vartheta^{(6)}_1(\mathfrak{z})}{\vartheta_1(\mathfrak{z})} - \frac{1}{768\pi^4}E_2 \frac{\vartheta_1^{(4)}(\mathfrak{z})}{\vartheta_1(\mathfrak{z})} + \frac{1}{32\pi^2} \left(E_4 + \frac{1}{2}(E_2)^2\right) \frac{\vartheta_1^{(2)}(\mathfrak{z})}{\vartheta_1(\mathfrak{z})} \nonumber\\
& \ - \frac{1}{6}\left(E_6 + \frac{3}{4}E_4 E_2 + \frac{1}{8}E_2^3\right) \ .
\end{align}
From the above conversion one computes the residues of Eisenstein series,
\begin{align}
\mathop{\operatorname{Res}}_{z \to 1}\frac{1}{z}E_k\left[\begin{matrix}
+ 1 \\ z
\end{matrix}\right] = \delta_{k1} \ ,
\qquad
\mathop{\operatorname{Res}}_{z \to q^{\frac{1}{2} + n}}\frac{1}{z}E_k\left[\begin{matrix}
- 1 \\ z
\end{matrix}\right] = \frac{1}{2^{k - 1} (k - 1)!} \ .
\end{align}
Moreover, the Eisenstein series satisfy the following relations
\begin{align}\label{duplication-Eisenstein}
\sum_{\pm}E_k\left[\begin{matrix}
\phi \\ \pm z
\end{matrix}\right](\tau) = & \ 2 E_k\left[\begin{matrix}
\phi \\ z^2
\end{matrix}\right](2\tau) \ , \nonumber \\
\sum_{\pm} \pm E_k\left[\begin{matrix}
\phi \\ \pm z
\end{matrix}\right](\tau)
= & \ -2 E_k\left[\begin{matrix}
\phi \\ z^2
\end{matrix}\right](2\tau)
+ 2 E_k\left[\begin{matrix}
\phi \\ z
\end{matrix}\right](\tau)\ , \nonumber
\\
E_k\left[\begin{matrix}
+ 1\\z
\end{matrix}\right](2\tau)
+ E_k\left[\begin{matrix}
- 1\\z
\end{matrix}\right](2\tau) = & \
\frac{2}{2^k}E_k\left[\begin{matrix}
+ 1 \\ z
\end{matrix}\right] \ ,\\
E_k\left[\begin{matrix}
+ 1\\z
\end{matrix}\right](2\tau)
- E_k\left[\begin{matrix}
- 1\\z
\end{matrix}\right](2\tau) = & \
- \frac{2}{2^k}E_k\left[\begin{matrix}
+ 1 \\ z
\end{matrix}\right](\tau)
+ 2 E_k\left[\begin{matrix}
+ 1 \\ z
\end{matrix}\right](2\tau)\ , \nonumber
\\
\sum_{\pm \pm} E_k\left[\begin{matrix}
\pm 1 \\ \pm z
\end{matrix}\right](\tau) = & \ \frac{4}{2^k}E_k\left[
\begin{matrix}
+ 1 \\ z^2
\end{matrix}\right](\tau)\ . \nonumber
\end{align}
Applying the shift $z \to z q^{\frac{1}{2}}$, one can also generate similar formulas with $E_k\Big[\substack{ \pm 1 \\ *}\Big] \rightarrow E_k\Big[\substack{ \mp 1 \\ *}\Big]$. These formulas are generalizations of the duplication formulas, for instance, the last identity at $k = 1$ reduces to the duplication formula (\ref{duplication}). Combining the duplication formulas and (\ref{theta-derivative-formula}), one finds the useful identity
\begin{align}\label{Eisenstein-identity-1}
E_1\left[\begin{matrix}
+ 1 \\ z
\end{matrix}\right]
- E_1\left[\begin{matrix}
- 1 \\ z
\end{matrix}\right]
= \frac{\eta(\tau)^3}{2i} \frac{\vartheta_1(2 \mathfrak{z})\vartheta_4(0)^2}{\vartheta_1(\mathfrak{z})^2 \vartheta_4(\mathfrak{z})^2}\ .
\end{align} | {"config": "arxiv", "file": "2112.09705/appendices/specialfunctions.tex"} |
\begin{document}
\title{Fixed Points for Stochastic Open Chemical Systems}
\author{V.A.~Malyshev}
\maketitle
\begin{abstract}
In the first part of this paper we give a short review of the hierarchy
of stochastic models related to physical chemistry. At the base
of this hierarchy there are two models --- stochastic chemical kinetics
and the Kac model for the Boltzmann equation. Classical chemical kinetics
and chemical thermodynamics are obtained as some scaling limits in
the models, introduced below. In the second part of this paper we
specify some simple class of open chemical reaction systems, where
one can still prove the existence of attracting fixed points. For
example, Michaelis\tire Menten kinetics belongs to this class. At
the end we present a simplest possible model of the biological network.
It is a network of networks (of closed chemical reaction systems,
called compartments), so that the only source of nonreversibility
is the matter exchange (transport) with the environment and between
the compartments.
\end{abstract}
Keywords: chemical kinetics, chemical thermodynamics, Kac model, mathematical
biology
\section{Introduction}
Relation between the existing (mathematical) physical theory and future
mathematical biology seems to be very intimate. For example, equilibrium
is a common state in physics but in biology equilibrium means death.
Biology should be deeply dynamical but this goal seems unreachable
in full extent: even in simplest physical situations the time consuming
complexity of any study of local dynamics is out of the present state
of art. Thus the only possibility would be to consider simpler dynamical
models (mean field etc.) but to go farther in their structure. The
obvious first step should have been related to chemical kinetics and
chemical thermodynamics. Here we present a review of these first results
and discuss what should be the second step.
In the first part of this paper we give a short review (in more general
terms than in \cite{Mal3}) of the hierarchy of stochastic models,
related to physical chemistry. At the base of this hierarchy there
are two models --- stochastic chemical kinetics and the Kac model
for the Boltzmann equation. Classical chemical kinetics and chemical thermodynamics
are obtained as some scaling limits in the models, introduced below.
If some physical conditions, such as reversibility, are assumed for a closed
(without matter exchange) system, then we have sufficiently simple
behaviour: one can prove convergence to a fixed point. However, in
many models of physical chemistry and biology, no reversibility condition
is assumed, and the behaviour can be as complicated as one can imagine.
Here we have already some gap between physics and biology, and it
is necessary to fill in this gap. In the second part of this paper
we specify some simple class of open chemical reaction systems, where
one can still prove the existence of attracting fixed points. For
example, Michaelis\tire Menten kinetics belongs to this class. At
the end we present a simplest possible model of the biological network.
It is a network of networks (of closed chemical reaction systems,
called compartments), so that the only source of nonreversibility
is the matter exchange (transport) with the environment and between
the compartments.
\section{Microdynamics}
Any molecule of mass $m$ can be characterized by translational degrees
of freedom (velocity $v\in\R^{3}$, coordinate $x\in\R^{3}$) and
internal, or chemical (for example, rotational and vibrational) degrees
of freedom. Internal degrees of freedom include the type $j=1,\ldots,J$
of the molecule and internal energy functionals $K_{j}(z_{j}),z_{j}\in\mathbf{K}_{j}$,
in the space $\mathbf{K}_{j}$ of internal degrees of freedom. It
is often assumed, see \cite{LanLif}, that the total energy of the
molecule $i$ is\[
E_{i}=T_{i}+K_{j}(z_{j,i}).\]
We consider here the simplest choice when $K_{j}$ is the fixed nonnegative
number, depending only on $j$. It can be interpreted as the energy
of some chemical bonds.
We consider the set $\mathbf{X}$ of countable locally finite configurations
$X\!\!=\!\{x_{i},v_{i},j_{i}\}$ of particles (molecules) in $\R^{3}$,
where each particle $i$ has a coordinate $x_{i}$, velocity $v_{i}$
and type $j_{i}$. Denote by $\mathfrak{M}$ the system of all probability
measures on $\mathbf{X}$ with the following properties:
\begin{itemize}
\item Coordinates of these particles are distributed as the homogeneous
Poisson point field of particles on $\R^{3}$ with some density $c$.
\item The vectors $(v_{i},j_{i})$ are independent of the space coordinates
and of the other particles. The velocity $v$ of a particle is assumed
to be uniformly distributed on the sphere with the radius defined
by the kinetic energy $T={m}(v_{1}^{2}+v_{2}^{2}+v_{3}^{2})/2$ of
the particle, and the pairs $(j_{i},T_{i})$ are distributed via some
common density $p(j,T),$ \[
\sum_{j}\int p(j,T)\, dT=1.\]
\end{itemize}
Our first goal will be to define random dynamics on $\mathbf{X}$
(or deterministic dynamics on $\mathfrak{M}$). It is defined by a
probability space $(\mathbf{X}^{0,\infty},\mu)$, where $\mu=\mu^{0,\infty}$
is a probability measure on the set $\mathbf{X}^{0,\infty}$ of countable
arrays $X^{0,\infty}(t)=\{x_{i}(t),v_{i}(t),j_{i}(t)\}$ of trajectories
$x_{i}(t),v_{i}(t),j_{i}(t)$ on intervals $I_{i}=(\tau_{i},\eta_{i})$,
where $0\leq\tau_{i}<\eta_{i}\leq\infty$. The measure $\mu$ belongs
to the set of measures $\mathfrak{M}^{0,\infty}$ on $X^{0,\infty}(t)$,
defined by the following properties:
\begin{itemize}
\item If for any fixed $0\leq t<\infty$ we denote by $\mu(t)$ the measure
induced by $\mu$ on $\mathbf{X}$, then $\mu(t)\in\mathbf{\mathfrak{M}}$.
\item The trajectories $x_{i}(t),v_{i}(t),j_{i}(t)$ are independent, each
of them is a Markov process (not necessarily time homogeneous). This
process is defined by initial measure $\mu(0)$ on $\mathbf{X}$,
by birth and death rates, defining time moments $\tau_{i},\eta_{i}$,
and by transition probabilities at time $t$, independent of the motion
of individual particles but depending on the concentration densities
$c_{t}(j,T)$ at time $t$.
\item The evolution of the pair $(j,T)$ for the individual particle in-between
the birth and death moments is defined by the following Kolmogorov
equations, which control the one-particle process \begin{equation}
\frac{\partial p_{t}(j_{1},T_{1})}{\partial t}=\sum_{j}\!\int\!(P(t;j_{1},T_{1}|j,T)\, p_{t}(j,T)-P(t;j,T|j_{1},T_{1})p_{t}(j_{1},T_{1}))\, dT\label{kol_1}\end{equation}
defining a Markov process with distributions $p_{t}(j,T)$. The probability
kernel $P$ depends, however, on $p_{t}(j,T)$ itself; we shall make
this precise below.
\end{itemize}
The dynamics we will describe here is based on some earlier mathematical
models and central dogmas of physical chemistry. The simplest way
to rigorously introduce the measure $\mu$ is by the limit of finite
volume random dynamics. Initial conditions for this dynamics are as
follows: at time $0$ some number $n^{(\Lambda)}(0)$ of molecules
are thrown uniformly in the cube $\Lambda$, their parameters $(j,T)$
are independent and have some common density $p_{0}(j,T)$, not depending
on $\Lambda$. Let $n_{j}(t)=n_{j}^{(\Lambda)}(t)$ be the number
of type $j$ molecules at time $t$.
\medskip{}
\noindent \underline{\sl Input-Output (I/O) processes}
\smallskip{}
Heuristically, our time scale is such that per unit of time each
molecule makes $O(1)$ transitions. Then for any macroquantity $q$ of
substance, an $O(q)$ part of it may change. One should choose time scales
for input-output processes correspondingly.
The (output) rate of the jumps $n_{j}\rightarrow n_{j}-1$ is denoted
by $\lambda_{j}^{(0)}$, that is with this rate a molecule of type
$j$ is chosen randomly and deleted from $\Lambda$. Similarly, the
(input) rate of the jumps $n_{j}\rightarrow n_{j}+1$ is denoted by
$\lambda_{j}^{(i)}$, that is a molecule of type $j$ is put uniformly
in $\Lambda$ with this rate. Dependence of both rates on the concentrations
can be quite different. To get limiting I/O process after (canonical)
scaling one can assume that \begin{equation}
\lambda_{j}^{(0)}=f_{j}^{(0)}\Lambda,\qquad\lambda_{j}^{(i)}=f_{j}^{(i)}\Lambda,\label{IOfunc}\end{equation}
where $f_{j}^{(i)},f_{j}^{(0)}>0$ are some functions of all $c_{1}^{(\Lambda)},\ldots,c_{J}^{(\Lambda)}$,
$c_{j}^{(\Lambda)}={n_{j}}/{\Lambda}$ are the concentrations. However,
mostly we restrict ourselves to the case when $f_{j}$ are functions
of $c_{j}$ only. In other words, an individual type $j$ molecule
leaves the volume with rate ${f_{j}^{(0)}(c_{j}^{(\Lambda)})}/{c_{j}^{(\Lambda)}}$.
Denote \begin{equation}
f_{j}=f_{j}^{(i)}-f_{j}^{(0)}.\label{IOfunc1}\end{equation}
\medskip{}
\noindent \underline{\sl Stochastic chemical kinetics$\vphantom{,}$}
\smallskip{}
The hierarchy presented here depends on what parameters of a molecule
are taken into account. In stochastic chemical kinetics only type
is taken into account. The state of the system is given by the vector
$(n_{1},\ldots,n_{J})$. There are also $R$ reaction types and the
reaction of the type $r=1,\ldots,R$, can be written as\[
\sum_{j}\nu_{jr}M_{j}=0\]
where we denote by $M_{j}$ a type $j$ molecule, and the stoichiometric
coefficients $\nu_{jr}>0$ for the products and $\nu_{jr}<0$ for
the substrates. One event of type $r$ reaction corresponds to the
jump $n_{j}\rightarrow n_{j}+\nu_{jr},\; j=1,\ldots,J.$ Classical
polynomial expressions (most commonly used)\[
\lambda_{r}=A_{r}\prod\limits _{j:\nu_{jr}<0}n_{j}^{-\nu_{jr}}\]
for the rates of these jumps define a continuous time Markov process,
a kind of random walk in $\Z_{+}^{J}$. This dependence can be heuristically
deduced from local microdynamics. However, polynomial dependence is
not the only possibility, see \cite{Sava}. Moreover, there can be
various scalings for these rates. The scaling\[
A_{r}=a_{r}\Lambda^{\gamma_{r}+1},\qquad\gamma_{r}=\sum_{j:\nu_{jr}<0}\nu_{jr}\]
where $a_{r}$ are some constants and $\Lambda$ is some large parameter,
is called canonical because the classical chemical kinetics equations
\begin{equation}
\frac{dc_{j}(t)}{dt}=\sum_{r}R_{j,r}(\vec{c}(t))\label{cck}\end{equation}
for the densities\[
c_{j}(t)=\lim_{\Lambda\rightarrow\infty}\Lambda^{-1}n_{j}^{(\Lambda)}(t)\]
follow in the large $\Lambda$ limit, with some polynomials $R_{j,r}$
(see below and \cite{MaPiRy}).
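To make the canonical scaling concrete, here is a sketch of a standard Gillespie-type simulation of the jump process (Python with numpy; the stoichiometry and constants are placeholders of our choosing, not part of the model):
\begin{verbatim}
import numpy as np

def gillespie(n0, nu, a, Lam, t_max, seed=0):
    # Jump process with canonically scaled rates
    # lambda_r = a_r * Lam**(gamma_r + 1) * prod_{j: nu_jr<0} n_j**(-nu_jr).
    rng = np.random.default_rng(seed)
    n, t, R = np.array(n0, dtype=float), 0.0, nu.shape[1]
    while t < t_max:
        lam = np.empty(R)
        for r in range(R):
            sub = nu[:, r] < 0                    # substrates of reaction r
            gamma = nu[sub, r].sum()
            lam[r] = a[r] * Lam**(gamma + 1) * np.prod(n[sub] ** (-nu[sub, r]))
        if lam.sum() == 0:
            break
        t += rng.exponential(1 / lam.sum())       # exponential waiting time
        n += nu[:, rng.choice(R, p=lam / lam.sum())]
    return n / Lam                                # concentrations c_j

# e.g. the binary reaction A + B -> C: nu = np.array([[-1], [-1], [1]])
\end{verbatim}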
\begin{problem} It is important to give (at least heuristic) local
probabilistic models explaining dependencies and scalings for the rates
other than polynomial ones, for example arbitrary homogeneous functions
as in \cite{Sava}. \end{problem}
It is assumed that at time $t=0$ as $\Lambda\rightarrow\infty,$
$\Lambda^{-1}n^{(\Lambda)}(0)\longrightarrow c(0).$
Chronologically, the first paper in stochastic chemical kinetics was
by Leontovich \cite{Leo}, which appeared from discussions with A.N.~Kolmogorov.
For other references see \cite{GadLeeOth}. In the 1970s stochastic chemical
kinetics for small $R,J$ was studied intensively, see the reviews \cite{McQua,Kal}.
At the same time, general techniques for obtaining the limiting equations
(\ref{cck}) appeared in probability theory \cite{VenFre,EthKur}.
Now there are many experimental arguments in favor of introducing
stochasticity in chemical kinetics \cite{AdAr,ArRoAd,GadLeeOth}.
\medskip{}
\noindent \underline{\sl Stochastic energy redistribution}
\smallskip{}
In the classical Kac model \cite{Kac} the molecules $i=1,\ldots,N$
have the same type, but each molecule $i$ has a velocity $v_{i}$
or kinetic energy $T_{i}$. In collisions the velocities (or the kinetic
energies) change somehow. There is still continuing activity with
deeper results concerning the Kac model, in particular on the convergence
rate; see for example \cite{CaCaLo}.
One should merge Kac type models with stochastic chemical kinetics.
Then each molecule $i$ acquires a pair $(j_{i},T_{i})$ of parameters:
type $j$ and kinetic energy $T$. However, this is not sufficient
to get energy redistribution: one should also introduce {}``chemical''
energy. As is commonly accepted, the general idea is that the energy
of chemical bonds of a substrate molecule can be redistributed between
product molecules, part of the energy transforming into heat. To describe
this phenomenon in well-defined terms we introduce fast and slow reactions.
Fast reactions do not touch chemical energy, that is types, but slow
reactions may change both kinetic and chemical energies, thus providing
energy redistribution between heat and chemical energy.
Examples of reactions:
1. All chemical reactions are assumed slow --- unary (unimolecular)
$A\rightarrow B$, binary $A+B\rightarrow C+D$, synthesis $A+B\rightarrow C$,
decay $C\rightarrow A+B$ etc. In any considered reaction total
energy conservation is assumed, that is, the sum of the total energies
on the left side is equal to the sum of the total energies on the right
side of the reaction equation.
2. Fast binary reactions of the type $A+B\rightarrow A+B$, which
correspond to elastic collisions and draw the system towards equilibrium.
3. Fast process of heat exchange with the environment, with reactions
of the type $A+B\rightarrow A+B$, but where one of the molecules
is an outside molecule.
If there is no input and output, then the Markov jump process is the
following. Consider any subset $i_{1}<\ldots<i_{m(r)}$ of $m(r)=-\sum_{j:\nu_{jr}<0}\nu_{jr}$
substrate molecules for reaction of type $r$.
On the time interval $(t,t+dt)$ these molecules have a {}``collision''
with probability ${\Lambda^{-(m(r)-1)}}b_{r}\, dt$, where $b_{r}$
is some constant. Let the parameters of these molecules be $j_{k}=j(i_{k}),T_{k}=T(i_{k})$.
Denote \[
T=\sum_{i=1}^{m}T_{i},\qquad K=\sum_{i=1}^{m}K_{j_{i}}\]
and $T^{\prime},K^{\prime}$ are defined similarly for the parameters
$j_{1}^{\prime},\ldots,j_{m^{\prime}}^{\prime},T_{1}^{\prime},\ldots,T_{m^{\prime}}^{\prime}$
of the $m^{\prime}$ product molecules. The reaction occurs only if \begin{equation}
T+K-K^{\prime}\geq0\label{energycon}\end{equation}
and then the energy parameters of the product particles at time $t+0$
have the distribution defined by some conditional density $P_{r}(T_{1}^{\prime},\ldots,T_{m^{\prime}-1}^{\prime}|T_{1},\ldots,T_{m})$
on the set $0\leq T_{1}^{\prime}+\ldots+T_{m^{\prime}-1}^{\prime}\leq T+K-K^{\prime}$.
By energy conservation then\[
T_{m^{\prime}}^{\prime}=T+K-K^{\prime}-\sum_{i=1}^{m^{\prime}-1}T_{i}^{\prime}.\]
This defines a Markov process $M_{\bar{A}}(t)$ on the finite-dimensional
space (note that $T\in R_{+}$) \[
Q_{\bar{A}}=\bigcup_{(n_{1},\ldots,n_{J})}R_{+}^{n_{1}}\times\ldots\times R_{+}^{n_{J}}\]
where the union is over all vectors $(n_{1},\ldots,n_{J})$ such
that for the array $\bar{A}=(A_{1},\ldots,A_{Q})$ of positive integers
and for any atom type $q=1,\ldots,Q,$ \[
\sum_{j}n_{j}a_{jq}=A_{q}\]
where $a_{jq}$ is the number of atoms of type $q$ in the $j$ type
molecule. In other words, each atom type defines the conservation
law $A_{q}=\mathrm{const}$.
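Schematically, one slow-reaction event with the energy constraint (\ref{energycon}) can be sketched as follows (Python; for brevity the conditional density $P_{r}$ is replaced by a uniform draw on the admissible simplex, which is our simplification):
\begin{verbatim}
import numpy as np

def reaction_event(T_sub, K_sub, K_prod, rng=np.random.default_rng(2)):
    # T_sub, K_sub: kinetic/chemical energies of the substrate molecules;
    # K_prod: chemical energies of the products. Returns the product
    # kinetic energies, or None when T + K - K' < 0 forbids the reaction.
    E = sum(T_sub) + sum(K_sub) - sum(K_prod)       # T + K - K'
    if E < 0:
        return None
    m1 = len(K_prod)
    cuts = np.sort(rng.uniform(0, E, size=m1 - 1))  # stands in for P_r
    T_prod = np.diff(np.concatenate(([0.0], cuts, [E])))
    return T_prod                                   # sums to E: conservation
\end{verbatim}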
Now, using conditional densities $P_{r}$, we define the {}``one-particle''
transition kernel \begin{equation}
P(t;j_{1},T_{1}|j,T)=\sum_{r}P^{(r)}(t;j_{1},T_{1}|j,T),\label{kernel}\end{equation}
that is the sum of terms $P^{(r)}$ corresponding to reactions $r$,
which we define for some reaction types. For unimolecular reactions
$j\rightarrow j_{1}$ the product kinetic energy $T_{1}$ is uniquely
defined, thus $P_{j\rightarrow j_{1}}$ is trivial and for some constants
$u_{jj_{1}},$ \[
P^{(j\rightarrow j_{1})}=u_{jj_{1}}\delta(T+K-K_{1}-T_{1}).\]
For binary reactions $j,j'\rightarrow j_{1},j'_{1},$ \begin{align*}
P^{(j,j'\rightarrow j_{1},\, j'_{1})} & =\sum_{j^{\prime},\, j_{1}^{\prime}}\int dT^{\prime}dT_{1}^{\prime}b_{j,j'\rightarrow j_{1},\, j'_{1}}P_{j,j'\rightarrow j_{1},\, j'_{1}}(T_{1}|T,T^{\prime})\\
& \quad\times c_{t}(j^{\prime},T^{\prime})\,\delta(T+K+T^{\prime}+K^{\prime}-K_{1}-T_{1}-K_{1}^{\prime}-T_{1}^{\prime}).\end{align*}
In particular, for {}``fast'' collisions (which do not change type)
we have the same transition kernel but with $j=j_{1},j'=j'_{1}$.
We see that $P^{(j,j'\rightarrow j_{1},j'_{1})}$ depends on the concentrations
$c_{t}(j,T)=p_{t}(j,T)\, c(t).$ The latter are defined via the Boltzmann
type equation \begin{align}
\frac{\partial c_{t}(j_{1},T_{1})}{\partial t} & =f_{j}(c_{j})+\sum_{j}\int\big(P(t;j_{1},T_{1}|j,T)\, c_{t}(j,T)\notag\\
& \quad-P(t;j,T|j_{1},T_{1})\, c_{t}(j_{1},T_{1})\big)\, dT\label{bol_1}\end{align}
which is similar to the Kolmogorov equation but includes also birth
and death terms.
For all the technicalities about the derivation of the limiting processes,
see the Appendix of \cite{Mal3}.
\medskip{}
\underline{\sl Space dynamics}
\noindent \smallskip{}
To get thermodynamics we also need volume, pressure, etc. Thus it is
necessary to define a space dynamics and also a scaling limit.
In the jump process, defined above, each particle $i$ independently
of the others, in random time moments \[
\tau_{i}(\omega)<t_{1i}(\omega)<\ldots<t_{in}(\omega)<\ldots<\sigma_{i}(\omega)\]
changes its type and kinetic energy (thus velocity). For each trajectory
$\omega$ of the jump process we define the local space dynamics as
follows. It does not change types, energies, velocities, but only
coordinates. If at jump moment $t$ of the trajectory $\omega$ the
particle acquires velocity $\vec{v}(\omega)=\vec{v}(t+0,\omega)$
and has coordinate $\vec{x}(t,\omega)$, then at time $t+s$ \begin{equation}
\vec{x}(t+s,\omega)=\vec{x}(t,\omega)+\vec{v}(\omega)s\label{trans}\end{equation}
unless the next event (jump), concerning this particle, of the trajectory
$\omega$ occurs on the time interval $\left[t,t+s\right]$. We assume
periodic boundary conditions or elastic reflection from the boundary.
We denote this process by $X_{\Lambda}(t)$, the state space of this
process is the sequence of finite arrays $X_{i}=\left\{ j_{i},\vec{x}_{i},\vec{v}_{i}\right\} $.
Thus each particle $i$ has a piecewise linear trajectory in the time
interval $(\tau_{i},\sigma_{i})$.
\begin{theorem} The thermodynamic limit $X^{0,\infty}(t)=\mathfrak{X}_{c}(t)$
of the processes $X_{\Lambda}(t)$ exists and its distribution belongs
to $\mathfrak{M}^{0,\infty}$. \end{theorem}
\textbf{Proof} See \cite{Mal3}.
\section{Scaling limit}
Now we define more restricted (than $\mathfrak{M}$) manifolds of
probability measures on $\mathbf{X}$: the grand canonical ensemble
for a mixture of ideal gases, with one important difference --- fast
degrees of freedom are Gaussian and slow degrees of freedom are constants
$K_{j}$, depending only on $j$.
We consider a finite number $n_{j}$ of particles of types $j=1,\ldots,J$
in a finite volume $\Lambda$. Recall that for the ideal gas of the
$j$ type particles the grand partition function of the Gibbs distribution
is \begin{align*}
\Theta(j,\beta) & =\sum_{n_{j}=0}^{\infty}\frac{1}{n_{j}!}\bigg(\prod\limits _{i=1}^{n_{j}}\int_{\Lambda}\int_{\R^{3}}\int_{\mathbf{I}_{j}}d\vec{x}_{j,i}\, d\vec{v}_{j,i}\bigg)\exp\beta\bigg(n_{j}(\mu_{j}-K_{j})-\sum_{i=1}^{n_{j}}\frac{m_{j}v_{j,i}^{2}}{2}\bigg)\\
& =\sum_{n_{j}=0}^{\infty}\frac{1}{n_{j}!}(\Lambda\lambda_{j})^{n_{j}}\exp\beta(\mu_{j}-K_{j})n_{j}=\exp(\Lambda\lambda_{j}\exp\beta\hat{\mu}_{j})\end{align*}
where \[
\lambda_{j}=\beta^{-{3}/{2}}\Bigl(\frac{2\pi}{m_{j}}\Bigr)^{{3}/{2}},\qquad\hat{\mu}_{j}=\mu_{j}-K_{j}.\]
The general mixture distribution of $J$ types is defined by the partition
function $\Theta=\prod_{j=1}^{J}\Theta(j,\beta)$. The limiting space
distribution of type $j$ particles is the Poisson distribution with
concentration $c_{j}$. We will need the formulas relating $c_{j}$
and $\mu_{j}$: \begin{align}
c_{j} & =\frac{\langle n_{j}\rangle_{\Lambda}}{\Lambda}=\beta^{-1}\frac{\partial\ln\Theta}{\partial\mu_{j}}=\lambda_{j}\exp\beta\hat{\mu}_{j},\notag\\
\mu_{j} & =\beta^{-1}\ln\Bigl(\frac{\langle n_{j}\rangle}{\Lambda}\lambda_{j}^{-1}\Bigr)=\mu_{j,0}+\beta^{-1}\ln c_{j}+K_{j},\end{align}
where $\mu_{j,0}=-\beta^{-1}\ln\lambda_{j}$ is the so-called standard
chemical potential; it corresponds to the unit concentration $c_{j}=1$.
We put $c=c_{1}+\ldots+c_{J}$.
We will need Gibbs free energy $G$ and the limiting Gibbs free energy
per unit volume\[
g=\lim_{\Lambda\rightarrow\infty}\frac{G}{\Lambda}=\sum\mu_{j}c_{j}.\]
Define by $\mathfrak{M}_{0}\subset\mathfrak{M}$ the set of all such
measures for any $\beta,\mu_{1},\ldots,\mu_{J}$, and by $\mathfrak{M}_{0,\beta}$
its subset with fixed $\beta$.
In the process defined above the kinetic energies are independent
but may not have $\chi^{2}$ distributions, that is, the velocities
may not have the Maxwell distribution. We force them to have it by
specifying a trend-to-equilibrium process (elastic collisions) and a
heat-transfer process (elastic collisions with outside molecules).
Assume that there is a family $M(a),0\leq a<\infty$, of distributions
$\mu_{a}$ on $R_{+}$ with the following property. Take two i.i.d.\
random variables $\xi_{1},\xi_{2}$ with the distribution $M(a)$.
Then their sum $\xi=\xi_{1}+\xi_{2}$ has distribution $M(2a)$. We
assume also that $a$ is the expectation of the distribution $M(a)$.
Denote $p(\xi_{1}|\xi)$ the conditional density of $\xi_{1}$ given
$\xi$, defined on the interval $[0,\xi]$. We put\[
P^{(f)}(T_{1}|T,T^{\prime})=p(T_{1}|T+T^{\prime})\]
and of course $T_{1}^{\prime}=T+T^{\prime}-T_{1}$. Denote the corresponding
generator by $H_{N}^{(f)}$.
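For instance (our illustration; the text leaves $M(a)$ abstract), the Gamma family with a fixed scale $\theta>0$ has all the required properties:
\[
M(a)=\mathrm{Gamma}(a/\theta,\theta),\qquad\mathrm{Gamma}(\alpha_{1},\theta)*\mathrm{Gamma}(\alpha_{2},\theta)=\mathrm{Gamma}(\alpha_{1}+\alpha_{2},\theta),\]
its expectation is $(a/\theta)\theta=a$, and the conditional law of $\xi_{1}$ given $\xi=\xi_{1}+\xi_{2}$ is that of $\xi B$ with $B\sim\mathrm{Beta}(a/\theta,a/\theta)$.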
We model heat transfer similarly to the fast binary reactions, as
random {}``collision'' with outside molecules in an infinite bath,
which is kept at constant inverse temperature $\beta$. The energy
of each outside molecule is assumed to have $\chi^{2}$ distribution
with $3$ degrees of freedom and with parameter $\beta$. More exactly,
for each molecule $i$ there is a Poisson process with some rate $h$.
Denote by $t_{ik},k=1,2,\ldots,$ its jump moments, when it undergoes
collisions with outside molecules. At these moments the kinetic energy
$T$ of the molecule $i$ is transformed as follows. The new kinetic
energy $T_{1}$ after the transformation is chosen according to the
conditional density $p$ on the interval $[0,T+\xi_{ik}]$, where
$\xi_{ik}$ are i.i.d.\ random variables having the $\chi^{2}$ distribution
with density $cx^{{1}/{2}}\exp(-\beta x)$. Denote the corresponding
conditional density by $P^{(\beta)}(T_{1}|T)$. In fact, this process
amounts to $N$ independent one-particle processes; denote the corresponding
generator by $H_{N}^{(\beta)}$.
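A sketch of one such heat-exchange event (Python; the symmetric Beta choice for the conditional density $p$ is ours, consistent with the Gamma example above):
\begin{verbatim}
import numpy as np

def heat_exchange(T, beta, rng=np.random.default_rng(1)):
    # One collision with an outside molecule at inverse temperature beta.
    # The bath energy xi has density ~ x**(1/2) * exp(-beta*x), i.e. it is
    # Gamma(3/2, 1/beta); the new energy T1 is drawn from a conditional
    # density p on [0, T + xi] (here a symmetric Beta, a sketch only).
    xi = rng.gamma(shape=1.5, scale=1.0 / beta)
    return rng.beta(1.5, 1.5) * (T + xi)        # new kinetic energy T1
\end{verbatim}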
Thus we can write the generator as
\[
H=H(s_{f},s_{\beta})=H^{(r)}+s_{f}H^{(f)}+s_{\beta}H^{(\beta)}\]
where $H^{(r)}$ corresponds to slow reactions and $s_{f},s_{\beta}$
are some large scaling factors, which eventually will tend to infinity.
We will force the kinetic energies to become $\chi^{2}$ using the
limit $s_{f}\rightarrow\infty$.
\begin{theorem} The limits in distribution \[
\mathfrak{C}_{c}(t)=\lim_{s_{f}\rightarrow\infty}\mathfrak{X}_{c}(t),\qquad\mathfrak{O}_{c,\beta}(t)=\lim_{s_{\beta}\rightarrow\infty}\mathfrak{C}_{c}(t)\]
exist for any fixed $t$. Moreover, the manifold $\mathfrak{M}_{0}$
is invariant with respect to the process $\mathfrak{C}_{c}(t)$ for
any fixed rates $u,b,h$. The manifolds $\mathfrak{M}_{0,\beta}$
are invariant with respect to $\mathfrak{O}_{c,\beta}(t)$. \end{theorem}
Thus, in the process $\mathfrak{C}_{c}(t)$ the velocities have Maxwell
distribution at any time moment. For the process $\mathfrak{O}_{c,\beta}(t)$
moreover, at any time $t$ the inverse temperature is equal to $\beta$,
that is, there is heat exchange with the environment. Our individual
molecules still undergo a Markov process, but a simplified one. At the same
time, the macrovariables undergo deterministic evolution on $\mathfrak{M}_{0,\beta}$.
\medskip{}
\underline{\sl Markov property --- chemical kinetics restoration }
\noindent \smallskip{}
Note that initially the jump rates depend on the energies. We show
that, after the scaling limit, the process restricted to the types
will also be Markov. We assume that there are only unary and binary
reactions, but we do not need a reversibility assumption here.
\begin{lemma} The process projected on types, that is, the process
$(n_{1}(t),\ldots,$ $n_{J}(t))$, is Markov. It is time homogeneous
for a unary reaction system and time inhomogeneous in general. \end{lemma}
\textbf{Proof} \
Recall that the jump rates were assumed to have the simplest energy dependence,
that is collisions occur independently of the energies, but reactions
occur only if energy condition (\ref{energycon}) is satisfied. Write
$g_{\beta}(r)=\P(\left|\xi\right|>r)$ for the $\chi^{2}$ random
variable $\xi$ with inverse temperature $\beta$.
Assume $K_{1}\leq\ldots\leq K_{J}$ and consider first the case of
unary reactions. It is easy to see that the process $\mathfrak{O}_{c,\beta}(t)$
can be reduced to the Markov chain on $\left\{ 1,\ldots,J\right\} $
with rates $v_{jj^{\prime}}=u_{jj^{\prime}}$ if $j\geq j^{\prime}$,
and $v_{jj'}=g_{\beta}(K_{j^{\prime}}-K_{j})u_{jj^{\prime}}$ if $j<j^{\prime}$.
We used here that the kinetic energy distribution is $\chi^{2}$ at
any time moment.
Similarly for the binary reaction $j,j^{\prime}\rightarrow j_{1},\, j_{1}^{\prime}$
we define the renormalized Markov transition rates as $c(j,j^{\prime}\rightarrow j_{1},\, j_{1}^{\prime})=b_{j,j^{\prime}\rightarrow j_{1},j_{1}^{\prime}}$
if $K_{j}+K_{j^{\prime}}\geq K_{j_{1}}+K_{j_{1}^{\prime}}$ and \[
c(j,j^{\prime}\rightarrow j_{1},j_{1}^{\prime})=b_{j,j^{\prime}\rightarrow j_{1},j_{1}^{\prime}}\P\{\left|\xi_{1}+\xi_{2}\right|>K_{j_{1}}+K_{j_{1}^{\prime}}-(K_{j}+K_{j^{\prime}})\}\]
if $K_{j}+K_{j^{\prime}}<K_{j_{1}}+K_{j_{1}^{\prime}}$. Here $\xi_{i}$
are independent and $\chi^{2}$ with inverse temperature $\beta$.
The use of the scaling limit for fast reactions is crucial here.
Thus, in the thermodynamic limit we get the equations without the
energies, that is the classical chemical kinetics \begin{equation}
\frac{dc_{j}(t)}{dt}=\sum_{r}R_{j,r}(\vec{c}(t))+f_{j}(c_{j}).\label{occk}\end{equation}
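The renormalized rates above are straightforward to tabulate; here is a sketch using the survival function of the scaled $\chi^{2}$ laws (Python with scipy, assumed available; the helper names are ours):
\begin{verbatim}
from scipy.stats import gamma

def g(beta, r, k=1):
    # P(xi_1 + ... + xi_k > r) for i.i.d. chi^2(3 dof) energies at inverse
    # temperature beta: a Gamma(3k/2, scale=1/beta) tail probability.
    return gamma.sf(r, a=1.5 * k, scale=1.0 / beta)

def unary_rate(u, K_from, K_to, beta):
    # v = u if the reaction goes energetically downhill, else g_beta * u.
    return u if K_to <= K_from else u * g(beta, K_to - K_from)
\end{verbatim}
For a binary reaction the same computation applies with $k=2$, since the sum of two independent $\chi^{2}$ energies is again Gamma distributed.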
\underline{\sl Example: monotonicity of Gibbs free energy
for closed system with only unary} \underline{\sl reactions$\vphantom{,}$}
\noindent \smallskip{}
Assume now that the continuous time Markov chain on $\{1,\ldots,J\}$
with rates $u_{jj'}$ is irreducible. We say that this Markov chain
is compatible with the equilibrium conditions \begin{equation}
\mu_{1}=\ldots=\mu_{J}\label{equi}\end{equation}
if its stationary probabilities $\pi_{j}$, or stationary concentrations
$c_{j,e}=\pi_{j}c$, satisfy the following conditions \[
\ln c_{1,e}+(\mu_{1,0}+K_{1})=\ldots=\ln c_{J,e}+(\mu_{J,0}+K_{J}).\]
\textbf{Remark} This compatibility condition should appear naturally
in local dynamics, but it is not clear how to deduce it in the mean
field dynamics. Note that reversibility is not a sufficient condition
for the compatibility condition.
To exhibit monotonicity for dynamics one needs special Lyapounov functions
in the space of distributions. For Markov chains this is the Markov
entropy with respect to the stationary measure $\pi_{j},$ \[
S_{M}=\sum p_{j}\ln\frac{p_{j}}{\pi_{j}},\]
see for example \cite{Ligg}.
Recall that the equilibrium function --- Gibbs free energy $g(t)$
--- undergoes deterministic evolution together with the parameters
$\mu_{j}$ or $c_{j}$. We will show that at any time moment it coincides
with the Markov entropy up to multiplicative and additive constants.
\begin{theorem} If the compatibility condition (\ref{equi}) holds,
then \begin{equation}
g(t)=\mu c+\frac{1}{\beta C}S_{M}(t)\label{GFE1}\end{equation}
and monotone behaviour of the Gibbs free energy density follows.
\end{theorem}
\textbf{Proof} \ We have \begin{align}
g & =\lim_{\Lambda}\frac{G}{\Lambda}=\sum_{j}c_{j}\mu_{j}=\beta^{-1}\sum_{j}c_{j}\ln c_{j}+\sum_{j}c_{j}(\mu_{j,0}+K_{j})\label{free_1}\\
& =\beta^{-1}\sum_{j}c_{j}\ln c_{j}+\sum_{j}c_{j}(\mu-\beta^{-1}\ln c_{j,e})\notag\\
& =\mu c+\beta^{-1}\sum_{j}c_{j}\ln\frac{c_{j}}{c_{j,e}}\notag\end{align}
where the first and the second equalities are the definitions; in
the third and the fourth equalities we used the formula \begin{align}
\mu_{j} & =\beta^{-1}\ln\Big(\frac{\langle n_{j}\rangle}{\Lambda}\lambda_{j}^{-1}\Big)=\mu_{j,0}+\beta^{-1}\ln c_{j}+K_{j},\intertext{where}\mu_{j,0} & =-\beta^{-1}\ln\lambda_{j}=-\beta^{-1}\Big(-\frac{d_{j}}{2}\ln\beta+\ln B_{j}\Big)\label{standard}\end{align}
is the so-called standard chemical potential; it corresponds to the
unit concentration $c_{j}=1$ for the equilibrium density, see for
example \cite{Mal3}.
At the same time\[
S_{M}=\sum p_{j}\ln\frac{p_{j}}{\pi_{j}}=C\sum c_{j}\ln\frac{c_{j}}{c_{j,e}}.\]
We see that for unary reactions one does not need a reversibility assumption.
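A quick numerical illustration of the theorem on a toy two-state unary chain (Python; all rates invented), checking that $\sum_{j}c_{j}\ln(c_{j}/c_{j,e})$, and hence $g(t)$, decreases along the kinetics:
\begin{verbatim}
import numpy as np

u12, u21, c = 1.0, 2.0, 1.0                 # toy rates, total concentration
c1e = u21 * c / (u12 + u21)                 # equilibrium concentrations
c2e = u12 * c / (u12 + u21)
c1, dt = 0.9 * c, 1e-3                      # start away from equilibrium
rel_ent = lambda x: x * np.log(x / c1e) + (c - x) * np.log((c - x) / c2e)
for _ in range(5000):
    S = rel_ent(c1)
    c1 += dt * (-u12 * c1 + u21 * (c - c1)) # unary kinetics, Euler step
    assert rel_ent(c1) <= S + 1e-12         # monotone decrease
\end{verbatim}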
\underline{\sl Monotonicity of Gibbs free energy for closed
system with binary reactions}
\noindent \smallskip{}
For binary reactions a similar result holds (we will not formulate
it formally). However, we do not have Markov evolution for the concentrations
anymore. Instead, we have the Boltzmann equation for the concentrations,
that is the so called nonlinear Markov chain on $\{1,\ldots,J\}$.
Then, instead of the Markov entropy one should take the Boltzmann entropy
with respect to some one-point distribution $p_{j}^{(0)}$ (see definitions
in \cite{MaPiRy})\[
S_{H}(t)=-\sum p_{j}(t)\ln\frac{p_{j}(t)}{p_{j}^{(0)}}\]
which coincides with the Markov entropy for ordinary Markov chains.
For the monotonic behaviour of the Boltzmann entropy, one should assume
reversibility or a more general condition --- unitarity, called local
equilibrium in \cite{MaPiRy}. Under this condition the monotonicity
of the Boltzmann entropy was proved in \cite{MaPiRy}. We get the same
formula as (\ref{GFE1}) if we replace $S_{M}$ by $-S_{H}$.
Note that under these conditions $p_{j}(t)$ is a time-inhomogeneous
Markov chain. In fact, in the long run, that is, as $t\rightarrow\infty$,
the transition rates of the one-particle inhomogeneous Markov chain,
in the vicinity of the fixed point, are asymptotically homogeneous.
This shows that the binary case is asymptotically close to the unary case.
\section{Open thermodynamic compartments}
\noindent \underline{\sl Reversible and nonreversible processes}
\smallskip{}
Our systems in finite volume evolve via Markov dynamics. It is not
known when and how this dynamics could rigorously be deduced from
the local physical laws. However, there are many arguments that reversibility
is a necessary condition for this. Reversibility is a particular case
of the unitarity property of the scattering matrix of a collision
process. It was called the local equilibrium condition in \cite{MaPiRy,FaMaPi}.
The reversibility gives strong corollaries for the scaling limits
--- 1)~Boltzmann monotonicity and 2)~attractive fixed points. We
call chemical networks with properties 1) and 2) {\em thermodynamic
compartments}. Denote the class of such systems by $\mathbf{T}$. These
systems are slightly more general than those corresponding to systems
with local physical laws (in particular, having the convergence-to-equilibrium
property). For example, any unimolecular reaction system
belongs to $\mathbf{T}$, because, as we saw above, the Markov entropy
is the Boltzmann entropy here. However, biological systems obviously
are not of class $\mathbf{T}$. There are different ways to generalize
class $\mathbf{T}$ systems.
The first one is quite common: in chemical and biological systems
stochastic processes usually are not assumed to be reversible. However,
without the reversibility assumption the time evolution could be as
complicated as possible (periodic orbits, strange attractors etc.).
That has advantages --- one can adjust to real biological situations,
and disadvantages --- too many parameters, even arbitrary functions.
Normally, the rate functions $R_{j,r}$ can be chosen rather arbitrarily;
a typical example where this methodology is distinctly pronounced
is \cite{CCCCNT}, where connections with physics are lost, etc. In other words,
a theory becomes meaningless when one can adjust it to any situation.
Another way could be a hierarchy of procedures to introduce nonreversibility
in a more cautious way. Each further step to introduce nonreversibility
is as simple as possible and each is related to time scaling, for
example, reversible dynamics is time scaled and projected on a subsystem.
We start to study here the simplest type of such procedures. In our
case the Markov generator will be the sum of two terms, \begin{equation}
H=H_{\mathit{rev}}+H_{\mathit{nonrev}},\label{rev-nonrev}\end{equation}
where the first one is reversible and the other one is not, but the
latter corresponds only to input and output processes. One of the technical
reasons to choose such a nonreversible Hamiltonian is to keep the invariance
of the manifolds $\mathfrak{M},\mathfrak{M}_{0},\mathfrak{M}_{0,\beta}$.
In principle, another philosophy is possible --- large deviation or
other rare-event conditioning; we do not discuss this here.
\medskip{}
\underline{\sl Example {\rm 1:} steady states for open unimolecular systems}
\noindent \smallskip{}
We consider the case with $J=2$ and unary reactions only; however,
the following assertions help to understand how more general open
systems can behave. Consider first the thermodynamic limit, and then
the stochastic finite volume problem.
In the thermodynamic limit the following equations for the concentrations
$c_{j}(t),j=1,2$, hold:
\[
\frac{dc_{1}}{dt}=-\nu_{1}c_{1}+\nu_{2}c_{2}+f_{1},\qquad\frac{dc_{2}}{dt}=\nu_{1}c_{1}-\nu_{2}c_{2}+f_{2},\]
where $\nu_{1}=u_{12},\;\nu_{2}=u_{21}$ and $f_{j}$ are defined
by (\ref{IOfunc1}). Possible positive (i.e., $c_{1},c_{2}>0$) fixed
points satisfy the following system:
\[
f_{1}(c_{1})+f_{2}(c_{2})=0,\qquad-\nu_{1}c_{1}+\nu_{2}c_{2}+f_{1}(c_{1})=0.\]
For example, for constant $f_{j}$ a positive fixed point exists for
any $c$ sufficiently large and equals\[
c_{1}=\frac{\nu_{2}c-f_{2}}{\nu_{1}+\nu_{2}},\qquad c_{2}=\frac{\nu_{1}c-f_{1}}{\nu_{1}+\nu_{2}}.\]
In the linear case, that is for $f_{j}=a_{j}c_{j}$, for the existence
of a positive fixed point it is necessary and sufficient that $a_{j}$
have different signs and $|a_{j}|<\nu_{1}+\nu_{2}$. Then the positive
fixed point is unique and is defined by\[
c_{1}=\frac{\nu_{2}c}{\nu_{1}+\nu_{2}-a_{1}}.\]
For faster than linear growth of $f_{j}$ fixed points cannot exist
for large $c$.
We see from these formulas that the equilibrium fixed point\[
c_{1}=\frac{\nu_{2}c}{\nu_{1}+\nu_{2}},\qquad c_{2}=\frac{\nu_{1}c}{\nu_{1}+\nu_{2}}\]
(for the corresponding closed system) is slightly perturbed if $f_{j}$
(or $a_{j}$) are small. Moreover, the perturbed fixed point is still
attractive. This is true in more general situations as well.
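A numerical check of the constant-$f_{j}$ fixed point formula (Python; the parameter values are arbitrary, chosen with $f_{1}+f_{2}=0$ so that $c$ is conserved):
\begin{verbatim}
import numpy as np

nu1, nu2, f1, f2 = 1.0, 2.0, 0.3, -0.3
c1, c2, dt = 1.0, 1.0, 1e-3
for _ in range(20000):                      # Euler integration to t = 20
    d1 = -nu1 * c1 + nu2 * c2 + f1
    d2 = nu1 * c1 - nu2 * c2 + f2
    c1, c2 = c1 + dt * d1, c2 + dt * d2
c = c1 + c2
assert abs(c1 - (nu2 * c - f2) / (nu1 + nu2)) < 1e-6   # fixed-point formula
\end{verbatim}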
Now consider the stochastic (finite volume) case.
\begin{proposition} Assume that $f_{j}$ are constants. In a finite
volume the process is ergodic if $\sum f_{j}<0$, transient if $\sum f_{j}>0$
and null recurrent if $\sum f_{j}=0$. \end{proposition}
\textbf{Proof} \
Note that the number of particles is conserved and the number of states
is finite if there is no I/O; otherwise the Markov chain is countable:
a random walk on $Z_{+}^{2}=\{(n_{1},n_{2}):n_{1},n_{2}\geq0\}$. There
are jumps $(n_{1},n_{2})\rightarrow(n_{1}-1,n_{2}+1)$ or $(n_{1},n_{2})\rightarrow(n_{1}+1,n_{2}-1)$
due to reactions; denote their rates $\nu_{1}n_{1},\nu_{2}n_{2}$
correspondingly. There are also jumps $(n_{1},n_{2})\rightarrow(n_{1}\pm1,n_{2}),(n_{1},n_{2})\rightarrow(n_{1},n_{2}\pm1)$
due to input-output, with the parameters $f_{j}^{(i)}\Lambda$ and $f_{j}^{(0)}\Lambda$
correspondingly.
Transience and ergodicity can be obtained using Lyapounov function
$n_{1}+n_{2}$ and the results from \cite{FaMaMe}. To prove null
recurrence, note that the system should stay in a neighbourhood of
the fixed point, which exists for $c$ sufficiently large. Thus one
can use the same Lyapounov function.
The general conclusion is that only the null recurrent case is interesting.
However, models with constant rates are too naive. It is reasonable
to expect regulation mechanisms which give a more complex dependence
of $f_{j}$ on the densities. Unfortunately, there is no firm theoretical
basis to get the exact dependence of reaction and I/O rates on the densities.
\medskip{}
\underline{\sl Example {\rm 2:} stochastic Michaelis\tire Menten kinetics}
\noindent \smallskip{}
The generator for Michaelis\tire Menten kinetics is of type (\ref{rev-nonrev})
only in some approximation. This model has 4 types of molecules: $E$
(enzyme), $S$ (substrate), $P$ (product) and $ES$ (substrate-enzyme
complex). There are 3 reactions \[
E+S\rightarrow ES,\quad ES\rightarrow E+S,\quad ES\rightarrow E+P\]
with the rates $k_{1}\Lambda^{-1}n_{E}n_{S},k_{-1}n_{ES},k_{2}n_{ES}$
respectively. We can also fix somehow the output rate for $P$
and input rate for $S$.
If $k_{2}=0$ then, as a zeroth approximation, we have a reversible
Markov chain. In fact, there are conservation laws \[
n_{E}+n_{ES}=m(E),\qquad n_{S}+n_{ES}=m(S)\]
for some constants $m(E),m(S)$. Thus we will have random walk for
one variable, say $n_{ES}$, on the interval $[0,\mathrm{min}(m(E),m(S))]$,
with jumps $n_{ES}\rightarrow n_{ES}\pm1$. Such random walks are
always reversible. The stationary probabilities for this random walk
are concentrated around the fixed point of the limiting equations
of the classical kinetics \begin{equation}
\frac{dc_{ES}}{dt}=k_{1}c_{S}c_{E}-(k_{-1}+k_{2})c_{ES}\label{MM1}\end{equation}
defined by\[
c_{ES}=\frac{c_{S}}{a+bc_{S}}\]
for some constants $a,b$, defined by $m(E),m(S)$. If $k_{2}>0$
but small compared to $k_{1},k_{-1},$ then up to the first order
in $k_{2}$ we have the $P$ production speed\[
\frac{dc_{P}}{dt}=k_{2}c_{ES}=k_{2}\frac{c_{S}}{a+bc_{S}}.\]
We can also view this kinetics as a simple random walk. We have
to introduce (arbitrarily) an output rate for the product $P$
and adjust the input rate of $S$ so that the system becomes null-recurrent.
In fact, due to the conservation law $n_{E}+n_{ES}=m(E)$ we have
random walk on the half strip $\left\{ (n_{S},n_{ES})\right\} =Z_{+}\times(0,m(E))$.
The null-recurrence condition can be obtained using the methods of \cite{FaMaMe};
we will not discuss this here.
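For completeness, a sketch of the stochastic Michaelis--Menten walk described above (Python; all counts and constants are toy values of ours):
\begin{verbatim}
import numpy as np

k1, km1, k2, Lam = 1.0, 1.0, 0.05, 100.0
mE, mS = 50, 200                            # conserved totals
nES, nP, t = 0, 0, 0.0
rng = np.random.default_rng(3)
while t < 50.0:
    nE, nS = mE - nES, mS - nES - nP        # conservation laws
    rates = np.array([k1 * nE * nS / Lam,   # E + S -> ES
                      km1 * nES,            # ES -> E + S
                      k2 * nES])            # ES -> E + P
    if rates.sum() == 0:
        break                               # substrate exhausted
    t += rng.exponential(1 / rates.sum())
    event = rng.choice(3, p=rates / rates.sum())
    nES += (1, -1, -1)[event]
    nP += (0, 0, 1)[event]
\end{verbatim}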
\section{Network of thermodynamic compartments}
We call the thermodynamic compartments introduced above {\em networks
of rank}~1. We saw that they have fixed points, and thermodynamics
plays the central role there. A compartment can be a tightly coupled
and/or spatially localized system of chemical reactions.
A network of rank 2 consists of vertices $\alpha$ --- networks of rank
1 --- and directed edges; that is, compartments are organized in a directed
graph. A directed edge from compartment $\alpha$ to compartment $\alpha^{\prime}$
means that there is a matter flow from $\alpha$ to $\alpha^{\prime}$.
Matter exchange between two compartments suggests some transport mechanism.
It is natural that there is a time delay between the moments of departure
from $\alpha$ and arrival at $\alpha^{\prime}$. The simplest probabilistic
model could be the following. Each $j$ type molecule leaves $\alpha$
for the destination $\alpha'$ with rate $f_{j,\alpha,\alpha'}$,
similar to those defined in (\ref{IOfunc}), and after some random time
$\tau(j,\alpha,\alpha')$ arrives at $\alpha^{\prime}$. The times $\tau(j,\alpha,\alpha')$
are independent and their distribution depends only on $j,\alpha,\alpha^{\prime}$.
One can imagine that there is an effective distance $L(\alpha,\alpha^{\prime})$
between $\alpha$ and $\alpha^{\prime}$ and some transportation mechanism,
which defines an effective speed to go through this distance. For example,
it can be transport through a membrane, which can be represented as
a layer $\left[0,L\right]\times R^{2}$ of thickness $L$. During the
time $\tau(j,\alpha,\alpha')$ the particle is absent from the network;
it has left $\alpha$ but has not yet arrived at $\alpha'$.
Denote by $c_{\alpha,j}$ the concentration of type $j$ molecules
in the compartment~$\alpha$. Limiting equations are \begin{align*}
\frac{dc_{\alpha',j}(t)}{dt} & =f_{\alpha',j}^{(i)}(c_{\alpha'}(t))-f_{\alpha',j}^{(0)}(c_{\alpha'}(t))+\sum_{\alpha}f_{j,\alpha,\alpha'}(c_{\alpha}(t-\tau(j,\alpha,\alpha')))\\
& \quad-\sum_{\alpha}f_{j,\alpha',\alpha}(c_{\alpha'}(t))+\sum_{r}\nu_{\alpha',jr}R_{\alpha',r}(c_{\alpha'}(t))\end{align*}
where $c_{\alpha}=(c_{\alpha,1},\dots,c_{\alpha,J})$, $f_{\alpha,j}^{(i)}$
is the input rate to $\alpha$ from external environment, $f_{\alpha,j}^{(0)}$
is the output rate from $\alpha$ to the external environment. Note
that these equations are random due to the random delay times $\tau$.
In a first approximation one can consider $\tau$ constant; however,
random time delays seem essential to restore randomness, on time
scales higher than microscopic, in the otherwise deterministic
classical chemical kinetics.
Note that the equations written above follow from a similar microscopic
model --- we will not formulate it formally, because it is obvious
from our previous constructions: the corresponding manifold is $\times_{\alpha\in A}\mathfrak{M}_{\alpha}$,
where $A$ is the set of compartments, $\mathfrak{M}_{\alpha}$ is
the manifold for the compartment $\alpha$.
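As an illustration, the rank-2 equations with a constant delay for two compartments exchanging a single species can be stepped with a history buffer (Python; the linear transport function and all numbers are our toy choices):
\begin{verbatim}
import numpy as np

tau, dt, steps = 1.0, 1e-2, 4000
lag = int(tau / dt)
c = np.empty((2, steps + lag + 1))
c[0, :lag + 1], c[1, :lag + 1] = 0.8, 0.2   # constant prehistory
flow = lambda x: 0.2 * x                    # f_{j,alpha,alpha'}, toy choice
for k in range(lag, steps + lag):
    for a in (0, 1):
        b = 1 - a                           # the other compartment
        dc = flow(c[b, k - lag]) - flow(c[a, k])  # delayed inflow - outflow
        c[a, k + 1] = c[a, k] + dt * dc
# c[:, -1] approaches the symmetric fixed point (0.5, 0.5)
\end{verbatim}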
The following problems and phase transitions can be discussed in the
defined model on a rigorous basis (work in progress):
1. The method of thermodynamic bounds in the thermodynamic networks,
defined in \cite{Mavr}.
2. (Phase transitions due to transport rates.) Normal functioning
of the network can be close to the system $\left\{ c_{\alpha,j,e}\right\} $
of equilibrium fixed points in each compartment $\alpha$. Such situation
can be called homeostasis. Homeostatic regulation --- keeping the
system close to some system $\left\{ c_{\alpha,j,e}\right\} $. If
there is no transport, then the compartments are independent and the
fixed points inside them are pure thermodynamic. Under some transport
rates the fixed points change in a stable way, they smoothly depend
on the transport parameters. However, under some change of the transport
rates, the fixed points may change drastically: the system goes to
other basin of attraction.
3. (Phase transition due to time desynchronization.) It is known now
that even a disease can be a consequence of timing errors. For a network
of rank~2, having for instance a cyclic topology (this is called a
circuit in \cite{Tho}), assume that the input rates change periodically
or randomly in time. The question is: to what process do the concentrations
converge, and with what speed? This time behaviour could be the next
step in the analysis of the structure of logical networks in the sense
of \cite{Tho}. | {"config": "arxiv", "file": "1112.3798.tex"}
TITLE: Show that $\left(1+\frac{1}{1^3}\right)\left(1+\frac{1}{2^3}\right)\left(1+\frac{1}{3^3}\right)\cdots\left(1+\frac{1}{n^3}\right) < 3$
QUESTION [6 upvotes]: I have this problem which says that for any positive integer $n$, $n \neq 0$ the following inequality is true: $$\left(1+\frac{1}{1^3}\right)\left(1+\frac{1}{2^3}\right)\left(1+\frac{1}{3^3}\right)\cdots\left(1+\frac{1}{n^3}\right) < 3$$
This problem was given to me in a lecture about induction, but any kind of solution would be nice. And also I'm in 10th grade :)
REPLY [1 votes]: Less Than $\boldsymbol{3}$
The inequality
$$
1+\frac1{n^3}\lt\frac{1+\frac1{2(n-1)^2}}{1+\frac1{2n^2}}\tag1
$$
can be verified by cross-multiplying and then multiplying both sides by $2n^5(n-1)^2$; that is,
$$
2n^7-4n^6+3n^5\underbrace{-3n^3+3n^2-2n+1}_\text{$-(3n^2+2)(n-1)-1\lt0$ for $n\ge1$}\lt2n^7-4n^6+3n^5\tag2
$$
Therefore, employing a telescoping product,
$$
\begin{align}
\prod_{n=1}^\infty\left(1+\frac1{n^3}\right)
&\lt2\prod_{n=2}^\infty\frac{1+\frac1{2(n-1)^2}}{1+\frac1{2n^2}}\\
&=2\cdot\frac32\\[9pt]
&=3\tag3
\end{align}
$$
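A quick numerical sanity check in Python (the second value anticipates the exact limit computed next):

    from math import cosh, pi, sqrt

    p = 1.0
    for n in range(1, 200001):
        p *= 1 + 1 / n**3
    print(p)                            # 2.42818... < 3
    print(cosh(pi * sqrt(3) / 2) / pi)  # 2.42818..., the limit below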
Actual Value
$$
\begin{align}
\lim_{n\to\infty}\prod_{k=1}^n\frac{k^3+1}{k^3}
&=\lim_{n\to\infty}\frac{\Gamma(n+2)\,\Gamma\!\left(n+\frac12+i\frac{\sqrt3}2\right)\Gamma\!\left(n+\frac12-i\frac{\sqrt3}2\right)}{\Gamma(2)\,\Gamma\!\left(\frac12+i\frac{\sqrt3}2\right)\Gamma\!\left(\frac12-i\frac{\sqrt3}2\right)\Gamma(n+1)^3}\tag4\\
&=\frac1{\Gamma\!\left(\frac12+i\frac{\sqrt3}2\right)\Gamma\!\left(\frac12-i\frac{\sqrt3}2\right)}\\
&\times\lim_{n\to\infty}\frac{\Gamma(n+2)\,\Gamma\!\left(n+\frac12+i\frac{\sqrt3}2\right)\Gamma\!\left(n+\frac12-i\frac{\sqrt3}2\right)}{\Gamma(n+1)^3}\tag5\\
&=\frac{\sin\left(\frac\pi2+i\frac{\pi\sqrt3}2\right)}{\pi}\times1\tag6\\[6pt]
&=\frac{\cosh\left(\frac{\pi\sqrt3}2\right)}{\pi}\tag7
\end{align}
$$
Explanation:
$(4)$: $\prod\limits_{k=1}^n(k+x)=\frac{\Gamma(n+1+x)}{\Gamma(1+x)}$ and $k^3+1=(k+1)\left(k-\frac12+i\frac{\sqrt3}2\right)\left(k-\frac12-i\frac{\sqrt3}2\right)$
$(5)$: pull out the constant factor using $\Gamma(2)=1$
$(6)$: apply Euler's Reflection Formula $\Gamma(x)\,\Gamma(1-x)=\frac\pi{\sin(\pi x)}$
$\phantom{(6)\text{:}}$ and Gautschi's Inequality, which implies $\lim\limits_{n\to\infty}\frac{\Gamma(n+x)}{\Gamma(n)\,n^x}=1$
$(7)$: $\cos(ix)=\cosh(x)$ | {"set_name": "stack_exchange", "score": 6, "question_id": 2970739} |
TITLE: A limit involving sinh
QUESTION [2 upvotes]: I'm trying to show that $$\lim_{u\to 0}\frac{\partial}{\partial u}\frac{\sinh(y\sqrt{2u}) + \sinh(x\sqrt{2u})}{\sinh((x + y)\sqrt{2u})} = -xy$$
The method I was trying resulted in pages and pages of messy computations, and I'm doubtful that this is the best way to go about it. Any ideas would be appreciated.
-- Thanks.
REPLY [0 votes]: Let $a=(x-y)/\sqrt2$ and $b=(x+y)/\sqrt2$. By the sum-to-product identity $\sinh P+\sinh Q=2\sinh\frac{P+Q}2\cosh\frac{P-Q}2$ (and, for the denominator, its doubling case $P=Q$), the ratio simplifies to $\cosh(a\sqrt u)/\cosh(b\sqrt u)$. Then
\begin{align*}
&\lim_{u\to0}\frac{\partial}{\partial u} \frac{\sinh(x\sqrt{2u}) + \sinh(y\sqrt{2u})}{\sinh((x+y)\sqrt{2u})} \\
&= \lim_{u\to0}\frac{\partial}{\partial u} \frac{\cosh(a\sqrt u)}{\cosh(b\sqrt u)} \\
&= \lim_{u\to0}\frac{a\Bigl(\overbrace{\frac{\sinh(a\sqrt u)}{2\sqrt u}}^{\to a/2}\Bigr)\cosh(b\sqrt u) - b\cosh(a\sqrt u)\Bigl(\overbrace{\frac{\sinh(b\sqrt u)}{2\sqrt u}}^{\to b/2}\Bigr)}{\cosh^2(b\sqrt u)} \\
&= \frac{a^2-b^2}{2} \\
&= -xy
\end{align*} | {"set_name": "stack_exchange", "score": 2, "question_id": 1026882} |
TITLE: Resolve $ \frac{120}{x+y} + \frac{60}{x-y} = 6;\,\frac{80}{x+y} + \frac{100}{x-y} = 7$
QUESTION [1 upvotes]: I want to solve this system of equations:
$$\begin{cases} \frac{120}{x+y} + \frac{60}{x-y} = 6 \\\frac{80}{x+y} + \frac{100}{x-y} = 7\end{cases}$$
I came to equations like
$$x - \frac{10x}{x-y} + y - \frac{10y}{x-y} = 20$$
and
$$-2xy - y^2 - 10y = 20 - x^2 -10x$$
I need to isolate $x$ or $y$ and didn't succeed. Any help?
REPLY [1 votes]: You could also solve by eliminating one of the variables:
Multiply the first equation by 100 and the second by 60:
$$ \frac{12000}{x+y} + \frac{6000}{x-y}= 600$$
$$ \frac{4800}{x+y} + \frac{6000}{x-y}= 420$$
Subtract them from each other and get rid of $\frac{6000}{x-y}$:
$$ x+y = \frac{7200}{180} = 40$$
Substitute $x+y$ into either of your original equations:
$$ \frac{120}{x+y} + \frac{60}{x-y}= 6 \implies \frac{120}{40} + \frac{60}{x-y}= 6 $$
$$ x-y = 20$$
Now solve these two simultaneous equations:
$$ x+y = 40, \ x-y = 20$$
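(Optional: a quick check of the final answer with sympy, if you have Python handy.)

    from sympy import symbols, solve

    x, y = symbols('x y')
    eqs = [120/(x + y) + 60/(x - y) - 6, 80/(x + y) + 100/(x - y) - 7]
    print(solve(eqs, [x, y]))   # expect x = 30, y = 10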
Trust you can finish this off by hand. | {"set_name": "stack_exchange", "score": 1, "question_id": 1713333}
\section{Laxness conditions}
The notion of bistable pseudofunctor produces a convenient treatment of invertible 2-cells; however, one may ask how non-invertible 2-cells are to be factorized --- those as below:
\[\begin{tikzcd}
B && {U(A)}
\arrow[""{name=0, anchor=center, inner sep=0}, "{f_1}", bend left=25, start anchor=45, from=1-1, to=1-3]
\arrow[""{name=1, anchor=center, inner sep=0}, "{f_2}"', bend right=25, start anchor=-45, from=1-1, to=1-3]
\arrow["{ \Downarrow \sigma }"{description}, draw=none, from=0, to=1]
\end{tikzcd}\]
or also whether one can diagonalize lax squares and not just pseudosquares. The answer is that it may depend on further laxness conditions one might stipulate.\\
A possible laxness condition is described in \cite{walker2020lax} under the notion of \emph{lax-familial} pseudofunctors. These are also defined relative to a class of generic morphisms, but this time with a more general lax-orthogonality condition relative to lax squares. As this notion is important in itself and is related (see \cite{walker2020lax}) to an elegant decomposition of the conerve into a lax bicolimit of representables, we choose to give a rather detailed account of it here, together with a few properties that are not yet found elsewhere. However, this notion will not apply to our examples, where factorizations of lax cells do exist, yet proceed in a totally different way.
\subsection{Lax generic cells}
\begin{definition}
Let $ U : \mathcal{A} \rightarrow \mathcal{B}$ a pseudofunctor. A \emph{lax $U$-generic morphism}\index{lax-generic morphism} is a 1-cell $n: B \rightarrow U(A)$ in $ \mathcal{B}$ such that for any pseudosquare
\[\begin{tikzcd}
B & {U(A_1)} \\
{U(A)} & {U(A_2)}
\arrow["n"', from=1-1, to=2-1]
\arrow[""{name=0, anchor=center, inner sep=0}, "f", from=1-1, to=1-2]
\arrow["{U(u)}", from=1-2, to=2-2]
\arrow[""{name=1, anchor=center, inner sep=0}, "{U(v)}"', from=2-1, to=2-2]
\arrow["{\sigma \Uparrow}"{description}, Rightarrow, draw=none, from=0, to=1]
\end{tikzcd}\]
there exists a 1-cell $ w_\sigma : A \rightarrow A_1$ in $\mathcal{A}$, unique up to a unique invertible 2-cell, and a unique pair of 2-cells $ \nu_\sigma$ in $\mathcal{B}$ and $ \omega_\sigma $ in $\mathcal{A}$ such that $ \sigma $ decomposes as the pasting
\[\begin{tikzcd}[sep=large]
B & {U(A_1)} \\
{U(A)} & {U(A_2)}
\arrow[""{name=0, anchor=center, inner sep=0}, "n"', from=1-1, to=2-1]
\arrow[""{name=1, anchor=center, inner sep=0}, "f", from=1-1, to=1-2]
\arrow[""{name=2, anchor=center, inner sep=0}, "{U(u)}", from=1-2, to=2-2]
\arrow[""{name=3, anchor=center, inner sep=0}, "{U(v)}"', from=2-1, to=2-2]
\arrow["{U(w_\sigma)}"{description}, from=2-1, to=1-2]
\arrow["{\nu_\sigma \Uparrow}"{description}, Rightarrow, draw=none, from=1, to=0]
\arrow["{ \Uparrow U(\omega_\sigma)}"{description}, Rightarrow, draw=none, from=2, to=3]
\end{tikzcd}\]
and moreover those data are universal in the sense that for any other factorization of this square
\[\begin{tikzcd}[sep=large]
B & {U(A_1)} \\
{U(A)} & {U(A_2)}
\arrow[""{name=0, anchor=center, inner sep=0}, "n"', from=1-1, to=2-1]
\arrow[""{name=1, anchor=center, inner sep=0}, "f", from=1-1, to=1-2]
\arrow[""{name=2, anchor=center, inner sep=0}, "{U(u)}", from=1-2, to=2-2]
\arrow[""{name=3, anchor=center, inner sep=0}, "{U(v)}"', from=2-1, to=2-2]
\arrow["{U(w)}"{description}, from=2-1, to=1-2]
\arrow["{\nu \Uparrow}"{description}, Rightarrow, draw=none, from=1, to=0]
\arrow["{ \Uparrow U(\omega)}"{description}, Rightarrow, draw=none, from=2, to=3]
\end{tikzcd}\]
there exists a unique 2-cell $ \xi : w \Rightarrow w_\sigma $ such that we have the factorizations
\[\begin{tikzcd}[sep=large]
B & {U(A_1)} \\
{U(A)}
\arrow["n"', from=1-1, to=2-1]
\arrow["f", from=1-1, to=1-2]
\arrow[""{name=0, anchor=center, inner sep=0}, "{U(w_\sigma)}"{description}, from=2-1, to=1-2]
\arrow[""{name=1, anchor=center, inner sep=0}, "U(w)"', curve={height=24pt}, from=2-1, to=1-2]
\arrow["U(\xi)", shorten <=3pt, shorten >=3pt, Rightarrow, from=1, to=0]
\arrow["{\Uparrow \nu_\sigma}"{description}, Rightarrow, draw=none, from=1-1, to=0]
\end{tikzcd} = \begin{tikzcd}[sep=large]
B & {U(A_1)} \\
{U(A)}
\arrow["n"', from=1-1, to=2-1]
\arrow["f", from=1-1, to=1-2]
\arrow[""{name=0, anchor=center, inner sep=0}, "U(w)"', from=2-1, to=1-2]
\arrow["{\Uparrow\nu}"{description}, Rightarrow, draw=none, from=1-1, to=0]
\end{tikzcd}\]
\[\begin{tikzcd}[sep=large]
& {A_1} \\
A & {A_2}
\arrow[""{name=0, anchor=center, inner sep=0}, "w_\sigma", from=2-1, to=1-2]
\arrow["u", from=1-2, to=2-2]
\arrow["v"', from=2-1, to=2-2]
\arrow["{\omega_\sigma \Uparrow}"{description}, Rightarrow, draw=none, from=0, to=2-2]
\end{tikzcd} =
\begin{tikzcd}[sep=large]
& {A_1} \\
A & {A_2}
\arrow[""{name=0, anchor=center, inner sep=0}, "w_\sigma", curve={height=-18pt}, from=2-1, to=1-2]
\arrow["u", from=1-2, to=2-2]
\arrow["v"', from=2-1, to=2-2]
\arrow["w"{name=1, anchor=center, inner sep=0, description}, from=2-1, to=1-2]
\arrow["\xi", shorten <=3pt, shorten >=3pt, Rightarrow, from=1, to=0]
\arrow["{\omega \Uparrow}"{description}, Rightarrow, draw=none, from=1, to=2-2]
\end{tikzcd}\]
Moreover the 2-cells $ \nu_\sigma$ and $ \omega_\sigma$ must be invertible as soon as $\sigma$ is.
\end{definition}
\begin{definition}
A lax generic morphism is \emph{functorially generic} if for any morphism of pseudosquares in $\mathcal{B}//U$ as below
\[\begin{tikzcd}
B && {U(A_1)} \\
{U(A)} && {U(A_2)}
\arrow["n"', from=1-1, to=2-1]
\arrow[""{name=0, anchor=center, inner sep=0}, "{f_1}"{description}, from=1-1, to=1-3]
\arrow["{U(u)}", from=1-3, to=2-3]
\arrow[""{name=1, anchor=center, inner sep=0}, "{U(v_1)}"', from=2-1, to=2-3]
\arrow[""{name=2, anchor=center, inner sep=0}, "{f_2}", start anchor=40, bend left=30, from=1-1, to=1-3]
\arrow["{\phi \Uparrow}"{description}, shorten <=2pt, shorten >=2pt, draw=none, Rightarrow, from=2, to=0]
\arrow["{\sigma_1 \Uparrow}"{description}, Rightarrow, draw=none, from=0, to=1]
\end{tikzcd}=
\begin{tikzcd}
B && {U(A_1)} \\
{U(A)} && {U(A_2)}
\arrow["n"', from=1-1, to=2-1]
\arrow["{U(u)}", from=1-3, to=2-3]
\arrow[""{name=0, anchor=center, inner sep=0}, "{U(v_1)}"', curve={height=18pt}, from=2-1, to=2-3]
\arrow[""{name=1, anchor=center, inner sep=0}, "{f_2}", from=1-1, to=1-3]
\arrow[""{name=2, anchor=center, inner sep=0}, "{U(v_2)}"{description}, from=2-1, to=2-3]
\arrow["{\sigma_2 \Uparrow}"{description}, Rightarrow, draw=none, from=1, to=2]
\arrow["{U(\gamma) \Uparrow}"{description}, Rightarrow, draw=none, from=2, to=0]
\end{tikzcd}\]
there is a unique 2-cell $\xi : w_{\sigma_1} \Rightarrow w_{\sigma_2}$ in $ \mathcal{A}$ between the corresponding fillers such that
\[\begin{tikzcd}[sep=large]
B & {U(A_1)} \\
{U(A)}
\arrow["n"', from=1-1, to=2-1]
\arrow["{f_2}", from=1-1, to=1-2]
\arrow[""{name=0, anchor=center, inner sep=0}, "{U(w_{\sigma_2})}"{description}, from=2-1, to=1-2]
\arrow[""{name=1, anchor=center, inner sep=0}, "{U(w_{\sigma_1})}"', curve={height=24pt}, from=2-1, to=1-2]
\arrow["{\nu_{\sigma_2} \Uparrow}"{description}, Rightarrow, draw=none, from=1-1, to=0]
\arrow["U(\xi)", shorten <=3pt, shorten >=3pt, Rightarrow, from=1, to=0]
\end{tikzcd} =
\begin{tikzcd}[sep=large]
B & {U(A_1)} \\
{U(A)}
\arrow["n"', from=1-1, to=2-1]
\arrow[""{name=0, anchor=center, inner sep=0}, "{f_2}", curve={height=-18pt}, from=1-1, to=1-2]
\arrow[""{name=1, anchor=center, inner sep=0}, "{U(w_{\sigma_1})}"', from=2-1, to=1-2]
\arrow[""{name=2, anchor=center, inner sep=0}, "{f_1}"{description}, from=1-1, to=1-2]
\arrow["{\phi \Uparrow}"{description}, Rightarrow, draw=none, from=0, to=2]
\arrow["{\nu_{\sigma_1} \Uparrow}"{description}, Rightarrow, draw=none, from=1-1, to=1]
\end{tikzcd}\]
\[\begin{tikzcd}[sep=large]
& {A_1} \\
A & {A_2}
\arrow[""{name=0, anchor=center, inner sep=0}, "{w_{\sigma_1}}"{description}, from=2-1, to=1-2]
\arrow[""{name=1, anchor=center, inner sep=0}, "{w_{\sigma_2}}", curve={height=-18pt}, from=2-1, to=1-2]
\arrow["u", from=1-2, to=2-2]
\arrow["{v_1}"', from=2-1, to=2-2]
\arrow["\xi"', shorten <=3pt, shorten >=3pt, Rightarrow, from=0, to=1]
\arrow["{\omega_{\sigma_1} \Uparrow}"{description, pos=0.7}, Rightarrow, draw=none, from=0, to=2-2]
\end{tikzcd} =
\begin{tikzcd}[sep=large]
& {A_1} \\
A & {A_2}
\arrow[""{name=0, anchor=center, inner sep=0}, "{w_{\sigma_1}}", from=2-1, to=1-2]
\arrow["u", from=1-2, to=2-2]
\arrow[""{name=1, anchor=center, inner sep=0}, "{v_1}"', curve={height=18pt}, from=2-1, to=2-2]
\arrow[""{name=2, anchor=center, inner sep=0}, "{v_2}"{description}, from=2-1, to=2-2]
\arrow["{\gamma \Uparrow}"{description}, Rightarrow, draw=none, from=2, to=1]
\arrow["{\omega_{\sigma_1} \Uparrow}"{description, pos=0.6}, shift left=1, Rightarrow, draw=none, from=0, to=2]
\end{tikzcd}\]
\end{definition}
\begin{remark}
In the following we shall require --- and from now on assume --- any lax generic morphism to be functorially generic. We conjecture that any lax generic morphism is automatically functorially generic, though we choose not to go into such a discussion and instead incorporate this condition into the definition.
\begin{definition}
A \emph{$U$-lax generic 2-cell} is a 2-cell as below with $ \nu$ an lax generic 1-cell
\[\begin{tikzcd}
B & {U(A_1)} \\
{U(A)}
\arrow[""{name=0, anchor=center, inner sep=0}, "n"', from=1-1, to=2-1]
\arrow[""{name=1, anchor=center, inner sep=0}, "f", from=1-1, to=1-2]
\arrow["{U(u)}"', from=2-1, to=1-2]
\arrow["{\nu \Uparrow}"{description}, Rightarrow, draw=none, from=1, to=0]
\end{tikzcd}\]
such that we have the following two conditions: \begin{itemize}
\item For any factorizations of $ \nu$ as a pasting of the following form
\[\begin{tikzcd}
B & {U(A_1)} \\
{U(A)}
\arrow[""{name=0, anchor=center, inner sep=0}, "n"', from=1-1, to=2-1]
\arrow[""{name=1, anchor=center, inner sep=0}, "f", from=1-1, to=1-2]
\arrow[""{name=2, anchor=center, inner sep=0}, "{U(v)}"{description}, from=2-1, to=1-2]
\arrow[""{name=3, anchor=center, inner sep=0}, "{U(u)}"', curve={height=18pt}, from=2-1, to=1-2]
\arrow["{\lambda \Uparrow}"{description}, Rightarrow, draw=none, from=1, to=0]
\arrow["{U(\zeta)}", shorten <=3pt, shorten >=3pt, Rightarrow, from=3, to=2]
\end{tikzcd}\]
there exists a unique 2-cell $ \xi : v \Rightarrow u$ which is a section of $ \zeta$ in $\mathcal{A}[A,A_1]$, that is $ \zeta\xi=1_{v}$, and such that we have a factorization of $ \lambda$ as
\[\begin{tikzcd}
B & {U(A_1)} \\
{U(A)}
\arrow[""{name=0, anchor=center, inner sep=0}, "n"', from=1-1, to=2-1]
\arrow[""{name=1, anchor=center, inner sep=0}, "f", from=1-1, to=1-2]
\arrow[""{name=2, anchor=center, inner sep=0}, "{U(u)}"{description}, from=2-1, to=1-2]
\arrow[""{name=3, anchor=center, inner sep=0}, "{U(v)}"', curve={height=18pt}, from=2-1, to=1-2]
\arrow["{\nu \Uparrow}"{description}, Rightarrow, draw=none, from=1, to=0]
\arrow["{U(\zeta)}", shorten <=3pt, shorten >=3pt, Rightarrow, from=3, to=2]
\end{tikzcd}\]
\item Any parallel pair of 2-cells in $\mathcal{A}$ whose image along $U$ are equalized by $\nu$ as depicted below
\[\begin{tikzcd}
B & {U(A_1)} \\
{U(A)}
\arrow[""{name=0, anchor=center, inner sep=0}, "n"', from=1-1, to=2-1]
\arrow[""{name=1, anchor=center, inner sep=0}, "f", from=1-1, to=1-2]
\arrow[""{name=2, anchor=center, inner sep=0}, "{U(u)}"{description}, from=2-1, to=1-2]
\arrow[""{name=3, anchor=center, inner sep=0}, "{U(v)}"', curve={height=18pt}, from=2-1, to=1-2]
\arrow["{\nu \Uparrow}"{description}, Rightarrow, draw=none, from=1, to=0]
\arrow["{U(\xi)}", shorten <=3pt, shorten >=3pt, Rightarrow, from=3, to=2]
\end{tikzcd}
=
\begin{tikzcd}
B & {U(A_1)} \\
{U(A)}
\arrow[""{name=0, anchor=center, inner sep=0}, "n"', from=1-1, to=2-1]
\arrow[""{name=1, anchor=center, inner sep=0}, "f", from=1-1, to=1-2]
\arrow[""{name=2, anchor=center, inner sep=0}, "{U(u)}"{description}, from=2-1, to=1-2]
\arrow[""{name=3, anchor=center, inner sep=0}, "{U(v)}"', curve={height=18pt}, from=2-1, to=1-2]
\arrow["{\nu \Uparrow}"{description}, Rightarrow, draw=none, from=1, to=0]
\arrow["{U(\zeta)}", shorten <=3pt, shorten >=3pt, Rightarrow, from=3, to=2]
\end{tikzcd}
\]
must actually already be equal in $\mathcal{A}$.
\end{itemize}
\end{definition}
\subsection{Lax familial pseudofunctor}
\begin{definition}
A pseudofunctor $ U : \mathcal{A} \rightarrow \mathcal{B}$ is \emph{lax familial}\index{lax familial} (or also \emph{lax stable}, to remain coherent with our terminology) if we have the following conditions:\begin{itemize}
\item any arrow of the form $ f: B \rightarrow U(A)$ admits a bifactorization
\[
\begin{tikzcd}
B \arrow[rr, "f"] \arrow[rd, "n_f"'] & {} \arrow[d, "\nu_f \atop \simeq", phantom] & U(A) \\
& U(A_f) \arrow[ru, "U(u_f)"'] &
\end{tikzcd} \]
with $\nu_f: f \simeq U(u_f) n_f $ invertible and $ n_f$ a $U$-lax generic 1-cell.
\item Generic 2-cells compose: that is, for a composite 2-cell as below
\[\begin{tikzcd}
{B_1} & {B_2} & {B_3} \\
{U(A_1)} & {U(A_2)} & {U(A_3)}
\arrow["{n_1}"', from=1-1, to=2-1]
\arrow[""{name=0, anchor=center, inner sep=0}, "{f_1}", from=1-1, to=1-2]
\arrow["{n_2}"{description}, from=1-2, to=2-2]
\arrow[""{name=1, anchor=center, inner sep=0}, "{U(u_1)}"', from=2-1, to=2-2]
\arrow[""{name=2, anchor=center, inner sep=0}, "{U(u_2)}"', from=2-2, to=2-3]
\arrow[""{name=3, anchor=center, inner sep=0}, "{f_2}", from=1-2, to=1-3]
\arrow["{n_3}", from=1-3, to=2-3]
\arrow["{\nu_1 \Uparrow}"{description}, Rightarrow, draw=none, from=0, to=1]
\arrow["{\nu_2 \Uparrow}"{description}, Rightarrow, draw=none, from=3, to=2]
\end{tikzcd}\]
with $ \nu_1$ and $ \nu_2$ generic 2-cells, the following composite is also generic:
\[\begin{tikzcd}
{B_1} &&& {U(A_3)} \\
{U(A_1)}
\arrow["{n_1}"', from=1-1, to=2-1]
\arrow["{n_3f_2f_1}", from=1-1, to=1-4]
\arrow[""{name=0, anchor=center, inner sep=0}, "{U(u_2u_1)}"', curve={height=12pt}, from=2-1, to=1-4]
\arrow["{\Uparrow \nu_2*f_1 u(u_2)*\nu_1 \, }"{description}, shift left=1, Rightarrow, draw=none, from=1-1, to=0]
\end{tikzcd}\]
\item the identity 2-cells as below are generic:
\[\begin{tikzcd}
B & {U(A)} \\
{U(A)}
\arrow["n"', from=1-1, to=2-1]
\arrow["n", from=1-1, to=1-2]
\arrow[Rightarrow, no head, from=2-1, to=1-2]
\end{tikzcd}\]
\end{itemize}
\end{definition}
\begin{remark}
The first two conditions correspond to the definition in \cite{walker2020lax}; however, the last item appears to be required for our purpose;
it is not clear whether it can be deduced from the others.
First of all, it is important to note that such a factorization, which we shall call the \emph{generic factorization}, is unique up to a unique equivalence and invertible 2-cells:
\begin{lemma}\label{uniqueness of generic factorization}
Let $ (\nu_1, n_1, u_1)$ and $ (\nu_2, n_2, u_2)$ be two factorizations of the same $f : B \rightarrow U(A)$. Then we have an equivalence $ e : A_1 \simeq A_2$ in $\mathcal{A}$, unique up to a unique invertible 2-cell, together with invertible 2-cells $ n_1 \simeq U(e)n_2$ and $ u_2 \simeq u_1e $.
\end{lemma}
\begin{proof}
This is obtained by using respectively the generic property of $n_1 $ and $n_2$ at the invertible $\nu_2\nu_1^{-1} $ and $ \nu_1^{-1}\nu_2 $ as below
\[\begin{tikzcd}
B & {U(A_2)} \\
{U(A_1)} & {U(A)}
\arrow[""{name=0, anchor=center, inner sep=0}, "{n_1}"', from=1-1, to=2-1]
\arrow[""{name=1, anchor=center, inner sep=0}, "{n_2}", from=1-1, to=1-2]
\arrow[""{name=2, anchor=center, inner sep=0}, "{U(u_1)}"', from=2-1, to=2-2]
\arrow[""{name=3, anchor=center, inner sep=0}, "{U(u_2)}", from=1-2, to=2-2]
\arrow[from=1-1, to=2-2]
\arrow["{ \Uparrow {\nu_2 \atop\simeq}}"{description}, shift left=1, Rightarrow, draw=none, from=3, to=1]
\arrow["{{\nu_1^{-1} \atop \simeq} \Uparrow}"{description}, shift right=1, Rightarrow, draw=none, from=2, to=0]
\end{tikzcd} \hskip1cm \begin{tikzcd}
B & {U(A_1)} \\
{U(A_2)} & {U(A)}
\arrow[""{name=0, anchor=center, inner sep=0}, "{n_2}"', from=1-1, to=2-1]
\arrow[""{name=1, anchor=center, inner sep=0}, "{n_1}", from=1-1, to=1-2]
\arrow[""{name=2, anchor=center, inner sep=0}, "{U(u_2)}"', from=2-1, to=2-2]
\arrow[""{name=3, anchor=center, inner sep=0}, "{U(u_1)}", from=1-2, to=2-2]
\arrow[from=1-1, to=2-2]
\arrow["{{\nu_2^{-1} \atop \simeq} \Uparrow}"{description}, shift right=1, Rightarrow, draw=none, from=2, to=0]
\arrow["{ \Uparrow {\nu_1 \atop\simeq}}"{description}, shift left=1, Rightarrow, draw=none, from=3, to=1]
\end{tikzcd} \]
Then one gets two factorizations of $ n_1$ through generic morphisms
\[\begin{tikzcd}
B & {U(A_1)} \\
{U(A_1)}
\arrow["{n_1}"', from=1-1, to=2-1]
\arrow["{n_1}", from=1-1, to=1-2]
\arrow[Rightarrow, no head, from=2-1, to=1-2]
\end{tikzcd} \hskip1cm
\begin{tikzcd}
B && {U(A_1)} \\
& {U(A_2)} \\
{U(A_1)}
\arrow[""{name=0, anchor=center, inner sep=0}, "{n_1}"', from=1-1, to=3-1]
\arrow[""{name=1, anchor=center, inner sep=0}, "{n_1}", from=1-1, to=1-3]
\arrow["{n_2}"{description}, from=1-1, to=2-2]
\arrow["{U(w_{\nu_1\nu_2^{-1} })}"', from=3-1, to=2-2]
\arrow["{U(w_{\nu_2\nu_1^{-1} })}"', from=2-2, to=1-3]
\arrow["{\nu_{\nu_1\nu_2^{-1} } \atop \simeq}"{description}, Rightarrow, draw=none, from=0, to=2-2]
\arrow["{\nu_{\nu_2\nu_1^{-1} } \atop \simeq}"{description}, Rightarrow, draw=none, from=1, to=2-2]
\end{tikzcd}\]
which are related (by functoriality of the diagonalizations relative to the morphisms of lax squares) by 2-cellular factorizations
\[\begin{tikzcd}
B && {U(A_1)} \\
\\
{U(A_1)}
\arrow["{n_1}"', from=1-1, to=3-1]
\arrow["{n_1}", from=1-1, to=1-3]
\arrow[""{name=0, anchor=center, inner sep=0}, "{U(w_{\nu_1^{-1}\nu_2 }w_{\nu_2^{-1}\nu_1 })}"', curve={height=18pt}, from=3-1, to=1-3]
\arrow[""{name=1, anchor=center, inner sep=0}, Rightarrow, no head, from=3-1, to=1-3]
\arrow["\xi"', shorten <=3pt, shorten >=3pt, Rightarrow, from=1, to=0]
\end{tikzcd} = \begin{tikzcd}[sep=large]
B && {U(A_1)} \\
\\
{U(A_1)}
\arrow["{n_1}"', from=1-1, to=3-1]
\arrow["{n_1}", from=1-1, to=1-3]
\arrow[""{name=0, anchor=center, inner sep=0}, "{U(w_{\nu_2\nu_1^{-1} }w_{\nu_1\nu_2^{-1} })}"', from=3-1, to=1-3]
\arrow["{\nu_{\nu_2\nu_1^{-1}}\nu_{\nu_1\nu_2^{-1} } \atop\simeq}"{description}, Rightarrow, draw=none, from=1-1, to=0]
\end{tikzcd}\]
\[\begin{tikzcd}[sep=large]
B && {U(A_1)} \\
\\
{U(A_1)}
\arrow["{n_1}"', from=1-1, to=3-1]
\arrow["{n_1}", from=1-1, to=1-3]
\arrow[""{name=0, anchor=center, inner sep=0}, "{U(w_{\nu_2\nu_1^{-1} }w_{\nu_1\nu_2^{-1} })}"{description}, from=3-1, to=1-3]
\arrow[""{name=1, anchor=center, inner sep=0}, curve={height=30pt}, Rightarrow, no head, from=3-1, to=1-3]
\arrow["\scriptsize{\nu_{\nu_2\nu_1^{-1}}\nu_{\nu_1\nu_2^{-1} } \atop\simeq}"{description}, Rightarrow, draw=none, from=1-1, to=0]
\arrow["\zeta"', shorten <=4pt, shorten >=4pt, Rightarrow, from=0, to=1]
\end{tikzcd} = \begin{tikzcd}
B & {U(A_1)} \\
{U(A_1)}
\arrow["{n_1}"', from=1-1, to=2-1]
\arrow["{n_1}", from=1-1, to=1-2]
\arrow[Rightarrow, no head, from=2-1, to=1-2]
\end{tikzcd} \]
But then, by genericity, those comparison 2-cells both have retracts, which entails their invertibility: hence the composite $ w_{\nu_2\nu_1^{-1} }w_{\nu_1\nu_2^{-1} }$ is an equivalence. The same argument proves that the composite $ w_{\nu_1\nu_2^{-1} }w_{\nu_2\nu_1^{-1} }$ is also an equivalence.
\end{proof}
Though the following result is immediate from the retraction properties in the definition of lax generic 2-cells, it is worth visualizing it once and for all, as we are going to use it later:
\begin{lemma}\label{mutually factorizing generics}
Two lax generic 2-cells that factorize through each other are isomorphic: if one has
\[\begin{tikzcd}[sep=large]
B & {U(A')} \\
{U(A)}
\arrow["n"', from=1-1, to=2-1]
\arrow["f", from=1-1, to=1-2]
\arrow[""{name=0, anchor=center, inner sep=0}, "{U(u_1)}"', curve={height=24pt}, from=2-1, to=1-2]
\arrow[""{name=1, anchor=center, inner sep=0}, "{U(u_2)}"{description}, from=2-1, to=1-2]
\arrow["{\Uparrow \nu_2}"{description}, Rightarrow, draw=none, from=1-1, to=1]
\arrow["{U(\zeta)}", shorten <=2pt, shorten >=2pt, Rightarrow, from=0, to=1]
\end{tikzcd} = \begin{tikzcd}
B & {U(A')} \\
{U(A)}
\arrow["n"', from=1-1, to=2-1]
\arrow["f", from=1-1, to=1-2]
\arrow[""{name=0, anchor=center, inner sep=0}, "{U(u_1)}"', from=2-1, to=1-2]
\arrow["{\Uparrow \nu_1}"{description}, Rightarrow, draw=none, from=1-1, to=0]
\end{tikzcd}\]
\[\begin{tikzcd}[sep=large]
B & {U(A')} \\
{U(A)}
\arrow[""{name=0, anchor=center, inner sep=0}, "{U(u_1)}"{description}, from=2-1, to=1-2]
\arrow["n"', from=1-1, to=2-1]
\arrow["f", from=1-1, to=1-2]
\arrow[""{name=1, anchor=center, inner sep=0}, "{U(u_2)}"', curve={height=24pt}, from=2-1, to=1-2]
\arrow["{\Uparrow \nu_1}"{description}, Rightarrow, draw=none, from=1-1, to=0]
\arrow["{U(\xi)}", shorten <=2pt, shorten >=2pt, Rightarrow, from=1, to=0]
\end{tikzcd} = \begin{tikzcd}
B & {U(A')} \\
{U(A)}
\arrow["n"', from=1-1, to=2-1]
\arrow["f", from=1-1, to=1-2]
\arrow[""{name=0, anchor=center, inner sep=0}, "{U(u_2)}"', from=2-1, to=1-2]
\arrow["{\Uparrow \nu_2}"{description}, Rightarrow, draw=none, from=1-1, to=0]
\end{tikzcd}\]
then in fact $ \zeta$ and $\xi$ are mutual inverses so that $ u_1 \simeq u_2$.
\end{lemma}
\begin{proposition}\label{universal property of the lax generic factorization}
If $ U$ is lax familial, then for a 1-cell $ f : B \rightarrow U(A)$ and any 2-cell of the form
\[\begin{tikzcd}
B && {U(A)} \\
& {U(A')}
\arrow[""{name=0, anchor=center, inner sep=0}, "f", from=1-1, to=1-3]
\arrow["g"', from=1-1, to=2-2]
\arrow["{U(u)}"', from=2-2, to=1-3]
\arrow["{\sigma \Downarrow}"{description}, Rightarrow, draw=none, from=2-2, to=0]
\end{tikzcd}\]
there exists a triple $(m_\sigma, \nu_\sigma, \omega_\sigma)$, unique up to a unique invertible 2-cell, such that $ \sigma$ decomposes as the following pasting
\[\begin{tikzcd}[sep=large]
B & {U(A_f)} & {U(A)} \\
& {U(A')}
\arrow[""{name=0, anchor=center, inner sep=0}, "g"', from=1-1, to=2-2]
\arrow[""{name=1, anchor=center, inner sep=0}, "{U(u)}"', from=2-2, to=1-3]
\arrow["{n_f}"{description}, from=1-1, to=1-2]
\arrow["{U(u_f)}"{description}, from=1-2, to=1-3]
\arrow[""{name=2, anchor=center, inner sep=0}, "f", bend left=30, from=1-1, to=1-3]
\arrow["{m_\sigma}"{description}, from=1-2, to=2-2]
\arrow["{\nu_f \atop \simeq}"{description}, Rightarrow, draw=none, from=2, to=1-2]
\arrow["{\nu_\sigma \Downarrow}"{description}, Rightarrow, draw=none, from=0, to=1-2]
\arrow["{\Downarrow U(\omega_\sigma) }"{description}, Rightarrow, draw=none, from=1, to=1-2]
\end{tikzcd}\]
\end{proposition}
\begin{proof}
Apply the property of the lax generic part of $f$ to get a lax diagonalization of the lax square
\[\begin{tikzcd}
B & {U(A')} \\
{U(A_f)} & {U(A)}
\arrow[""{name=0, anchor=center, inner sep=0}, "{n_f}"', from=1-1, to=2-1]
\arrow[""{name=1, anchor=center, inner sep=0}, "g", from=1-1, to=1-2]
\arrow[""{name=2, anchor=center, inner sep=0}, "{U(u)}", from=1-2, to=2-2]
\arrow[""{name=3, anchor=center, inner sep=0}, "{U(u_f)}"', from=2-1, to=2-2]
\arrow["f"{description}, from=1-1, to=2-2]
\arrow["{\sigma \Uparrow}"{description}, shift left=1, Rightarrow, draw=none, from=2, to=1]
\arrow["{\nu_f \atop \simeq}"{description}, shift left=1, Rightarrow, draw=none, from=0, to=3]
\end{tikzcd}\]
\end{proof}
\begin{lemma}\label{post composing in the range of U preserves the generic part of laxfact}
Let $ f : B \rightarrow U(A)$ be a 1-cell and let $ u : A \rightarrow A'$ be a 1-cell in $\mathcal{A}$. Then $f$ and $ U(u)f$ have the same lax generic part up to a unique equivalence.
\end{lemma}
\begin{proof}
The lax generic factorization of the composite produces a factorization of the following invertible 2-cell by genericity of $n_{U(u)f}$
\[\begin{tikzcd}[sep=large]
B & {U(A)} \\
{U(A_{U(u)f})} & {U(A')}
\arrow[""{name=0, anchor=center, inner sep=0}, "f", from=1-1, to=1-2]
\arrow[""{name=1, anchor=center, inner sep=0}, "{U(u)}", from=1-2, to=2-2]
\arrow[""{name=2, anchor=center, inner sep=0}, "{n_{U(u)f}}"', from=1-1, to=2-1]
\arrow[""{name=3, anchor=center, inner sep=0}, "{U(u_{U(u)f})}"', from=2-1, to=2-2]
\arrow["{U(w_{\nu_{U(u)f}})}"{description}, from=2-1, to=1-2]
\arrow["{\nu_{\nu_{U(u)f}} \atop \simeq}"{description}, Rightarrow, draw=none, from=0, to=2]
\arrow["{U(\omega_{U(u)f}) \atop \simeq}"{description}, Rightarrow, draw=none, from=1, to=3]
\end{tikzcd}\]
This provides an invertible, generic lax 2-cell factorizing $ f$ through the generic $ n_{U(u)f}$. Hence we have two generic factorizations of $f$ related by an invertible 2-cell
\[\begin{tikzcd}
B & {U(A_{U(u)f})} \\
{U(A_f)} & {U(A)}
\arrow["{n_{U(u)f}}", from=1-1, to=1-2]
\arrow["{U(w_{\nu_{U(u)f}})}", from=1-2, to=2-2]
\arrow["{U(u_f)}"', from=2-1, to=2-2]
\arrow["{n_f}"', from=1-1, to=2-1]
\arrow["{\nu_{\nu_{U(u)f}} \nu_f^{-1} \atop \simeq}"{description}, draw=none, from=1-1, to=2-2]
\end{tikzcd}\]
which entails equivalence by \cref{uniqueness of generic factorization}.
\end{proof}
The following observation is immediate from the property of the lax generic part of the codomain arrow:
\begin{proposition}
For any 2-cell as below
\[\begin{tikzcd}
B && {U(A)}
\arrow[""{name=0, anchor=center, inner sep=0}, "{f_1}", start anchor=40, bend left=30, from=1-1, to=1-3]
\arrow[""{name=1, anchor=center, inner sep=0}, "{f_2}"', start anchor=-40, bend right=30, from=1-1, to=1-3]
\arrow["{\Downarrow \sigma}"{description}, Rightarrow, draw=none, from=0, to=1]
\end{tikzcd}\]
the generic factorizations of $f_1 $ and $ f_2$ are related by the following decomposition of $ \sigma$
\[\begin{tikzcd}
& {U(A_1)} \\
B && {U(A)} \\
& {U(A_2)}
\arrow[""{name=0, anchor=center, inner sep=0}, "{n_{A_1}}", from=2-1, to=1-2]
\arrow[""{name=1, anchor=center, inner sep=0}, "{n_{A_2}}"', from=2-1, to=3-2]
\arrow[""{name=2, anchor=center, inner sep=0}, "{U(u_{f_1})}", from=1-2, to=2-3]
\arrow[""{name=3, anchor=center, inner sep=0}, "{U(u_{f_2})}"', from=3-2, to=2-3]
\arrow["{U(m_\sigma)}"{description}, from=1-2, to=3-2]
\arrow["{\Downarrow\nu_{\sigma} }"{description}, Rightarrow, draw=none, from=0, to=1]
\arrow["{\Downarrow U(\omega_\sigma) }"{description}, Rightarrow, draw=none, from=2, to=3]
\end{tikzcd}\]
Moreover, for any composable 2-cells as below
\[\begin{tikzcd}
B && {U(A)}
\arrow[""{name=0, anchor=center, inner sep=0}, "{f_2}"{description}, from=1-1, to=1-3]
\arrow[""{name=1, anchor=center, inner sep=0}, "{f_1}", start anchor=40, bend left = 35, from=1-1, to=1-3]
\arrow[""{name=2, anchor=center, inner sep=0}, "{f_3}"', start anchor=-40, bend right = 35, from=1-1, to=1-3]
\arrow["{\Downarrow \sigma}"{description}, Rightarrow, draw=none, from=1, to=0]
\arrow["{\Downarrow \sigma'}"{description}, Rightarrow, draw=none, from=0, to=2]
\end{tikzcd}\]
we have the following relations between the generic data
\[ w_{\sigma'\sigma} \simeq w_{\sigma'}w_{\sigma} \hskip1cm
\nu_{\sigma'\sigma} = \nu_{\sigma'}w_{\sigma'} * \nu_{\sigma} \hskip1cm
\omega_{\sigma'\sigma} = \omega_{\sigma'}w_{\sigma'} * \omega_{\sigma}\]
\end{proposition}
\begin{proof}
The first item is immediate from the property of the lax generic part of the lax generic factorization. For the second item, observe that we can find two alternative factorizations of the following square
\[\begin{tikzcd}
B & {U(A_{f_3})} \\
{U(A_{f_1})} & {U(A)}
\arrow["{n_{f_1}}"', from=1-1, to=2-1]
\arrow["{n_{f_3}}", from=1-1, to=1-2]
\arrow["{U(u_{f_3})}", from=1-2, to=2-2]
\arrow["{U(u_{f_1})}"', from=2-1, to=2-2]
\arrow["{\nu_{f_3}^{-1}\sigma'\sigma\nu_{f_1} \Uparrow}"{description}, draw=none, from=1-1, to=2-2]
\end{tikzcd}\]
(which we shall denote abusively as $\sigma'\sigma$ for concision) provided by
\[\begin{tikzcd}
B && {U(A_{f_3})} \\
\\
{U(A_{f_1})} && {U(A)}
\arrow["{n_{f_1}}"', from=1-1, to=3-1]
\arrow["{n_{f_3}}", from=1-1, to=1-3]
\arrow["{U(u_{f_3})}", from=1-3, to=3-3]
\arrow["{U(u_{f_1})}"', from=3-1, to=3-3]
\arrow[""{name=0, anchor=center, inner sep=0}, "{U(w_{\sigma'\sigma})}"{description}, from=3-1, to=1-3]
\arrow["{\Uparrow \nu_{\sigma'\sigma}}"{description}, Rightarrow, draw=none, from=1-1, to=0]
\arrow["{U(\omega_{\sigma'\sigma}) \Uparrow}"{description}, Rightarrow, draw=none, from=0, to=3-3]
\end{tikzcd} = \begin{tikzcd}
B && {U(A_{f_3})} \\
& {U(A_{f_2})} \\
{U(A_{f_1})} && {U(A)}
\arrow[""{name=0, anchor=center, inner sep=0}, "{n_{f_1}}"', from=1-1, to=3-1]
\arrow[""{name=1, anchor=center, inner sep=0}, "{n_{f_3}}", from=1-1, to=1-3]
\arrow[""{name=2, anchor=center, inner sep=0}, "{U(u_{f_3})}", from=1-3, to=3-3]
\arrow[""{name=3, anchor=center, inner sep=0}, "{U(u_{f_1})}"', from=3-1, to=3-3]
\arrow["{n_{f_2}}"{description}, from=1-1, to=2-2]
\arrow["{U(w_{\sigma})}"{description}, from=3-1, to=2-2]
\arrow["{U(w_{\sigma'})}"{description}, from=2-2, to=1-3]
\arrow["{U(u_{f_2})}"{description}, from=2-2, to=3-3]
\arrow["{\nu_{\sigma} \Uparrow}"{description}, Rightarrow, draw=none, from=0, to=2-2]
\arrow["{\nu_{\sigma'} \Uparrow}"{description}, Rightarrow, draw=none, from=1, to=2-2]
\arrow["{U(\omega_{\sigma'}) \Uparrow}"{description}, Rightarrow, draw=none, from=2, to=2-2]
\arrow["{U(\omega_{\sigma}) \Uparrow}"{description}, Rightarrow, draw=none, from=3, to=2-2]
\end{tikzcd}\]
By the universal condition in the lax generic property of $n_{f_3}$, we know there exist two 2-cells $ \xi$ and $\zeta$ in $\mathcal{A}$ related by the mutual factorizations below
\[\begin{tikzcd}[sep=large]
& {U(A_{f_3})} \\
B & {U(A_{f_2})} \\
& {U(A_{f_1})}
\arrow[""{name=0, anchor=center, inner sep=0}, "{n_{f_3}}", from=2-1, to=1-2]
\arrow["{n_{f_2}}"{description}, from=2-1, to=2-2]
\arrow[""{name=1, anchor=center, inner sep=0}, "{n_{f_1}}"', from=2-1, to=3-2]
\arrow["{U(w_{\sigma'})}"{description}, from=2-2, to=1-2]
\arrow["{U(w_{\sigma})}"{description}, from=3-2, to=2-2]
\arrow[""{name=2, anchor=center, inner sep=0}, "{U(w_{\sigma'\sigma})}"', curve={height=40pt}, from=3-2, to=1-2]
\arrow["{\nu_{\sigma'} \Uparrow}"{description}, Rightarrow, draw=none, from=0, to=2-2]
\arrow["{\nu_\sigma \Uparrow}"{description}, Rightarrow, draw=none, from=1, to=2-2]
\arrow["{U(\xi)}"'{pos=0.7}, shorten <=3pt, Rightarrow, from=2, to=2-2]
\end{tikzcd} = \begin{tikzcd}
& {U(A_{f_3})} \\
B & {U(A_{f_2})} \\
& {U(A_{f_1})}
\arrow[""{name=0, anchor=center, inner sep=0}, "{n_{f_3}}", from=2-1, to=1-2]
\arrow[""{name=1, anchor=center, inner sep=0}, "{n_{f_1}}"', from=2-1, to=3-2]
\arrow["{U(w_{\sigma'})}"', from=2-2, to=1-2]
\arrow["{U(w_{\sigma})}"', from=3-2, to=2-2]
\arrow[from=2-1, to=2-2]
\arrow["{\nu_{\sigma'} \Uparrow}"{description}, Rightarrow, draw=none, from=2-2, to=0]
\arrow["{\nu_{\sigma} \Uparrow}"{description}, Rightarrow, draw=none, from=1, to=2-2]
\end{tikzcd} \]
\[\begin{tikzcd}
& {U(A_{f_3})} \\
B && {U(A_{f_2})} \\
& {U(A_{f_1})}
\arrow["{n_{f_3}}", from=2-1, to=1-2]
\arrow["{n_{f_1}}"', from=2-1, to=3-2]
\arrow["{U(w_{\sigma'})}"', from=2-3, to=1-2]
\arrow["{U(w_{\sigma})}"', from=3-2, to=2-3]
\arrow[""{name=0, anchor=center, inner sep=0}, "{U(w_{\sigma'\sigma})}"{description, pos=0.3}, from=3-2, to=1-2]
\arrow["{U(\zeta)} \atop \Leftarrow", shorten <=5pt, draw=none, from=0, to=2-3]
\arrow["{\nu_{\sigma'\sigma} \Uparrow}"{description}, Rightarrow, draw=none, from=2-1, to=0]
\end{tikzcd}= \begin{tikzcd}
& {U(A_{f_3})} \\
B \\
& {U(A_{f_1})}
\arrow["{n_{f_3}}", from=2-1, to=1-2]
\arrow["{n_{f_1}}"', from=2-1, to=3-2]
\arrow[""{name=0, anchor=center, inner sep=0}, "{U(w_{\sigma'\sigma})}"', from=3-2, to=1-2]
\arrow["{\nu_{\sigma'\sigma} \Uparrow}"{description}, Rightarrow, draw=none, from=2-1, to=0]
\end{tikzcd}\]
But $ \nu_{\sigma'\sigma}$ is generic, while $ \nu_{\sigma'} U(w_\sigma) *\nu_{\sigma} $ is also generic by the composition condition in the definition of a lax generic pseudofunctor. Hence by \cref{mutually factorizing generics} we have the desired equivalences.
\end{proof}
\subsection{Lax local right biadjoint}
Here we describe the lax version of the notion of local right biadjoint; however, one should beware that it \emph{does not} correspond exactly to lax-familial pseudofunctors: we are going to see that the factorization of 2-cells is more rigid than for general lax-generic 1-cells, so that globular 2-cells have to factorize through the same local unit.
\begin{definition}
A pseudofunctor $ U : \mathcal{A} \rightarrow \mathcal{B}$ is said to be \emph{lax local right biadjoint} if for each object $A$ in $\mathcal{A}$, the restriction at the lax slice at $A$ has a left biadjoint
\[\begin{tikzcd}
{\mathcal{A}\Downarrow A} && {\mathcal{B}\Downarrow U(A)}
\arrow[""{name=0, anchor=center, inner sep=0}, "{U_A}"', bend right=30, from=1-1, to=1-3]
\arrow[""{name=1, anchor=center, inner sep=0}, "{L_A}"', bend right=30, from=1-3, to=1-1]
\arrow["\dashv"{anchor=center, rotate=-90}, draw=none, from=1, to=0]
\end{tikzcd}\]
\end{definition}
\begin{remark}
Let us unravel this property: we still have a biadjunction (beware that we do not relax it into a lax-adjunction) at each lax slice, which manifests as the data of natural units and counits as above, except that they are now lax 2-cells
\[\begin{tikzcd}
B & {} & {U(A_f)} \\
& {U(A)}
\arrow["{h^A_f}", from=1-1, to=1-3]
\arrow[""{name=0, anchor=center, inner sep=0}, "f"', from=1-1, to=2-2]
\arrow[""{name=1, anchor=center, inner sep=0}, "{UL_A(f)}", from=1-3, to=2-2]
\arrow["{\huge{\eta^A_f \atop \Rightarrow}}"{description}, shorten <=7pt, shorten >=7pt, Rightarrow, draw=none, from=2-2, to=1-2]
\end{tikzcd}
\hskip1cm
\begin{tikzcd}
{A_{U(u)}} & {} & {A'} \\
& A
\arrow["{e^A_u}", from=1-1, to=1-3]
\arrow[""{name=0, anchor=center, inner sep=0}, "{L_A(U(u))}"', from=1-1, to=2-2]
\arrow[""{name=1, anchor=center, inner sep=0}, "u", from=1-3, to=2-2]
\arrow["{\huge{\epsilon^A_u \atop \Rightarrow}}"{description}, Rightarrow, draw=none, from=2-2, to=1-2]
\end{tikzcd}\]
where the unit has the universal property that any 1-cell $ (g,\alpha) : f \rightarrow U(u)$ in $ \mathcal{B}/U(A)$ decomposes as the following pasting
\[\begin{tikzcd}[sep=large]
B & {U(A_f)} && {U(A_{U(u)})} & {U(A')} \\
&& {U(A)}
\arrow["{UL_A(U(u))}"{description}, from=1-4, to=2-3]
\arrow["{UL_A(f)}"{description}, from=1-2, to=2-3]
\arrow["{h^A_f}"{description}, from=1-1, to=1-2]
\arrow["{U(e^A_u)}"{description}, from=1-4, to=1-5]
\arrow[""{name=0, anchor=center, inner sep=0}, "{U(L_A(g))}"{description}, from=1-2, to=1-4]
\arrow[""{name=1, anchor=center, inner sep=0}, "f"', curve={height=12pt}, from=1-1, to=2-3]
\arrow[""{name=2, anchor=center, inner sep=0}, "{U(u)}"{description}, curve={height=-12pt}, from=1-5, to=2-3]
\arrow[""{name=3, anchor=center, inner sep=0}, "g", curve={height=-30pt}, from=1-1, to=1-5]
\arrow["{\eta_f^A \atop \Rightarrow}"{description}, Rightarrow, draw=none, from=1, to=1-2]
\arrow["{UL_A(\alpha) \atop \Rightarrow}"{description}, Rightarrow, draw=none, from=2-3, to=0]
\arrow["{U(\epsilon^A_{u}) \atop \Rightarrow}"{description}, Rightarrow, draw=none, from=1-4, to=2]
\arrow["{(\mathfrak{i}^A_{f,u})_{(g,\alpha)} \atop \simeq}"{description}, Rightarrow, draw=none, from=3, to=0]
\end{tikzcd} = \begin{tikzcd}[sep=small]
B && {U(A')} \\
& {U(A)}
\arrow["f"', from=1-1, to=2-2]
\arrow["{U(u)}", from=1-3, to=2-2]
\arrow[""{name=0, anchor=center, inner sep=0}, "g", from=1-1, to=1-3]
\arrow["{\alpha \atop \Rightarrow}"{description}, Rightarrow, draw=none, from=0, to=2-2]
\end{tikzcd}\]
\end{remark}
\begin{remark}
In the lax context, we can now complete \cref{expression of the BC mate} with a naturality condition relative to 2-cells. Let $ \omega : u \Rightarrow v$ be a globular 2-cell in $ \mathcal{A}$. We can then define a natural transformation $ \mathcal{A}\Downarrow u \Rightarrow \mathcal{A} \Downarrow v $ whose component at $ w : A' \rightarrow A $ is the triangular 2-cell encoding the whiskering along $\omega$
\[\begin{tikzcd}
{A'} && {A'} \\
& {A_2}
\arrow["uw"', from=1-1, to=2-2]
\arrow["vw", from=1-3, to=2-2]
\arrow[""{name=0, anchor=center, inner sep=0}, Rightarrow, no head, from=1-1, to=1-3]
\arrow["{\omega*w \atop \Rightarrow}"{description}, Rightarrow, draw=none, from=0, to=2-2]
\end{tikzcd}\]
and similarly for the component of $ \mathcal{B}/U(\omega)$. On the other hand pseudofunctoriality of $ U$ produces a composite 2-cell
\[\begin{tikzcd}[sep=huge]
{U(A')} && {U(A')} \\
& {U(A_1)} \\
& {U(A_2)}
\arrow["{U(w)}"{description}, from=1-1, to=2-2]
\arrow["{U(w)}"{description}, from=1-3, to=2-2]
\arrow[""{name=0, anchor=center, inner sep=0}, curve={height=-18pt}, Rightarrow, no head, from=1-1, to=1-3]
\arrow[""{name=1, anchor=center, inner sep=0}, "{U(v)}"{description}, curve={height=-12pt}, from=2-2, to=3-2]
\arrow[""{name=2, anchor=center, inner sep=0}, "{U(u)}"{description}, curve={height=12pt}, from=2-2, to=3-2]
\arrow[""{name=3, anchor=center, inner sep=0}, "{U(uw)}"', curve={height=18pt}, from=1-1, to=3-2]
\arrow[""{name=4, anchor=center, inner sep=0}, "{U(1_A)}"{description}, from=1-1, to=1-3]
\arrow[""{name=5, anchor=center, inner sep=0}, "{U(vw)}", curve={height=-18pt}, from=1-3, to=3-2]
\arrow["{\alpha_{u,w} \atop \simeq}"{description}, Rightarrow, draw=none, from=3, to=2-2]
\arrow["{U(1_{w}) \atop \simeq}"{description}, Rightarrow, draw=none, from=4, to=2-2]
\arrow["{\alpha_A \atop \simeq}"{description}, shorten <=2pt, shorten >=2pt, Rightarrow, from=0, to=4]
\arrow["{U(\omega) \atop \Rightarrow}"{description}, Rightarrow, draw=none, from=2, to=1]
\arrow["{\alpha_{v,w} \atop \simeq}"{description}, Rightarrow, draw=none, from=2-2, to=5]
\end{tikzcd}\]
which is the effect of $U_{A_2}$ on the 1-cell $( \mathcal{A}\Downarrow \omega)_w$.\\
Now let $ f : B \rightarrow U(A_1)$ be a 1-cell; let us denote by
\[\begin{tikzcd}
{A_{U(u)f}} && {A_{U(v)f}} \\
& {A_2}
\arrow[""{name=0, anchor=center, inner sep=0}, "{L_{A_2}(U(u)f)}"', from=1-1, to=2-2]
\arrow["{w_{L_{A_2}(U(w))}}", from=1-1, to=1-3]
\arrow["{L_{A_2}(U(v)f)}"{pos=0.7}, from=1-3, to=2-2]
\arrow["{\omega_{L_{A_2}(U(w))} \atop \Rightarrow}"{description}, Rightarrow, draw=none, from=0, to=1-3]
\end{tikzcd}\]
the 2-cell of $ \mathcal{A}$ to which the whiskering along $ U(\omega)$ is sent by the local left adjoint $ L_{A_2}$. Observe that we have the following 2-dimensional equality, provided by pseudofunctoriality of $U$
\[\begin{tikzcd}[row sep=large]
B && {U(A_f)} && {U(A_f)} \\
& {U(A_1)} \\
&& {U(A_2)}
\arrow[""{name=0, anchor=center, inner sep=0}, "f"', from=1-1, to=2-2]
\arrow[""{name=1, anchor=center, inner sep=0}, "{UL_{A_1}(f)}"{description}, from=1-3, to=2-2]
\arrow["{h^{A_1}_f}", from=1-1, to=1-3]
\arrow["{U(u)}"', from=2-2, to=3-3]
\arrow[""{name=2, anchor=center, inner sep=0}, "{U(uL_{A_1}(f))}"{description, pos=0.6}, from=1-3, to=3-3]
\arrow[Rightarrow, no head, from=1-3, to=1-5]
\arrow[""{name=3, anchor=center, inner sep=0}, "{U(vL_{A_1}(f))}", from=1-5, to=3-3]
\arrow["{\eta^{A_1}_f \atop \Rightarrow}", Rightarrow, draw=none, from=0, to=1]
\arrow["{\alpha_{u, L_{A_1}(f)} \atop \simeq}", Rightarrow, draw=none, from=2-2, to=2]
\arrow["{U(\omega *L_{A_1}(f)) \atop \Rightarrow}"{description}, shift left=4, Rightarrow, draw=none, from=2, to=3]
\end{tikzcd}=
\begin{tikzcd}[row sep=large]
B & B && {U(A_f)} \\
{U(A_1)} & {U(A_1)} \\
& {U(A_2)}
\arrow["f"', from=1-1, to=2-1]
\arrow["{U(u)}"', from=2-1, to=3-2]
\arrow["{h^{A_1}_f}", from=1-2, to=1-4]
\arrow[Rightarrow, no head, from=1-1, to=1-2]
\arrow[""{name=0, anchor=center, inner sep=0}, Rightarrow, no head, from=2-1, to=2-2]
\arrow["U(v)"{description}, from=2-2, to=3-2]
\arrow["f"{description}, from=1-2, to=2-2]
\arrow[""{name=1, anchor=center, inner sep=0}, "{UL_{A_1}(f)}"{description}, from=1-4, to=2-2]
\arrow[""{name=2, anchor=center, inner sep=0}, "{U(vL_{A_1}(f))}", curve={height=-12pt}, from=1-4, to=3-2]
\arrow["{U(\omega) \atop \Rightarrow}"{description}, Rightarrow, draw=none, from=3-2, to=0]
\arrow["{\alpha_{v, L_{A_1}(f)} \atop \simeq}"{description}, Rightarrow, draw=none, from=2-2, to=2]
\arrow["{\eta^{A_1}_f \atop \Rightarrow}"{description, pos=0.6}, Rightarrow, draw=none, from=1, to=1-2]
\end{tikzcd}\]
This equality is sent by the pseudofunctor $L_{A_2}$ to an invertible 2-cell in the lax slice over $A_2$
\[\begin{tikzcd}[sep=huge]
{A_{U(u)f}} & {A_f} & {A_f} \\
& {A_2}
\arrow[""{name=0, anchor=center, inner sep=0}, "{L_{A_2}(U(u)f)}"', from=1-1, to=2-2]
\arrow["{s^u_f}", from=1-1, to=1-2]
\arrow["{uL_{A_1}(f)}"{description, pos=0.6}, from=1-2, to=2-2]
\arrow[Rightarrow, no head, from=1-2, to=1-3]
\arrow[""{name=1, anchor=center, inner sep=0}, "{vL_{A_1}(f)}", from=1-3, to=2-2]
\arrow["{\sigma^u_f \atop \Rightarrow}"{description}, Rightarrow, draw=none, from=0, to=1-2]
\arrow["{\omega*L_{A_1}(f) \atop \Rightarrow}"{description}, Rightarrow, draw=none, from=1, to=1-2]
\end{tikzcd}
= \begin{tikzcd}[sep=huge]
{A_{U(u)f}} & {A_{U(v)f}} & {A_f} \\
& {A_2}
\arrow[""{name=0, anchor=center, inner sep=0}, "{L_{A_2}(U(u)f)}"', from=1-1, to=2-2]
\arrow["{w_{L_{A_2}( U(\omega))}}", from=1-1, to=1-2]
\arrow["{L_{A_2}(U(v)f)}"{description, pos=0.6}, from=1-2, to=2-2]
\arrow["{s^v_f}", from=1-2, to=1-3]
\arrow[""{name=1, anchor=center, inner sep=0}, "{vL_{A_1}(f)}", from=1-3, to=2-2]
\arrow["{\omega_{L_{A_2}( U(\omega))} \atop \Rightarrow}"{description}, Rightarrow, draw=none, from=0, to=1-2]
\arrow["{\sigma^v_f \atop \Rightarrow}"{description}, Rightarrow, draw=none, from=1, to=1-2]
\end{tikzcd}\]
One can show this equality to be natural in $f$, producing the 2-dimensional data of the Beck-Chevalley mate
\[\begin{tikzcd}[sep=huge]
{\mathcal{A}\Downarrow A_1} & {\mathcal{B}/U(A_1)} \\
{\mathcal{A}\Downarrow A_2} & {\mathcal{B}/U(A_2)}
\arrow[""{name=0, anchor=center, inner sep=0}, "{\mathcal{A}\Downarrow v}"', curve={height=30pt}, from=1-1, to=2-1]
\arrow["{L_{A_1}}"', from=1-2, to=1-1]
\arrow["{L_{A_2}}", from=2-2, to=2-1]
\arrow[""{name=1, anchor=center, inner sep=0}, "{\mathcal{B}/U(u)}", from=1-2, to=2-2]
\arrow[""{name=2, anchor=center, inner sep=0}, "{\mathcal{A}\Downarrow u}"{description}, curve={height=-30pt}, from=1-1, to=2-1]
\arrow["{\sigma^u \atop \Leftarrow}"{description}, Rightarrow, draw=none, from=1, to=2]
\arrow["{\mathcal{A}\Downarrow \omega \atop \Leftarrow}"{description}, shorten <=4pt, shorten >=4pt, Rightarrow, draw=none, from=2, to=0]
\end{tikzcd}
= \begin{tikzcd}[sep=huge]
{\mathcal{A}\Downarrow A_1} & {\mathcal{B}/U(A_1)} \\
{\mathcal{A}\Downarrow A_2} & {\mathcal{B}/U(A_2)}
\arrow[""{name=0, anchor=center, inner sep=0}, "{\mathcal{A}\Downarrow v}"', from=1-1, to=2-1]
\arrow["{L_{A_1}}"', from=1-2, to=1-1]
\arrow["{L_{A_2}}", from=2-2, to=2-1]
\arrow[""{name=1, anchor=center, inner sep=0}, "{\mathcal{B}/U(v)}"{description}, curve={height=30pt}, from=1-2, to=2-2]
\arrow[""{name=2, anchor=center, inner sep=0}, "{\mathcal{B}/U(u)}", curve={height=-30pt}, from=1-2, to=2-2]
\arrow["{\mathcal{B}/U(\omega) \atop \Leftarrow}"{description}, Rightarrow, draw=none, from=2, to=1]
\arrow["{\sigma^v \atop \Leftarrow}"{description}, Rightarrow, draw=none, from=1, to=0]
\end{tikzcd}\]
\end{remark}
This ``repairs'' the incomplete Beck-Chevalley condition of local right biadjoints. Beware however that in general lax-local right biadjoints are not local right biadjoints, for the local unit 2-cells are not necessarily invertible: such a condition must be stipulated separately.
\begin{definition}
A lax-local right biadjoint is said to be \emph{coherent} if it restricts to a local right biadjoint on pseudoslices; that is, if for any $A$ the left adjoint $ L_A$ of $U\Downarrow A$ restricts to a left adjoint of $ U/A$ along the inclusion of the pseudoslice, so that we have a pseudonatural equivalence
\[\begin{tikzcd}
{\mathcal{A}/A} & {\mathcal{B}/U(A)} \\
{\mathcal{A}\Downarrow A} & {\mathcal{B}\Downarrow U(A)}
\arrow["{\iota_A}"', hook, from=1-1, to=2-1]
\arrow["{L_A}"', dashed, from=1-2, to=1-1]
\arrow["{\iota_{U(A)}}", hook, from=1-2, to=2-2]
\arrow["{L_A}", from=2-2, to=2-1]
\arrow["\simeq"{description}, draw=none, from=2-2, to=1-1]
\end{tikzcd}\]
\end{definition}
As the pseudoslice contains all objects of the lax-slice, this ensures that all local units $ (h^A_f, \eta^A_f)$ actually lie in the pseudoslices, that is, that the 2-cells $ \eta^A_f$ are invertible for each $ f: B \rightarrow U(A)$ at each $A$.
\begin{proposition}
Let $U$ be a coherent lax-local right biadjoint, and let $ n : B \rightarrow U(A)$ be such that $ n \simeq h^A_n$. Then $ n$ is lax-generic; moreover, in a lax square as below
\[\begin{tikzcd}
B & {U(A_1)} \\
{U(A)} & {U(A_2)}
\arrow["n"', from=1-1, to=2-1]
\arrow["{U(v)}"', from=2-1, to=2-2]
\arrow["g", from=1-1, to=1-2]
\arrow["{U(u)}", from=1-2, to=2-2]
\arrow["{\Uparrow \sigma}"{description}, draw=none, from=2-1, to=1-2]
\end{tikzcd}\]
the left part of the lax diagonalization is invertible.
\end{proposition}
\begin{proof}
A lax square as above amounts to a lax 2-cell
\[\begin{tikzcd}
B && {U(A_1)} \\
& {U(A_2)}
\arrow["g", from=1-1, to=1-3]
\arrow[""{name=0, anchor=center, inner sep=0}, "{U(v)n}"', from=1-1, to=2-2]
\arrow[""{name=1, anchor=center, inner sep=0}, "{U(u)}", from=1-3, to=2-2]
\arrow["{\sigma \atop \Rightarrow}", Rightarrow, draw=none, from=0, to=1]
\end{tikzcd}\]
which factorizes by bi-adjointness as a pasting
\[\begin{tikzcd}[sep=huge]
B & {U(A_{U(v)n})} & {U(A_1)} \\
& {U(A_2)}
\arrow[""{name=0, anchor=center, inner sep=0}, "g", bend left=30, start anchor=45, from=1-1, to=1-3]
\arrow[""{name=1, anchor=center, inner sep=0}, "{U(v)n}"', from=1-1, to=2-2]
\arrow[""{name=2, anchor=center, inner sep=0}, "{U(u)}", from=1-3, to=2-2]
\arrow["{h^{A_2}_{U(v)n}}"{description}, from=1-1, to=1-2]
\arrow["{U(w_{(g,\sigma)})}"{description}, from=1-2, to=1-3]
\arrow[from=1-2, to=2-2]
\arrow["{\mathfrak{i}_{(g,\sigma)} \atop \simeq}"{description}, Rightarrow, draw=none, from=0, to=1-2]
\arrow["{\eta^{A_2}_{U(v)n} \atop \simeq}"{description}, Rightarrow, draw=none, from=1, to=1-2]
\arrow["{U(L_{A_2}(\omega_{(g,\sigma)})) \atop \Rightarrow}"{description}, Rightarrow, draw=none, from=1-2, to=2]
\end{tikzcd}\]
But now, since the factorization through the local unit is essentially unique, this provides an invertible 2-cell
\[\begin{tikzcd}[sep=huge]
B && {U(A_1)} \\
{U(A)} && {U(A_2)}
\arrow["n"', from=1-1, to=2-1]
\arrow["g", from=1-1, to=1-3]
\arrow["U(w_{(g,\sigma)})"{description}, ""{name=0, anchor=center, inner sep=0}, from=2-1, to=1-3]
\arrow["{U(u)}", from=1-3, to=2-3]
\arrow["{U(v)}"', from=2-1, to=2-3]
\arrow["{\eta^{A_2}_{U(v)n})\mathfrak{i}_{(g,\sigma)} \atop \simeq}"{description, pos=0.6}, Rightarrow, draw=none, from=0, to=1-1]
\arrow["{ \scriptsize{\Uparrow U(L_{A_2}(\omega_{(g,\sigma)}))}}"{description}, Rightarrow, draw=none, from=0, to=2-3]
\end{tikzcd}\]
\end{proof}
\begin{remark}
Hence any 2-cell $\sigma$ factorizes as below
\[\begin{tikzcd}[sep=large]
B & {U(A_\sigma)} && {U(A)}
\arrow["{\eta^A_\sigma}"{description}, from=1-1, to=1-2]
\arrow["U(L_A(f_1))"{description, name=0, anchor=center, inner sep=0}, curve={height=-18pt}, from=1-2, to=1-4]
\arrow["U(L_A(f_2))"{description, name=1, anchor=center, inner sep=0}, curve={height=18pt}, from=1-2, to=1-4]
\arrow[""{name=2, anchor=center, inner sep=0}, "{f_1}", shift left=2, curve={height=-28pt}, from=1-1, to=1-4]
\arrow[""{name=3, anchor=center, inner sep=0}, "{f_2}"', shift right=2, curve={height=28pt}, from=1-1, to=1-4]
\arrow["{\Downarrow U(\omega_\sigma)}"{description}, Rightarrow, draw=none, from=0, to=1]
\arrow["{\eta^A_{f_1} \atop \simeq}"'{pos=0.9}, Rightarrow, draw=none, from=2, to=1-2]
\arrow["{\eta^A_{f_2} \atop \simeq}"{pos=0.9}, Rightarrow, draw=none, from=3, to=1-2]
\end{tikzcd}\]
This factorization is more rigid than the one available for general lax-familial pseudofunctors. It is not known which lax version of local adjointness lax-familial pseudofunctors correspond to, nor is this the kind of factorization that arises in our examples.
\end{remark}
\section{Quotient Ring of Cauchy Sequences is Normed Division Ring/Corollary 1}
Tags: Completion of Normed Division Ring
\begin{theorem}
Let $\struct {R, \norm {\, \cdot \,} }$ be a [[Definition:Valued Field|valued field]].
Let $\CC$ be the ring of Cauchy sequences over $R$, let $\NN$ be the ideal of null sequences, and let $\norm {\, \cdot \,}_1$ denote the induced norm on the quotient ring $\CC \,\big / \NN$.
Then $\struct {\CC \,\big / \NN, \norm {\, \cdot \,}_1 }$ is a [[Definition:Valued Field|valued field]].
\end{theorem}
\begin{proof}
By [[Quotient Ring of Cauchy Sequences is Normed Division Ring]], $\CC \,\big / \NN$ is a [[Definition:Normed Division Ring|normed division ring]].
By [[Quotient Ring of Cauchy Sequences is Division Ring/Corollary 1|Corollary 1 to Quotient Ring of Cauchy Sequences is Division Ring]], $\CC \,\big / \NN$ is a [[Definition:Field (Abstract Algebra)|field]].
The result follows.
{{qed}}
\end{proof}
| {"config": "wiki", "file": "thm_17321.txt"} |
\begin{document}
\sloppy
\maketitle
\begin{abstract}
Motives of Brauer-Severi schemes of Cayley-smooth algebras associated to homogeneous superpotentials are used to compute inductively the motivic Donaldson-Thomas invariants of the corresponding Jacobian algebras. This approach can be used to test the conjectural exponential expressions for these invariants, proposed in \cite{Cazz}. As an example we confirm the second term of the conjectured expression for the motivic series of the homogenized Weyl algebra.
\end{abstract}
\section{Introduction}
We fix a homogeneous degree $d$ superpotential $W$ in $m$ non-commuting variables $X_1,\hdots,X_m$. For every dimension $n \geq 1$, $W$ defines a regular function, sometimes called the Chern-Simons functional
\[
Tr(W)~:~\mathbb{M}_{m,n} = \underbrace{M_n(\C) \oplus \hdots \oplus M_n(\C)}_m \rTo \C \]
obtained by replacing in $W$ each occurrence of $X_i$ by the $n \times n$ matrix in the $i$-th component, and taking traces.
We are interested in the (naive, equivariant) motives of the fibers of this functional which we denote by
\[
\mathbb{M}_{m,n}^W(\lambda) = Tr(W)^{-1}(\lambda). \]
Recall that to each isomorphism class of a complex variety $X$ (equipped with a good action of a finite group of roots of unity) we associate its naive equivariant motive $[X]$ which is an element in the ring $K_0^{\hat{\mu}}(\mathrm{Var}_{\C})[ \mathbb{L}^{-1/2}]$ (see \cite{Davison} or \cite{Cazz}) and is subject to the scissor- and product-relations
\[
[X]-[Z]=[X-Z] \quad \text{and} \quad [X].[Y]=[X \times Y] \]
whenever $Z$ is a Zariski closed subvariety of $X$. A special element is the Lefschetz motive $\mathbb{L}=[ \mathbb{A}^1_{\C}, id]$ and we recall from \cite[Lemma 4.1]{Morrison} that $[GL_n]=\prod_{k=0}^{n-1}(\mathbb{L}^n-\mathbb{L}^k)$ and from \cite[2.2]{Cazz} that $[\mathbb{A}^n,\mu_k]=\mathbb{L}^n$ for a linear action of $\mu_k$ on $\mathbb{A}^n$. This ring is equipped with a plethystic exponential $\wis{Exp}$, see for example \cite{Bryan} and \cite{Davison}.
The representation theoretic interest of the degeneracy locus $Z = \{ d Tr(W)=0 \}$ of the Chern-Simons functional is that it coincides with the scheme of $n$-dimensional representations
\[
Z = \wis{rep}_n(R_W) \quad \text{where} \quad R_W = \frac{\C \langle X_1,\hdots,X_m \rangle}{(\partial_{X_i}(W) : 1 \leq i \leq m)} \]
of the corresponding Jacobi algebra $R_W$ where $\partial_{X_i}$ is the cyclic derivative with respect to $X_i$.
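For example, for the superpotential $W=XYZ-XZY$ one computes the cyclic derivatives
\[
\partial_X(W) = YZ-ZY, \quad \partial_Y(W) = ZX-XZ, \quad \partial_Z(W) = XY-YX \]
so that the corresponding Jacobi algebra $R_W$ is the commutative polynomial ring $\C[x,y,z]$.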
As $W$ is homogeneous it follows from \cite[Thm. 1.3]{Davison} (or \cite{Behrend} if the superpotential allows 'a cut') that its virtual motive is equal to
\[
[ \wis{rep}_n(R_W) ]_{virt} = \mathbb{L}^{-\frac{mn^2}{2}}([\mathbb{M}_{m,n}^W(0)]-[\mathbb{M}_{m,n}^W(1)]) \]
where $\hat{\mu}$ acts via $\mu_d$ on $\mathbb{M}^W_{m,n}(1)$ and trivially on $\mathbb{M}^W_{m,n}(0)$.
These virtual motives can be packaged together into the motivic Donaldson-Thomas series
\[
U_W(t) = \sum_{n=0}^{\infty} \mathbb{L}^{- \frac{(m-1)n^2}{2}} \frac{[\mathbb{M}_{m,n}^W(0)]-[\mathbb{M}_{m,n}^W(1)]}{[GL_n]} t^n \]
In \cite{Cazz} A. Cazzaniga, A. Morrison, B. Pym and B. Szendr\"oi conjecture that this generating series has an exponential expression involving simple rational functions of virtual motives determined by representation theoretic information of the Jacobi algebra $R_W$
\[
U_W(t) \overset{?}{=} \wis{Exp}(- \sum_{i=1}^k \frac{M_i}{\mathbb{L}^{1/2}-\mathbb{L}^{-1/2}} \frac{t^{m_i}}{1-t^{m_i}}) \]
where $m_1=1,\hdots,m_k$ are the dimensions of simple representations of $R_W$ and $M_i \in \mathcal{M}_{\C}$ are motivic expressions without denominators, with $M_1$ the virtual motive of the scheme parametrizing (simple) $1$-dimensional representations. Evidence for this conjecture comes from cases where the superpotential admits a cut and hence one can use dimensional reduction, introduced by A. Morrison in \cite{Morrison}, as in the case of quantum affine three-space \cite{Cazz}.
The purpose of this paper is to introduce an inductive procedure to test the conjectural exponential expressions given in \cite{Cazz} in other interesting cases such as the homogenized Weyl algebra and elliptic Sklyanin algebras. To this end we introduce the following quotient of the free necklace algebra on $m$ variables
\[
\mathbb{T}_m^W(\lambda) = \frac{\C \langle X_1, \hdots, X_m \rangle \otimes \wis{Sym}(V_m)}{(W-\lambda)},~\text{where}~V_m = \frac{\C \langle X_1,\hdots,X_m \rangle}{[\C \langle X_1,\hdots,X_m \rangle,\C \langle X_1,\hdots,X_m \rangle]_{vect}} \]
is the vector space having as a basis all cyclic words in $X_1,\hdots,X_m$. Note that any superpotential is an element of $\wis{Sym}(V_m)$. Substituting each $X_k$ by a generic $n \times n$ matrix and each cyclic word by the corresponding trace, we obtain a quotient of the trace ring of $m$ generic $n \times n$ matrices
\[
\mathbb{T}_{m,n}^W(\lambda) = \frac{\mathbb{T}_{m,n}}{(Tr(W)-\lambda)} \quad \text{with} \quad \mathbb{M}_{m,n}^W(\lambda) = \wis{trep}_n(\mathbb{T}_{m,n}^W(\lambda)) \]
such that its scheme of trace preserving $n$-dimensional representations is isomorphic to the fiber $\mathbb{M}_{m,n}^W(\lambda)$. We will see that if $\lambda \not= 0$ the algebra $\mathbb{T}_{m,n}^W(\lambda)$ shares many ring-theoretic properties with trace rings of generic matrices; in particular it is a Cayley-smooth algebra, see \cite{LBbook}. As such one might hope to describe $\mathbb{M}_{m,n}^W(\lambda)$ using the Luna stratification of the quotient and its fibers in terms of marked quiver settings given in \cite{LBbook}. However, all this is with respect to the \'etale topology and hence useless in computing motives.
For this reason we consider the Brauer-Severi scheme of $\mathbb{T}_{m,n}^W(\lambda)$, as introduced by M. Van den Bergh in \cite{VdBBS} and further investigated by M. Reineke in \cite{ReinekeBS}, which are quotients of principal $GL_n$-bundles and hence behave well with respect to motives. More precisely, the Brauer-Severi scheme of $\mathbb{T}_{m,n}^W(\lambda)$ is defined as
\[
\wis{BS}_{m,n}^W(\lambda) = \{ (v,\phi) \in \C^n \times \wis{trep}_n(\mathbb{T}_{m,n}^W(\lambda))~|~\phi(\mathbb{T}_{m,n}^W(\lambda))v = \C^n \} / GL_n \]
and their motives determine inductively the motives in the Donaldson-Thomas series. In Proposition~\ref{induction} we will show that
\[
(\mathbb{L}^n-1) \frac{[\mathbb{M}^W_{m,n}(0)]-[\mathbb{M}^W_{m,n}(1)]}{[GL_n]} \]
is equal to
\[
[\wis{BS}^W_{m,n}(0)]-[\wis{BS}_{m,n}^W(1)] + \sum_{k=1}^{n-1} \frac{\mathbb{L}^{(m-1)k(n-k)}}{[GL_{n-k}]} ([\wis{BS}^W_{m,k}(0)]-[\wis{BS}^W_{m,k}(1)])([\mathbb{M}^W_{m,k}(0)]-[\mathbb{M}^W_{m,k}(1)]) \]
In section~4 we will compute the first two terms of $U_W(t)$ in the case of the quantized $3$-space in a variety of ways. In the final section we repeat the computation for the homogenized Weyl algebra and show that it coincides with the conjectured expression of \cite{Cazz}. In a forthcoming paper \cite{LBsuper} we will compute the first two terms of the series for elliptic Sklyanin algebras both in the generic case and the case of $2$-torsion points.
\vskip 4mm
{\em Acknowledgement : } I would like to thank Brent Pym for stimulating conversations concerning the results of \cite{Cazz} and Balazs Szendr\"oi for explaining the importance of the monodromy action (which was lacking in a previous version) and for sharing his calculations on the Exp-expressions of \cite{Cazz}. I am grateful to Ben Davison for pointing out a computational error in summing up the terms in the homogenized Weyl algebra case and explaining the equality with the conjectured motive.
\section{Brauer-Severi motives}
With $\mathbb{T}_{m,n}$ we will denote the {\em trace ring of $m$ generic $n \times n$ matrices}. That is, $\mathbb{T}_{m,n}$ is the $\C$-subalgebra of the full matrix-algebra $M_n(\C[x_{ij}(k)~|~1 \leq i,j \leq n, 1 \leq k \leq m])$ generated by the $m$ generic matrices
\[
X_k = \begin{bmatrix} x_{11}(k) & \hdots & x_{1n}(k) \\
\vdots & & \vdots \\
x_{n1}(k) & \hdots & x_{nn}(k) \end{bmatrix} \]
together with all elements of the form $Tr(M) 1_n$ where $M$ runs over all monomials in the $X_i$. These algebras have been studied extensively by ring theorists in the 1980s, and some of these results are summarized in the following proposition.
\begin{proposition} Let $\mathbb{T}_{m,n}$ be the trace ring of $m$ generic $n \times n$ matrices, then
\begin{enumerate}
\item{$\mathbb{T}_{m,n}$ is an affine Noetherian domain with center $Z(\mathbb{T}_{m,n})$ of dimension $(m-1) n^2+1$ and generated as $\C$-algebra by the $Tr(M)$ where $M$ runs over all monomials in the generic matrices $X_k$.}
\item{$\mathbb{T}_{m,n}$ is a maximal order and a noncommutative UFD, that is, all two-sided prime ideals of height one are generated by a central element and $Z(\mathbb{T}_{m,n})$ is a commutative UFD which is a complete intersection if and only if $n=1$ or $(m,n)=(2,2),(2,3)$ or $(3,2)$.}
\item{$\mathbb{T}_{m,n}$ is a reflexive Azumaya algebra unless $(m,n)=(2,2)$, that is, every localization at a central height one prime ideal is an Azumaya algebra.}
\end{enumerate}
\end{proposition}
\begin{proof} For (1) see for example \cite{Procesi} or \cite{Razmyslov}. For (2) see for example \cite{LBAS}, for (3) for example \cite{LBQuiver}.
\end{proof}
A Cayley-Hamilton algebra of degree $n$ is a $\C$-algebra $A$, equipped with a linear trace map $tr : A \rTo A$ satisfying the following properties:
\begin{enumerate}
\item{$tr(a).b = b. tr(a)$}
\item{$tr(a.b) = tr(b.a)$}
\item{$tr(tr(a).b) = tr(a).tr(b)$}
\item{$tr(1) = n$}
\item{$\chi_a^{(n)}(a)=0$ where $\chi_a^{(n)}(t)$ is the formal Cayley-Hamilton polynomial of degree $n$, see \cite{ProcesiCH}}
\end{enumerate}
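The motivating example is $M_n(\C)$ itself, equipped with the usual trace: properties (1)-(4) are immediate, and property (5) expresses the classical Cayley-Hamilton theorem, which for $n=2$ reads
\[
a^2 - tr(a) a + \frac{1}{2}(tr(a)^2-tr(a^2)) 1_2 = 0 \]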
For a Cayley-Hamilton algebra $A$ of degree $n$ it is natural to look at the scheme $\wis{trep}_n(A)$ of all {\em trace preserving} $n$-dimensional representations of $A$, that is, all trace preserving algebra maps $A \rTo M_n(\C)$. A Cayley-Hamilton algebra $A$ of degree $n$ is said to be a {\em smooth Cayley-Hamilton algebra} if $\wis{trep}_n(A)$ is a smooth variety. Procesi has shown that these are precisely the algebras having the smoothness property of allowing lifts modulo nilpotent ideals in the category of all Cayley-Hamilton algebras of degree $n$, see \cite{ProcesiCH}. The \'etale local structure of smooth Cayley-Hamilton algebras and their centers have been extensively studied in \cite{LBbook}.
\begin{proposition} Let $W$ be a homogeneous superpotential in $m$ variables and define the algebra
\[
\mathbb{T}_{m,n}^W(\lambda) = \frac{\mathbb{T}_{m,n}}{(Tr(W)-\lambda)} \quad \text{then} \quad \mathbb{M}_{m,n}^W(\lambda) = \wis{trep}_n(\mathbb{T}_{m,n}^W(\lambda)) \]
If $Tr(W)-\lambda$ is irreducible in the UFD $Z(\mathbb{T}_{m,n})$, then for $\lambda \not= 0$
\begin{enumerate}
\item{$\mathbb{T}_{m,n}^W(\lambda)$ is a reflexive Azumaya algebra.}
\item{$\mathbb{T}_{m,n}^W(\lambda)$ is a smooth Cayley-Hamilton algebra of degree $n$ and of Krull dimension $(m-1)n^2$.}
\item{$\mathbb{T}_{m,n}^W(\lambda)$ is a domain.}
\item{The central singular locus is the non-Azumaya locus of $\mathbb{T}_{m,n}^W(\lambda)$ unless $(m,n)=(2,2)$.}
\end{enumerate}
\end{proposition}
\begin{proof}
(1) : As $\mathbb{M}_{m,n}^W(\lambda)=\wis{trep}_n(\mathbb{T}_{m,n}^W(\lambda))$ is a smooth affine variety for $\lambda \not= 0$ (due to homogeneity of $W$) on which $GL_n$ acts by automorphisms, we know that the ring of invariants,
\[
\C[\wis{trep}_n(\mathbb{T}_{m,n}^W(\lambda))]^{GL_n} = Z(\mathbb{T}_{m,n}^W(\lambda)) \]
which coincides with the center of $\mathbb{T}_{m,n}^W(\lambda)$ by e.g. \cite[Prop. 2.12]{LBbook}, is a normal domain. Because the non-Azumaya locus of $\mathbb{T}_{m,n}$ has codimension at least $3$ (if $(m,n) \not= (2,2)$) by \cite{LBQuiver}, it follows that all localizations of $\mathbb{T}_{m,n}^W(\lambda)$ at height one prime ideals are Azumaya algebras. Alternatively, using (2) one can use the theory of local quivers as in \cite{LBbook}.
(2) : That the Cayley-Hamilton degree of the quotient $\mathbb{T}_{m,n}^W(\lambda)$ remains $n$ follows from the fact that $\mathbb{T}_{m,n}$ is a reflexive Azumaya algebra and irreducibility of $Tr(W)-\lambda$. Because $\mathbb{M}_{m,n}^W(\lambda)=\wis{trep}_n(\mathbb{T}_{m,n}^W(\lambda))$ is a smooth affine variety, $\mathbb{T}_{m,n}^W(\lambda)$ is a smooth Cayley-Hamilton algebra. The statement on Krull dimension follows from the fact that the Krull dimension of $\mathbb{T}_{m,n}$ is known to be $(m-1)n^2+1$.
(3) : After taking determinants, this follows from factoriality of $Z(\mathbb{T}_{m,n})$ and irreducibility of $Tr(W)-\lambda$.
(4) : This follows from the theory of local quivers as in \cite{LBbook}. The most general non-simple representations are of representation type $(1,a;1,b)$ with the dimensions of the two simple representations $a,b$ adding up to $n$. The corresponding local quiver is
\[
\xymatrix{\vtx{1} \ar@2@/^2ex/[rr]^{(m-1)ab} \ar@{=>}@(ld,lu)^{(m-1)a^2+1} & & \vtx{1} \ar@2@/^2ex/[ll]^{(m-1)ab} \ar@{=>}@(ru,rd)^{(m-1)b^2} }
\]
and as $(m-1)ab \geq 2$ under the assumptions, it follows that the corresponding central point is singular.
\end{proof}
Let us define for all $k \leq n$ and all $\lambda \in \C$ the locally closed subscheme of $\C^n \times \wis{trep}_n(\mathbb{T}_{m,n}^W(\lambda))$
\[
\wis{X}_{k,n,\lambda} = \{ (v,\phi) \in \C^n \times \wis{trep}_n(\mathbb{T}_{m,n}^W(\lambda))~|~dim_{\C}(\phi(\mathbb{T}_{m,n}^W(\lambda)).v) = k \} \]
Sending a point $(v,\phi)$ to the point in the Grassmannian $\wis{Gr}(k,n)$ determined by the $k$-dimensional subspace $V=\phi(\mathbb{T}_{m,n}^W(\lambda)).v \subset \C^n$ we get a Zariski-locally trivial fibration as in \cite{Morrison}
\[
\wis{X}_{k,n,\lambda} \rOnto \wis{Gr}(k,n) \]
To compute the fiber over $V$ we choose a basis of $\C^n$ such that the first $k$ base vectors span $V=\phi(\mathbb{T}_{m,n}^W(\lambda)).v$. With respect to this basis, the images of the generic matrices $X_i$ all are of the following block-form
\[
\phi(X_i) = \begin{bmatrix} \phi_k(X_i) & \sigma(X_i) \\ 0 & \phi_{n-k}(X_i) \end{bmatrix} \quad \text{with} \quad \begin{cases} \phi_k(X_i) \in M_k(\C) \\ \phi_{n-k}(X_i) \in M_{n-k}(\C) \\ \sigma(X_i) \in M_{k \times n-k}(\C) \end{cases} \]
Using this matrix form it is easy to see that
\[
Tr(\phi(W(X_1,\hdots,X_m)))= Tr(\phi_k(W(X_1,\hdots,X_m))) + Tr(\phi_{n-k}(W(X_1,\hdots,X_m))) \]
That is, if $\phi_k \in \wis{trep}_k(\mathbb{T}_{m,k}^W(\mu))$ then $\phi_{n-k} \in \wis{trep}(\mathbb{T}_{m,n-k}^W(\lambda-\mu))$ and moreover we have that $(v,\phi_k) \in \wis{X}_{k,k,\mu}$. Further, the $m$ matrices $\sigma(X_i) \in M_{k \times n-k}(\C)$ can be taken arbitrarily. Rephrasing this in terms of motives we get
\[
[ \wis{X}_{k,n,\lambda} ] = \mathbb{L}^{mk(n-k)} [ \wis{Gr}(k,n) ] \sum_{\mu \in \C} [ \wis{X}_{k,k,\mu} ] [ \wis{trep}_{n-k}(\mathbb{T}_{m,n-k}(\lambda - \mu)) ] \]
Here the summation $\sum_{\mu \in \C}$ is shorthand for distinguishing between zero and non-zero values of $\mu$ and $\lambda-\mu$. For example, with $\sum_{\mu \in \C} [ \wis{X}_{k,k,\mu} ] [ \wis{trep}_{n-k}(\mathbb{T}_{m,n-k}(\lambda - \mu)) ]$ we mean for $\lambda \not= 0$
\[
(\mathbb{L}-2)[ \wis{X}_{k,k,1} ] [ \wis{trep}_{n-k}(\mathbb{T}_{m,n-k}(1)) ]+[ \wis{X}_{k,k,0} ] [ \wis{trep}_{n-k}(\mathbb{T}_{m,n-k}(\lambda)) ]+[ \wis{X}_{k,k,\lambda} ] [ \wis{trep}_{n-k}(\mathbb{T}_{m,n-k}(0)) ] \]
and when $\lambda=0$
\[
(\mathbb{L}-1)[ \wis{X}_{k,k,1} ] [ \wis{trep}_{n-k}(\mathbb{T}_{m,n-k}(1)) ]+[ \wis{X}_{k,k,0} ] [ \wis{trep}_{n-k}(\mathbb{T}_{m,n-k}(0)) ]. \]
Further, we have
\[
[ \wis{Gr}(k,n) ] = \frac{[ GL_n ]}{[GL_k ] [GL_{n-k} ] \mathbb{L}^{k(n-k)}} \quad \text{and} \quad [\wis{X}_{k,k,\mu}] = [GL_k] [\wis{BS}_{m,k}^W(\mu)] \]
and substituting this in the above, and recalling that $\mathbb{M}_{m,l}^W(\alpha) = \wis{trep}_l(\mathbb{T}_{m,l}^W(\alpha))$, we get
\begin{proposition} \label{formula} With notations as before we have for all $0 < k < n$ and all $\lambda \in \C$ that
\[
[ \wis{X}_{k,n,\lambda} ] = [GL_n] \mathbb{L}^{(m-1)k(n-k)} \sum_{\mu \in \C} [ \wis{BS}_{m,k}^W(\mu) ] \frac{[\mathbb{M}_{m,n-k}^W(\lambda-\mu) ]}{[GL_{n-k} ]} \]
Further, we have
\[
[ \wis{X}_{0,n,\lambda} ] = [ \mathbb{M}_{m,n}^W(\lambda) ] \quad \text{and} \quad [ \wis{X}_{n,n,\lambda} ] = [GL_n] [ \wis{BS}_{m,n}^W(\lambda) ] \]
\end{proposition}
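For instance, for $n=2$ only the stratum $k=1$ occurs between the two extreme cases, and the formula reads
\[
[ \wis{X}_{1,2,\lambda} ] = [GL_2] \mathbb{L}^{m-1} \sum_{\mu \in \C} [ \wis{BS}_{m,1}^W(\mu) ] \frac{[ \mathbb{M}_{m,1}^W(\lambda-\mu) ]}{\mathbb{L}-1} \]
which is the shape we will exploit in the case $n=2$ below.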
We can also express this in terms of generating series. Equip the commutative ring $\mathcal{M}_{\C}[[t]]$ with the modified product
\[
t^a \ast t^b = \mathbb{L}^{(m-1)ab} t^{a+b} \]
and consider the following two generating series for all $\frac{1}{2} \not= \lambda \in \C$
\[
\wis{B}_{\lambda}(t) = \sum_{n=1}^{\infty} [ \wis{BS}_{m,n}^W(\lambda) ] t^n \quad \text{and} \quad \wis{R}_{\lambda}(t) = \sum_{n=1}^{\infty} \frac{[ \mathbb{M}_{m,n}^W(\lambda) ]}{[ GL_n ]} t^n \]
\[
\wis{B}_{\frac{1}{2}}(t) = \sum_{n=0}^{\infty} [ \wis{BS}_{m,n}^W(\frac{1}{2}) ] t^n \quad \text{and} \quad \wis{R}_{\frac{1}{2}}(t) = \sum_{n=0}^{\infty} \frac{[ \mathbb{M}_{m,n}^W(\frac{1}{2}) ]}{[ GL_n ]} t^n \]
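For instance, $t \ast t = \mathbb{L}^{m-1} t^2$, so the $\ast$-product of two such series automatically keeps track of the twist $\mathbb{L}^{(m-1)k(n-k)}$ appearing in Proposition~\ref{formula}.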
\begin{proposition} With notations as before we have the functional equation
\[
1+ \wis{R}_{1}(\mathbb{L} t) = \sum_{\mu} \wis{B}_{\mu}(t) \ast \wis{R}_{1-\mu}(t) \]
\end{proposition}
\begin{proof} The disjoint union of the strata of the dimension function on $\C^n \times \wis{trep}_n(\mathbb{T}_{m,n}^W(\lambda))$ gives
\[
\C^n \times \mathbb{M}_{m,n}^W(\lambda) = \wis{X}_{0,n,\lambda} \sqcup \wis{X}_{1,n,\lambda} \sqcup \hdots \sqcup \wis{X}_{n,n,\lambda} \]
Rephrasing this in terms of motives gives
\[
\mathbb{L}^n [ \mathbb{M}_{m,n}^W(\lambda) ] = [ \mathbb{M}_{m,n}^W(\lambda)] + \sum_{k=1}^{n-1} [ \wis{X}_{k,n,\lambda} ] + [GL_n][\wis{BS}_{m,n}^W(\lambda)] \]
and substituting the formula of proposition~\ref{formula} into this we get
\[
\frac{[\mathbb{M}_{m,n}^W(\lambda)]}{[GL_n]} \mathbb{L}^n t^n = \frac{[\mathbb{M}_{m,n}^W(\lambda)]}{[GL_n]} t^n + \]
\[
\sum_{k=1}^{n-1} \sum_{\mu \in \C} ([\wis{BS}_{m,k}^W(\mu)] t^k) \ast ( \frac{[ \mathbb{M}_{m,n-k}^W(\lambda-\mu) ]}{[ GL_{n-k} ]} t^{n-k}) + [\wis{BS}_{m,n}^W(\lambda) ] t^n \]
Now take $\lambda = 1$. Then on the left hand side we have the $n$-th term of the series $1+ \wis{R}_{1}(\mathbb{L} t)$ and on the right hand side we have the $n$-th factor of the series $\sum_{\mu} \wis{B}_{\mu}(t) \ast \wis{R}_{1 - \mu}(t)$. The outer two terms arise from the product $\wis{B}_{\frac{1}{2}}(t) \ast \wis{R}_{\frac{1}{2}}(t)$, using that $W$ is homogeneous, whence for all $\lambda \not= 0$
\[
\wis{BS}_{m,n}^W(\lambda) \simeq \wis{BS}_{m,n}^W(1) \quad \text{and} \quad \mathbb{M}_{m,n}^W(\lambda) \simeq \mathbb{M}_{m,n}^W(1) \]
This finishes the proof.
\end{proof}
These formulas allow us to determine the motive $[ \mathbb{M}_{m,n}^W(\lambda) ]$ inductively from lower dimensional contributions and from the knowledge of the motive of the Brauer-Severi scheme $[ \wis{BS}_{m,n}^W(\lambda) ]$.
\begin{proposition} \label{induction} For all $n$ we have the following inductive description of the motives in the Donaldson-Thomas series
\[
(\mathbb{L}^n-1) \frac{[\mathbb{M}^W_{m,n}(0)]-[\mathbb{M}^W_{m,n}(1)]}{[GL_n]} \]
is equal to
\[
[\wis{BS}^W_{m,n}(0)]-[\wis{BS}_{m,n}^W(1)] + \sum_{k=1}^{n-1} \frac{\mathbb{L}^{(m-1)k(n-k)}}{[GL_{n-k}]} ([\wis{BS}^W_{m,k}(0)]-[\wis{BS}^W_{m,k}(1)])([\mathbb{M}^W_{m,k}(0)]-[\mathbb{M}^W_{m,k}(1)]) \]
\end{proposition}
\begin{proof} Follows from Proposition~\ref{formula} and the fact that for all $\mu \not= 0$ we have that $[\mathbb{M}_{m,k}^W(\mu)]=[\mathbb{M}_{m,k}^W(1)]$ and $[\wis{BS}_{m,k}^W(\mu)]=[\wis{BS}_{m,k}^W(1)]$.
\end{proof}
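Note that for $n=1$ the sum on the right hand side is empty and $[GL_1]=\mathbb{L}-1$, so the statement reduces to the tautology
\[
[\mathbb{M}^W_{m,1}(0)]-[\mathbb{M}^W_{m,1}(1)] = [\wis{BS}^W_{m,1}(0)]-[\wis{BS}^W_{m,1}(1)] \]
as it must, since $\wis{BS}_{m,1}^W(\lambda) = \mathbb{M}_{m,1}^W(\lambda)$ for all $\lambda$.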
\section{Deformations of affine $3$-space}
The commutative polynomial ring $\C[x,y,z]$ is the Jacobi algebra associated with the superpotential $W=XYZ-XZY$. For this reason we restrict in the rest of this paper to cases where the superpotential $W$ is a cubic necklace in three non-commuting variables $X,Y$ and $Z$, that is, $m=3$ from now on. As even in this case the calculations quickly become unmanageable, we restrict to $n \leq 2$; that is, we will only compute the coefficients of $t$ and $t^2$ in $U_W(t)$. We will have to compute the motives of fibers of the Chern-Simons functional
\[
M_2(\C) \oplus M_2(\C) \oplus M_2(\C) \rTo^{Tr(W)} \C \]
so we want to express $Tr(W)$ as a function in the variables of the three generic $2 \times 2$ matrices
\[
X = \begin{bmatrix} n & p \\ q & r \end{bmatrix},~Y=\begin{bmatrix} s & t \\ u & v \end{bmatrix},~Z= \begin{bmatrix} w & x \\ y & z \end{bmatrix}. \]
We will call $\{ n,r,s,v,w,z \}$ (resp. $\{ p,t,x \}$ and $\{ q,u,y \}$) the diagonal (resp. upper and lower) variables. We claim that
\[
Tr(W) = C + Q_q.q + Q_u.u + Q_y.y \]
where $C$ is a cubic in the diagonal variables and $Q_q,Q_u$ and $Q_y$ are bilinear in the diagonal and upper variables, that is, there are linear terms $L_{ab}$ in the diagonal variables such that
\[
\begin{cases}
Q_q = L_{qp}.p+L_{qt}.t+L_{qx}.x \\
Q_u = L_{up}.p+L_{ut}.t+L_{ux}.x \\
Q_y = L_{yp}.p+L_{yt}.t+L_{yx}.x
\end{cases}
\]
This follows from considering the two diagonal entries of a $2 \times 2$ matrix as the vertices of a quiver and the variables as arrows connecting these vertices as follows
\[
\xymatrix{\vtx{} \ar@(u,ul)_n \ar@(ul,dl)_s \ar@(dl,d)_w \ar@/^6ex/[rr]^q \ar@/^4ex/[rr]^u \ar@/^2ex/[rr]^y & & \vtx{} \ar@/^6ex/[ll]_p \ar@/^4ex/[ll]_t \ar@/^2ex/[ll]_x \ar@(u,ur)^r \ar@(ur,dr)^v \ar@(dr,d)^z} \]
and observing that only an oriented path of length $3$ starting and ending in the same vertex can contribute something non-zero to $Tr(W)$. Clearly these linear and cubic terms are fully determined by $W$. If we take
\[
W = \alpha X^3 + \beta Y^3 + \gamma Z^3 + \delta XYZ + \epsilon XZY \]
then we have $C = W(n,s,w)+W(r,v,z)$ and
\[
\begin{cases}
L_{qp} &= 3 \alpha(n+r) \\
L_{qt} &= \epsilon w + \delta z \\
L_{qx} &= \delta s + \epsilon v
\end{cases} \quad
\begin{cases}
L_{up} &= \delta w + \epsilon z \\
L_{ut} &= 3 \beta(s+v) \\
L_{ux} &= \epsilon n + \delta r \\
\end{cases} \quad
\begin{cases}
L_{yp} &= \epsilon s + \delta v \\
L_{yt} &= \delta n + \epsilon r \\
L_{yx} &= 3 \gamma(w+z) \\
\end{cases}
\]
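As a consistency check, a direct computation with the generic $2 \times 2$ matrix $X$ gives
\[
Tr(X^3) = n^3 + r^3 + 3pq(n+r) \]
so the contribution of $\alpha X^3$ to the coefficient $Q_q$ of $q$ is indeed $3 \alpha(n+r) p = L_{qp}.p$, in accordance with the table above.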
By using the cellular decomposition of the Brauer-Severi scheme of $\mathbb{T}_{3,2}$ one can simplify the computations further by specializing certain variables. From \cite{ReinekeBS} we deduce that $\wis{BS}_2(\mathbb{T}_{3,2})$ has a cellular decomposition as $\mathbb{A}^{10} \sqcup \mathbb{A}^9 \sqcup \mathbb{A}^8$ where the three cells have representatives
\[
\begin{cases}
\wis{cell}_1~:~v = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \quad X = \begin{bmatrix} 0 & p \\ 1 & r \end{bmatrix}, \quad
Y = \begin{bmatrix} s & t \\ u & v \end{bmatrix}, \quad
Z = \begin{bmatrix} w & x \\ y & z \end{bmatrix} \\
\\
\wis{cell}_2~:~v = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \quad X = \begin{bmatrix} n & p \\ 0 & r \end{bmatrix}, \quad
Y = \begin{bmatrix} 0 & t \\ 1 & v \end{bmatrix}, \quad
Z = \begin{bmatrix} w & x \\ y & z \end{bmatrix} \\
\\
\wis{cell}_3~:~v = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \quad X = \begin{bmatrix} n & p \\ 0 & r \end{bmatrix}, \quad
Y = \begin{bmatrix} s & t \\ 0 & v \end{bmatrix}, \quad
Z = \begin{bmatrix} 0 & x \\ 1 & z \end{bmatrix}
\end{cases}
\]
It follows that $\wis{BS}_{3,2}^W(1)$ decomposes as $\mathbf{S_1} \sqcup \mathbf{S_2} \sqcup \mathbf{S_3}$ where the subschemes $\mathbf{S_i}$ of $\mathbb{A}^{11-i}$ have defining equations
\[
\begin{cases}
\mathbf{S_1}~:~(C + Q_u.u + Q_y.y + Q_q)|_{n=0} = 1 \\
\mathbf{S_2}~:~(C + Q_y.y + Q_u)|_{s=0} = 1 \\
\mathbf{S_3}~:~(C + Q_y)|_{w=0} = 1
\end{cases}
\]
These defining equations follow from the shape of the representatives: in $\wis{cell}_3$, for instance, we have $q=u=0$, $y=1$ and $w=0$, so that $Tr(W)$ reduces to $(C+Q_y)|_{w=0}$. Note that in using the cellular decomposition, we set a variable equal to $1$. So, in order to retain a homogeneous form we let $\mathbb{G}_m$ act on $n,s,w,r,v,z$ with weight one, on $q,u,y$ with weight two and on $x,t,p$ with weight zero. Thus, we need a slight extension of \cite[Thm. 1.3]{Davison} so as to allow $\mathbb{G}_m$ to act with weight two on certain variables.
From now on we will assume that $W$ is as above with $\delta=1$ and $\epsilon \not= 0$. In this generality we can prove:
\begin{proposition} \label{S3} With assumptions as above
\[
[ \mathbf{S_3} ] = \begin{cases}
\mathbb{L}^7-\mathbb{L}^4+\mathbb{L}^3 [ W(n,s,0)+W(-\epsilon^{-1} n,- \epsilon s,0) = 1]_{\mathbb{A}^2} & \text{if $\gamma \not= 0$} \\
\mathbb{L}^7 - \mathbb{L}^5 + \mathbb{L}^3 [W(n,s,0) + W(-\epsilon^{-1} n,-\epsilon s,z) = 1]_{\mathbb{A}^3} & \text{if $\gamma=0$} \end{cases}
\]
\end{proposition}
\begin{proof}
$\mathbf{S_3}$ : The defining equation in $\mathbb{A}^8$ is equal to
\[
W(n,s,0)+W(r,v,z)+(\epsilon s +v)p+(n+\epsilon r)t+3 \gamma z x = 1 \]
If $\epsilon s + v \not= 0$ we can eliminate $p$ and get a contribution $\mathbb{L}^5(\mathbb{L}^2-\mathbb{L})$. If $v = - \epsilon s$ but $n + \epsilon r \not= 0$ we can eliminate $t$ and get a term $\mathbb{L}^4(\mathbb{L}^2-\mathbb{L})$. From now on we may assume that $v = -\epsilon s$ and $r= - \epsilon^{-1}n$.
\noindent
$\gamma \not = 0$ : Assume first that $z \not= 0$ then we can eliminate $x$ and get a contribution $\mathbb{L}^4(\mathbb{L}-1)$. If $z=0$ then we get a term
\[
\mathbb{L}^3 [ W(n,s,0)+W(-\epsilon^{-1} n,- \epsilon s,0) = 1]_{\mathbb{A}^2} \]
\noindent
$\gamma = 0$ : Then we have a remaining contribution
\[
\mathbb{L}^3 [W(n,s,0) + W(-\epsilon^{-1} n,-\epsilon s,z) = 1]_{\mathbb{A}^3} \]
Summing up all contributions gives the result.
\end{proof}
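For instance, for the superpotentials $W = XYZ + \epsilon XZY$ (that is, $\alpha=\beta=\gamma=0$ and $\delta=1$) with $\epsilon \neq -1$, one has $W(n,s,0)=0$ and $W(-\epsilon^{-1} n,-\epsilon s,z) = (1+\epsilon)nsz$, so that
\[
[ \mathbf{S_3} ] = \mathbb{L}^7 - \mathbb{L}^5 + \mathbb{L}^3 (\mathbb{L}-1)^2 \]
This case will be relevant for the superpotential $W=XYZ+XZY$ considered below.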
Calculating the motives of $\mathbf{S_2}$ and $\mathbf{S_1}$ in this generality quickly leads to a myriad of subcases to consider. For this reason we will defer the calculations in the cases of interest to the next sections. Specializing Proposition~\ref{induction} to the case of $n=2$ we get
\begin{proposition} \label{case2} For $n=2$ we have that
\[
(\mathbb{L}^2-1) \frac{[\mathbb{M}^W_{3,2}(0)]-[\mathbb{M}^W_{3,2}(1)]}{[GL_2]} \]
is equal to
\[
[\wis{BS}^W_{3,2}(0)]-[\wis{BS}^W_{3,2}(1)]+\frac{\mathbb{L}^2}{(\mathbb{L}-1)}([\mathbb{M}^W_{3,1}(0)]-[\mathbb{M}^W_{3,1}(1)])^2 \]
\end{proposition}
\begin{proof}
The result follows from Proposition~\ref{induction} and from the fact that $\wis{BS}_{3,1}^W(1)=\mathbb{M}_{3,1}^W(1)$ and $\wis{BS}_{3,1}^W(0)=\mathbb{M}_{3,1}^W(0)$.
\end{proof}
\section{Quantum affine three-space}
For $q \in \C^*$ consider the superpotential $W_q = XYZ-qXZY$, then the associated algebra $R_{W_q}$ is the quantum affine $3$-space
\[
R_{W_q} = \frac{\C \langle X,Y,Z \rangle}{(XY-qYX,ZX-qXZ,YZ-qZY)} \]
It is well-known that $R_{W_q}$ has finite dimensional simple representations of dimension $n$ if and only if $q$ is a primitive $n$-th root of unity. For other values of $q$ the only finite dimensional simples are $1$-dimensional and parametrized by $XYZ=0$ in $\mathbb{A}^3$. In this case we have
\[
\begin{cases}
[\mathbb{M}_{3,1}^{W_q}(1)]=[(1-q)XYZ=1]_{\mathbb{A}^3} = (\mathbb{L}-1)^2 \\
[\mathbb{M}_{3,1}^{W_q}(0)]=[(1-q)XYZ=0]_{\mathbb{A}^3} = 3 \mathbb{L}^2 - 3 \mathbb{L} + 1
\end{cases}
\]
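Both classes are elementary to verify: in the first equation $X$ and $Y$ can be chosen freely in $\mathbb{G}_m$ and $Z$ is then determined, giving $(\mathbb{L}-1)^2$, while the second follows by inclusion-exclusion on the three coordinate hyperplanes:
\[
[XYZ=0]_{\mathbb{A}^3} = 3[\mathbb{A}^2] - 3 [\mathbb{A}^1] + [\mathbb{A}^0] = 3 \mathbb{L}^2 - 3 \mathbb{L} + 1 \]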
That is, the coefficient of $t$ in $U_{W_q}(t)$ is equal to
\[
\mathbb{L}^{-1} \frac{[\mathbb{M}_{3,1}^{W_q}(0)] - [\mathbb{M}_{3,1}^{W_q}(1)]}{[GL_1]} = \mathbb{L}^{-1} \frac{2 \mathbb{L}^2-\mathbb{L}}{\mathbb{L}-1} = \frac{2 \mathbb{L}-1}{\mathbb{L}-1} \]
In \cite[Thm. 3.1]{Cazz} it is shown that in case $q$ is not a root of unity, then
\[
U_{W_q}(t) = \wis{Exp}(\frac{2 \mathbb{L}-1}{\mathbb{L}-1} \frac{t}{1-t}) \]
and if $q$ is a primitive $n$-th root of unity then
\[
U_{W_q}(t) = \wis{Exp}(\frac{2\mathbb{L}-1}{\mathbb{L}-1} \frac{t}{1-t} + (\mathbb{L}-1) \frac{t^n}{1-t^n}) \]
In \cite[3.4.1]{Cazz} a rather complicated attempt is made to explain the term $\mathbb{L}-1$, in case $q$ is an $n$-th root of unity, in terms of certain simple $n$-dimensional representations of $R_{W_q}$. The geometry of finite dimensional representations of the algebra $R_{W_q}$ is studied extensively in \cite{Kevin2}; note that there are additional simple $n$-dimensional representations not taken into account in \cite[3.4.1]{Cazz}.
Perhaps a more conceptual explanation of the two terms in the exponential expression of $U_{W_q}(t)$ in case $q$ is an $n$-th root of unity is as follows. As $W_q$ admits a cut $W_q=X(YZ-qZY)$ it follows from \cite{Morrison} that for all dimensions $m$ we have
\[
[\mathbb{M}_{3,m}^{W_q}(0)]-[\mathbb{M}_{3,m}^{W_q}(1)] = \mathbb{L}^{m^2} [ \wis{rep}_m(\C_q[Y,Z])] \]
where $\C_q[Y,Z]=\C \langle Y,Z \rangle/(YZ-qZY)$ is the quantum plane. If $q$ is an $n$-th root of unity the only finite dimensional simple representations of $\C_q[Y,Z]$ are of dimension $1$ or $n$. The $1$-dimensional simples are parametrized by $YZ=0$ in $\mathbb{A}^2$, which has motive $2 \mathbb{L}-1$; as all of them have stabilizer group $GL_1$, this explains the term $(2 \mathbb{L}-1)/(\mathbb{L}-1)$. The center of $\C_q[Y,Z]$ is equal to $\C[Y^n,Z^n]$ and the corresponding variety $\mathbb{A}^2=\wis{Max}(\C[Y^n,Z^n])$ parametrizes $n$-dimensional semi-simple representations. The $n$-dimensional simples correspond to the Zariski open set $\mathbb{A}^2 - (Y^nZ^n=0)$, which has motive $(\mathbb{L}-1)^2$. Again, as all of these have stabilizer subgroup $GL_1$, this explains the term
\[
\mathbb{L}-1 = \frac{(\mathbb{L}-1)^2}{[GL_1]} \]
As the superpotential allows a cut in this case, we can use the full strength of \cite{Behrend} and obtain $[\mathbb{M}^W_{3,2}(0)]$ from $[\mathbb{M}^W_{3,2}(1)]$ via the equality \[
\mathbb{L}^{12} = [\mathbb{M}^W_{3,2}(0)] + (\mathbb{L}-1)[\mathbb{M}^W_{3,2}(1)] \]
To illustrate the inductive procedure using Brauer-Severi motives we will consider the case $n=2$, that is $q=-1$ with superpotential $W=XYZ+XZY$. In this case we have from \cite[Thm. 3.1]{Cazz} that
\[
U_W(t) = \wis{Exp}(\frac{2 \mathbb{L}-1}{\mathbb{L}-1} \frac{t}{1-t}+(\mathbb{L}-1) \frac{t^2}{1-t^2}) \]
The basic rules of the plethystic exponential on $\mathcal{M}_{\C}[[t]]$ are
\[
\wis{Exp}(\sum_{n \geq 1} [A_n]t^n) = \prod_{n \geq 1} (1-t^n)^{-[A_n]} \quad \text{where} \quad (1-t)^{-\mathbb{L}^m} = (1-\mathbb{L}^m t)^{-1} \]
and one has to expand all infinite products in $t$ and $\mathbb{L}^{-1}$. One starts by rewriting $U_W(t)$ as a product
\[
U_W(t) = \wis{Exp}(\frac{t}{1-t}) \wis{Exp}(\frac{\mathbb{L}}{\mathbb{L}-1}\frac{t}{1-t}) \wis{Exp}(\frac{\mathbb{L} t^2}{1-t^2}) \wis{Exp}(\frac{t^2}{1-t^2})^{-1} \]
where each of the four terms is an infinite product
\[
\wis{Exp}(\frac{t}{1-t}) = \prod_{m \geq 1}(1-t^m)^{-1}, \qquad \wis{Exp}(\frac{\mathbb{L}}{\mathbb{L}-1} \frac{t}{1-t}) = \prod_{m \geq 1} \prod_{j \geq 0} (1 - \mathbb{L}^{-j}t^m)^{-1} \]
\[
\wis{Exp}(\frac{\mathbb{L} t^2}{1-t^2}) = \prod_{m \geq 1} (1 - \mathbb{L}t^{2m})^{-1}, \qquad \wis{Exp}(\frac{t^2}{1-t^2})^{-1} = \prod_{m \geq 1} (1-t^{2m}) \]
That is, we have to work out the infinite product
\[
\prod_{m \geq 1} ((1-t^{2m-1})^{-1} (1 - \mathbb{L} t^{2m})^{-1}) \prod_{m \geq 1} \prod_{j \geq 0} (1- \mathbb{L}^{-j} t^m)^{-1} \]
as a power series in $t$, at least up to quadratic terms. One obtains
\[
U_W(t) = 1 + \frac{2 \mathbb{L}-1}{\mathbb{L}-1} t + \frac{\mathbb{L}^4+3\mathbb{L}^3-2 \mathbb{L}^2 - 2 \mathbb{L}+1}{(\mathbb{L}^2-1)(\mathbb{L}-1)} t^2 + \hdots \]
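As a consistency check, the coefficient of $t$ in this infinite product comes from the factors $(1-t)^{-1}$ and $(1-\mathbb{L}^{-j}t)^{-1}$ with $j \geq 0$, and equals
\[
1 + \sum_{j \geq 0} \mathbb{L}^{-j} = 1 + \frac{\mathbb{L}}{\mathbb{L}-1} = \frac{2 \mathbb{L}-1}{\mathbb{L}-1} \]
in agreement with the linear term computed before.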
That is, if $W=XYZ+XZY$ one must have the relation:
\[
[\mathbb{M}_{3,2}^W(0)]-[\mathbb{M}_{3,2}^W(1)] = \mathbb{L}^5(\mathbb{L}^4+3 \mathbb{L}^3-2 \mathbb{L}^2- 2\mathbb{L}+1) \]
\subsection{Dimensional reduction}
It follows from the dimensional reduction argument of \cite{Morrison} that
\[
[\mathbb{M}_{3,2}^W(0)] - [ \mathbb{M}_{3,2}^W(1) ] = \mathbb{L}^4 [ \wis{rep}_2~\C_{-1}[X,Y] ] \]
where $\C_{-1}[X,Y]$ is the quantum plane at $q=-1$, that is, $\C \langle X,Y \rangle / (XY+YX)$.
The matrix equation
\[
\begin{bmatrix} a & b \\ c & d \end{bmatrix} \begin{bmatrix} e & f \\ g & h \end{bmatrix} + \begin{bmatrix} e & f \\ g & h \end{bmatrix} \begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} \]
gives us the following system of equations
\[
\begin{cases}
2 ae + bg + fc = 0 \\
2 hd + bg + fc = 0 \\
f (a+d) + b (e+h) = 0 \\
c (h+e) + g (a+d) = 0
\end{cases}
\]
where the first two are equivalent to $ae=hd$ and $2ae+bg+fc=0$. Changing variables
\[
x=\frac{1}{2}(a+d), \quad y = \frac{1}{2}(a-d), \quad u = \frac{1}{2}(e+h), \quad v= \frac{1}{2}(e-h) \]
the equivalent system then becomes (in the variables $b,c,f,g,u,v,x,y$)
\[
\begin{cases}
xv+yu = 0 \\
xu+yv + bg+fc = 0 \\
fx+bu = 0 \\
cu+gx = 0
\end{cases}
\]
\begin{proposition} The motive of $R_2= \wis{rep}_2~\C_{-1}[x,y]$ is equal to
\[
[ R_2 ] = \mathbb{L}^5 + 3 \mathbb{L}^4 - 2 \mathbb{L}^3 - 2 \mathbb{L}^2 + \mathbb{L} \]
\end{proposition}
\begin{proof}
If $x \not= 0$ we obtain
\[
v=-\frac{yu}{x}, \quad f=-\frac{bu}{x}, \quad g=-\frac{cu}{x} \]
and substituting these in the remaining second equation we get the equation(s)
\[
u(y^2-x^2+2bc)=0 \quad \text{and} \quad x \not= 0 \]
If $u \not= 0$ then $y^2-x^2+2bc=0$. If in addition $b \not= 0$ then $c = \tfrac{x^2-y^2}{2b}$ and $y$ is free. As $x,u$ and $b$ are non-zero this gives a contribution $(\mathbb{L}-1)^3 \mathbb{L}$.
If $b=0$ then $c$ is free and $x^2-y^2=0$, so $y = \pm x$. This together with $x \not= 0 \not= u$ leads to a contribution of
$2 \mathbb{L}(\mathbb{L}-1)^2$. If $u = 0$ then $y,b$ and $c$ are free variables, and together with $x \not= 0$ this gives
$(\mathbb{L}-1) \mathbb{L}^3$.
\vskip 3mm
\noindent
Remains the case that $x = 0$. Then the system reduces to
\[
\begin{cases}
yu = 0 \\
yv+bg+fc = 0 \\
bu = 0 \\
cu = 0
\end{cases}
\]
If $u \not= 0$ then $y=0$, $b=0$ and $c=0$, leaving $f,g,v$ free. This gives
$(\mathbb{L}-1) \mathbb{L}^3$.
If $u = 0$ then the only remaining equation is $yv+bg+fc=0$. That is, we get the cone in $\mathbb{A}^6$ of the Grassmannian $Gr(2,4)$ in $\mathbb{P}^5$. As the motive of $Gr(2,4)$ is
\[
[Gr(2,4)] = (\mathbb{L}^2+1)(\mathbb{L}^2+\mathbb{L}+1) \]
we get a contribution of
\[
(\mathbb{L}-1)(\mathbb{L}^2+1)(\mathbb{L}^2+\mathbb{L}+1) + 1 \]
Summing up all contributions gives the desired result.
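Explicitly, the five contributions add up to
\[
(\mathbb{L}-1)^3 \mathbb{L} + 2\mathbb{L}(\mathbb{L}-1)^2 + 2(\mathbb{L}-1)\mathbb{L}^3 + (\mathbb{L}-1)(\mathbb{L}^2+1)(\mathbb{L}^2+\mathbb{L}+1) + 1 = \mathbb{L}^5 + 3 \mathbb{L}^4 - 2 \mathbb{L}^3 - 2 \mathbb{L}^2 + \mathbb{L} \]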
\end{proof}
\subsection{Brauer-Severi motives} In the three cells of the Brauer-Severi scheme of $\mathbb{T}_{3,2}$ of dimensions resp. $10,9$ and $8$ the superpotential $Tr(XYZ+XZY)$ induces the equations:
\[
\begin{cases}
\mathbf{S_1}~:~2rvz+puz+pvy+rty+psy+rux+puw+tz+vx+sx+tw=1 \\
\mathbf{S_2}~:~2rvz+pvy+rty+nty+pz+rx+nx+pw=1 \\
\mathbf{S_3}~:~2rvz+pv+rt+nt+ps=1
\end{cases}
\]
\begin{proposition} With notations as above, the Brauer-Severi scheme of $\mathbb{T}_{3,2}^W(1)$ has a decomposition
\[
\mathbf{BS}_{3,2}^W(1) = \mathbf{S_1} \sqcup \mathbf{S_2} \sqcup \mathbf{S_3} \]
where the schemes $\mathbf{S_i}$ have motives
\[
\begin{cases}
[ \mathbf{S_1} ] = \mathbb{L}^9-\mathbb{L}^6-2\mathbb{L}^5+3\mathbb{L}^4-\mathbb{L}^3 \\
[ \mathbf{S_2} ] = \mathbb{L}^8- 2 \mathbb{L}^5 + \mathbb{L}^4 \\
[ \mathbf{S_3} ] = \mathbb{L}^7 - 2 \mathbb{L}^4 + \mathbb{L}^3 \\
\end{cases}
\]
Therefore, the Brauer-Severi scheme has motive
\[
[ \mathbf{BS}_{3,2}^W(1) ] = \mathbb{L}^9+\mathbb{L}^8+\mathbb{L}^7-\mathbb{L}^6-4\mathbb{L}^5+2\mathbb{L}^4 \]
\end{proposition}
\begin{proof} $\mathbf{S_3}$ : From Proposition~\ref{S3} we obtain
\[
[ \mathbf{S_3} ] = \mathbb{L}^7 - \mathbb{L}^5 + \mathbb{L}^3[W(n,s,0)+W(-n,-s,z)=1]_{\mathbb{A}^3} \]
and as $W(n,s,0)+W(-n,-s,z)=2nsz$ we get $\mathbb{L}^7 - \mathbb{L}^5 + \mathbb{L}^3(\mathbb{L}-1)^2$.
\vskip 3mm
\noindent
$\mathbf{S_2}$ : The defining equation is
\[
2 rvz + y (pv + (r+n)t) + p(z+w) + x(r+n) = 1 \]
If $r+n \not= 0$ we can eliminate $x$ and have a contribution $\mathbb{L}^6 (\mathbb{L}^2-\mathbb{L})$. If $r+n=0$ we get the equation
\[
2 rvz + p (yv+z+w) = 1 \]
If $yv+z+w \not= 0$ we can eliminate $p$ and get a term $\mathbb{L}^3(\mathbb{L}^4-\mathbb{L}^3)$. If $r+n=0$ and $yv+z+w=0$ we have $2rvz = 1$ so a term $\mathbb{L}^4(\mathbb{L}-1)^2$. Summing up gives us
\[
[ \mathbf{S}_2 ] = \mathbb{L}^4(\mathbb{L}-1)(\mathbb{L}^3+\mathbb{L}^2+\mathbb{L}-1) = \mathbb{L}^8- 2 \mathbb{L}^5 + \mathbb{L}^4 \]
\vskip 3mm
\noindent
$\mathbf{S_1}$ : The defining equation is
\[
2 rvz + p(u(z+w)+y(v+s))+t(z+w+ry)+x(v+s+ru) = 1 \]
If $v+s+ru \not= 0$ we can eliminate $x$ and get $\mathbb{L}^5(\mathbb{L}^4-\mathbb{L}^3)$. If $v+s+ru=0$ and $z+w+ry \not= 0$ we can eliminate $t$ and have a term $\mathbb{L}^4(\mathbb{L}^4-\mathbb{L}^3)$. If $v+s+ru=0$ and $z+w+ry=0$, the equation becomes (in $\mathbb{A}^8$, with $t,x$ free variables)
\[
2r(vz-puy) = 1 \]
giving a term $\mathbb{L}^2(\mathbb{L}^5-[ vz=puy ])$. To compute $[ vz=puy ]_{\mathbb{A}^5}$ assume first that $v \not= 0$, then this gives $\mathbb{L}^3(\mathbb{L}-1)$ and if $v=0$ we get $\mathbb{L}(3 \mathbb{L}^2-3 \mathbb{L} + 1)$. That is, $[vz=puy]_{\mathbb{A}^5}=\mathbb{L}^4+2 \mathbb{L}^3-3 \mathbb{L}^2+\mathbb{L}$. In total this gives us
\[
[ \mathbf{S}_1 ] = \mathbb{L}^3(\mathbb{L}-1)(\mathbb{L}^5+\mathbb{L}^4+\mathbb{L}^3-2 \mathbb{L}+1) = \mathbb{L}^9-\mathbb{L}^6-2\mathbb{L}^5+3\mathbb{L}^4-\mathbb{L}^3 \]
finishing the proof.
\end{proof}
\begin{proposition} From the Brauer-Severi motive we obtain
\[
\begin{cases}
[ \mathbb{M}_{3,2}^W(1) ] &= \mathbb{L}^{11}-\mathbb{L}^8-3\mathbb{L}^7+2\mathbb{L}^6+2\mathbb{L}^5-\mathbb{L}^4 \\
[ \mathbb{M}_{3,2}^W(0) ] &= \mathbb{L}^{11} + \mathbb{L}^9 + 2 \mathbb{L}^8 - 5\mathbb{L}^7 + 3 \mathbb{L}^5 - \mathbb{L}^4
\end{cases}
\]
As a consequence we have,
\[
[\mathbb{M}_{3,2}^W(0)]-[\mathbb{M}_{3,2}^W(1) ]=\mathbb{L}^4(\mathbb{L}^5+3 \mathbb{L}^4-2\mathbb{L}^3-2\mathbb{L}^2+\mathbb{L}) \]
\end{proposition}
\begin{proof} We have already seen that $\mathbb{M}_{3,1}^W(1) = \{ (x,y,z)~|~2xyz=1 \}$ and $\mathbb{M}_{3,1}^W(0) = \{ (x,y,z)~|~xyz=0 \}$ whence
\[
[\mathbb{M}_{3,1}^W(1)] = (\mathbb{L}-1)^2 \quad \text{and} \quad [ \mathbb{M}_{3,1}^W(0)]=3 \mathbb{L}^2-3\mathbb{L}+1 \]
Plugging this and the obtained Brauer-Severi motive into Proposition~\ref{induction} gives $[\mathbb{M}_{3,2}^W(1)]$. From this $[\mathbb{M}_{3,2}^W(0)]$ follows from the equation $\mathbb{L}^{12} = (\mathbb{L}-1)[\mathbb{M}_{3,2}^W(1)] + [ \mathbb{M}_{3,2}^W(0)]$.
\end{proof}
\section{The homogenized Weyl algebra}
If we consider the superpotential $W=XYZ-XZY- \frac{1}{3}X^3$ then the associated algebra $R_W$ is the homogenized Weyl algebra
\[
R_W = \frac{\C \langle X,Y,Z \rangle}{(XZ-ZX,XY-YX,YZ-ZY-X^2)} \]
In this case we have $\mathbb{M}_{3,1}^W(1) = \{ x^3=-3 \}$ and $\mathbb{M}_{3,1}^W(0) = \{ x^3 = 0 \}$, whence
\[
[\mathbb{M}_{3,1}^W(1)] = \mathbb{L}^2[\mu_3], \quad \text{and} \quad [\mathbb{M}_{3,1}^W(0)] = \mathbb{L}^2 \]
where, as in \cite[3.1.3]{Cazz} we denote by $[\mu_3]$ the equivariant motivic class of $\{ x^3=1 \} \subset \mathbb{A}^1$ carrying the canonical action of $\mu_3$. Therefore, the coefficient of $t$ in $U_W(t)$ is equal to
\[
\mathbb{L}^{-1} \frac{[\mathbb{M}_{3,1}^W(0)] - [\mathbb{M}_{3,1}^W(1)]}{[GL_1]} = \frac{\mathbb{L}(1-[\mu_3])}{\mathbb{L}-1} \]
As all finite dimensional simple representations of $R_W$ are of dimension one, this leads to the conjectural expression \cite[Conjecture 3.3]{Cazz}
\[
U_W(t) \overset{?}{=} \wis{Exp}(\frac{\mathbb{L}(1-[\mu_3])}{\mathbb{L}-1} \frac{t}{1-t}) \]
Balazs Szendr\"oi kindly provided the calculation of the first two terms of this series. Write $\tilde{\mathbf{M}} = 1 - [ \mu_3]$; then
\[
U_W(t) \overset{?}{=} 1 + \frac{\mathbb{L} \tilde{\mathbf{M}}}{\mathbb{L}-1}t + \frac{\mathbb{L}^2 \tilde{\mathbf{M}}^2+ \mathbb{L}(\mathbb{L}^2-1) \tilde{\mathbf{M}} + \mathbb{L}^2(\mathbb{L}-1) \sigma_2(\tilde{\mathbf{M}})}{(\mathbb{L}^2-1)(\mathbb{L}-1)} t^2 + \hdots \]
As was pointed out by B. Pym and B. Davison it follows from \cite[Defn 4.4 and Prop 4.5 (4)]{Davison} that $\sigma_2(\tilde{\mathbf{M}}) = \mathbb{L}$, so the second term is equal to
\[
\frac{\mathbb{L}^3(\mathbb{L}-1) + \tilde{\mathbf{M}} \mathbb{L}(\mathbb{L}^2-1) + \tilde{\mathbf{M}}^2 \mathbb{L}^2}{(\mathbb{L}^2-1)(\mathbb{L}-1)} \]
We will now compute this second term using Brauer-Severi motives.
\vskip 3mm
Recall that $\wis{BS}_{3,2}^W(i)$, for $i=0,1$, decomposes as $\mathbf{S_1} \sqcup \mathbf{S_2} \sqcup \mathbf{S_3}$ where the subschemes $\mathbf{S_i}$ of $\mathbb{A}^{11-i}$ have defining equations
\[
\begin{cases}
\mathbf{S_1}~:~-\frac{1}{3}r^3+((w-z)p+rx)u+((v-s)p-rt)y-rp+(z-w)t+(s-v)x = \delta_{i1} \\
\mathbf{S_2}~:~-\frac{1}{3}n^3-\frac{1}{3}r^3+(vp+(n-r)t)y + (w-z)p+(r-n)x = \delta_{i1} \\
\mathbf{S_3}~:~-\frac{1}{3}n^3-\frac{1}{3}r^3+(v-s)p+(n-r)t = \delta_{i1}
\end{cases}
\]
If we let the generator of $\mu_3$ act with weight one on the variables $n,s,w,r,v,z$, with weight two on $x,t,p$ and with weight zero on $q,u,y$, we see that the schemes $\mathbf{S_j}$ for $i=1$ are indeed $\mu_3$-varieties. We will now compute their equivariant motives:
\begin{proposition} With notations as above, the Brauer-Severi scheme of $\mathbb{T}_{3,2}^W(1)$ has a decomposition
\[
\mathbf{BS}_{3,2}^W(1) = \mathbf{S_1} \sqcup \mathbf{S_2} \sqcup \mathbf{S_3} \]
where the schemes $\mathbf{S_i}$ have equivariant motives
\[
\begin{cases}
[ \mathbf{S_1} ] = \mathbb{L}^9 - \mathbb{L}^6 \\
[ \mathbf{S_2} ] = \mathbb{L}^8 + ([\mu_3] -1) \mathbb{L}^6 = \mathbb{L}^8- \tilde{\mathbf{M}} \mathbb{L}^6 \\
[ \mathbf{S_3} ] = \mathbb{L}^7 + ([\mu_3]-1) \mathbb{L}^5 = \mathbb{L}^7 - \tilde{\mathbf{M}} \mathbb{L}^5 \\
\end{cases}
\]
Therefore, the Brauer-Severi scheme $\mathbf{BS}^W_{3,2}(1)$ has equivariant motive
\[
[ \mathbf{BS}_{3,2}^W(1) ] = \mathbb{L}^9 + \mathbb{L}^8 + \mathbb{L}^7 + ([\mu_3]-2) \mathbb{L}^6 + ([\mu_3]-1) \mathbb{L}^5 \]
\end{proposition}
\begin{proof} $\mathbf{S_3}$ : If $v-s \not= 0$ we can eliminate $p$ and obtain a contribution $\mathbb{L}^5(\mathbb{L}^2-\mathbb{L})$. If $v=s$ and $n-r \not= 0$ we can eliminate $t$ and obtain a term $\mathbb{L}^4(\mathbb{L}^2-\mathbb{L})$. Finally, if $v=s$ and $n=r$ we have the identity $-\frac{2}{3}n^3=1$ and a contribution $\mathbb{L}^5 [ \mu_3 ]$.
\vskip 3mm
\noindent
$\mathbf{S_2}$ : If $r-n \not= 0$ we can eliminate $x$ and get a term $\mathbb{L}^6(\mathbb{L}^2-\mathbb{L})$. If $r-n=0$ we get the equation in $\mathbb{A}^8$
\[
-\frac{2}{3}n^3+p(vy+w-z) = 1 \]
If $vy+w-z \not= 0$ we can eliminate $p$ and get a contribution $\mathbb{L}^3(\mathbb{L}^4-\mathbb{L}^3)$. Finally, if $vy+w-z=0$ we get the equation $-\frac{2}{3}n^3=1$ and hence a term $\mathbb{L}^3 \cdot \mathbb{L}^3[\mu_3] = \mathbb{L}^6[\mu_3]$.
\vskip 3mm
\noindent
$\mathbf{S_1}$ : If $(w-z)p+rx \not= 0$ then we can eliminate $u$ and get a contribution
\[
\mathbb{L}^4(\mathbb{L}^5-[(w-z)p+rx=0]_{\mathbb{A}^5}) = \mathbb{L}^6(\mathbb{L}-1)(\mathbb{L}^2-1) \]
If $(w-z)p+rx=0$ but $(v-s)p-rt \not= 0$ we can eliminate $y$ and get a term
\[
\mathbb{L}.[(w-z)p+rx=0,(v-s)p-rt \not= 0]_{\mathbb{A}^8} \]
To compute the equivariant motive in $\mathbb{A}^8$ assume first that $r \not= 0$ then we can eliminate $x$ from the equation and obtain
\[
\mathbb{L}^2[r \not= 0,(v-s)p-rt \not= 0]_{\mathbb{A}^5}=\mathbb{L}^2(\mathbb{L}^4(\mathbb{L}-1) - [r \not= 0,(v-s)p-rt=0]_{\mathbb{A}^5}) = \mathbb{L}^5(\mathbb{L}-1)^2 \]
If $r=0$ we have to compute $[(w-z)p=0,(v-s)p\not= 0]_{\mathbb{A}^7} = \mathbb{L}^2(\mathbb{L}-1)(\mathbb{L}^2-\mathbb{L})\mathbb{L} = \mathbb{L}^4(\mathbb{L}-1)^2$. So, in total this case gives a contribution
\[
\mathbb{L}.[(w-z)p+rx=0,(v-s)p-rt \not= 0]_{\mathbb{A}^8} = \mathbb{L}^5(\mathbb{L}-1)(\mathbb{L}^2-1) \]
If $(w-z)p+rx=0$, $(v-s)p-rt=0$ and $r \not= 0$ we can eliminate $x = \tfrac{z-w}{r}p$ and $t=\tfrac{v-s}{r}p$ and substituting in the defining equation of $\mathbf{S_1}$ we get
\[
-\frac{1}{3}r^3-rp = 1 \]
so we can eliminate $p$ and obtain a contribution $\mathbb{L}^6(\mathbb{L}-1)$.
Finally, if $(w-z)p+rx=0$, $(v-s)p-rt=0$ and $r = 0$ we get the system of equations
\[
\begin{cases} (w-z)p = 0 \\ (v-s)p = 0 \\ (z-w)t+(s-v)x = 1 \end{cases} \]
If $p \not= 0$ we must have $w-z=0$ and $v-s=0$, which contradicts the third equation $(z-w)t+(s-v)x=1$; so we must have $p=0$ and the remaining equation is $(z-w)t+(s-v)x=1$, giving a contribution $\mathbb{L}^5(\mathbb{L}^2-1)$. Summing up these contributions gives the claimed motive.
\end{proof}
\begin{proposition} With notations as above, the Brauer-Severi scheme of $\mathbb{T}_{3,2}^W(0)$ has a decomposition
\[
\mathbf{BS}_{3,2}^W(0) = \mathbf{S_1} \sqcup \mathbf{S_2} \sqcup \mathbf{S_3} \]
where the schemes $\mathbf{S_i}$ have (equivariant) motives
\[
\begin{cases}
[ \mathbf{S_1} ] = \mathbb{L}^9 + \mathbb{L}^7 - \mathbb{L}^6 \\
[ \mathbf{S_2} ] = \mathbb{L}^8 \\
[ \mathbf{S_3} ] = \mathbb{L}^7 \\
\end{cases}
\]
Therefore, the Brauer-Severi scheme $\mathbf{BS}^W_{3,2}(0)$ has (equivariant) motive
\[
[ \mathbf{BS}_{3,2}^W(0) ] = \mathbb{L}^9 + \mathbb{L}^8 + 2 \mathbb{L}^7 - \mathbb{L}^6 \]
\end{proposition}
\begin{proof} $\mathbf{S_3}$ : If $v-s \not= 0$ we can eliminate $p$ and obtain a contribution $\mathbb{L}^5(\mathbb{L}^2-\mathbb{L})$. If $v=s$ and $n-r \not= 0$ we can eliminate $t$ and obtain a term $\mathbb{L}^4(\mathbb{L}^2-\mathbb{L})$. Finally, if $v=s$ and $n=r$ we have the identity $n^3=0$ and a contribution $\mathbb{L}^5$.
\vskip 3mm
\noindent
$\mathbf{S_2}$ : If $r-n \not= 0$ we can eliminate $x$ and get a term $\mathbb{L}^6(\mathbb{L}^2-\mathbb{L})$. If $r-n=0$ we get the equation in $\mathbb{A}^8$
\[
-\frac{2}{3}n^3+p(vy+w-z) = 0 \]
If $vy+w-z \not= 0$ we can eliminate $p$ and get a contribution $\mathbb{L}^3(\mathbb{L}^4-\mathbb{L}^3)$. Finally, if $vy+w-z=0$ we get the equation $n^3=0$ and hence a term $\mathbb{L}^6$.
\vskip 3mm
\noindent
$\mathbf{S_1}$ : If $(w-z)p+rx \not= 0$ we can eliminate $u$ and obtain a term
\[
\mathbb{L}^4(\mathbb{L}^5-[(w-z)p+rx=0]_{\mathbb{A}^5} )= \mathbb{L}^6(\mathbb{L}-1)(\mathbb{L}^2-1) \]
If $(w-z)p+rx=0$ but $(v-s)p-rt \not= 0$ then we can eliminate $y$ and obtain a contribution
\[
\mathbb{L}[(w-z)p+rx=0,(v-s)p-rt \not= 0]_{\mathbb{A}^8} = \mathbb{L}^5(\mathbb{L}-1)(\mathbb{L}^2-1) \]
Now, assume that $(w-z)p+rx=0$ and $(v-s)p-rt=0$. If $r \not= 0$ then we can eliminate $p,t$ as before and substituting them in the defining equation of $\mathbf{S_1}$ we get
\[
-\frac{1}{3}r^3-rp=0 \]
and we can eliminate $p$ giving a contribution $\mathbb{L}^6(\mathbb{L}-1)$. Finally, if
$(w-z)p+rx=0$ and $(v-s)p-rt=0$ and $r=0$ we have the system of equations
\[
\begin{cases} (w-z)p = 0 \\ (v-s)p = 0 \\ (z-w)t+(s-v)x = 0 \end{cases} \]
If $p \not= 0$ we get $w-z=0$ and $v-s=0$ giving a contribution $\mathbb{L}^6(\mathbb{L}-1)$. If $p=0$ the only remaining equation is $(z-w)t+(s-v)x=0$ which gives a contribution $\mathbb{L}^5(\mathbb{L}^2+\mathbb{L}-1)$.
Summing up all terms gives the claimed motive.
\end{proof}
Now, we have all the information to compute the second term of the motivic Donaldson-Thomas series. We have
\[
\begin{cases}
[\mathbf{BS}_{3,2}^W(0)]-[\mathbf{BS}_{3,2}^W(1)] = \mathbb{L}^7+ \tilde{\mathbf{M}} \mathbb{L}^6 + \tilde{\mathbf{M}} \mathbb{L}^5 \\
[ \mathbb{M}^W_{3,1}(0)]-[\mathbb{M}^W_{3,1}(1)] = \tilde{\mathbf{M}} \mathbb{L}^2
\end{cases}
\]
By Proposition~\ref{case2} this implies that
\[
(\mathbb{L}^2-1) \frac{[ \mathbb{M}^W_{3,2}(0) ] - [ \mathbb{M}^W_{3,2}(1) ]}{[ GL_2 ]} = \mathbb{L}^7 + \tilde{\mathbf{M}} \mathbb{L}^6 + \tilde{\mathbf{M}} \mathbb{L}^5 + \tilde{\mathbf{M}}^2 \frac{\mathbb{L}^6}{(\mathbb{L}-1)} \]
Therefore the virtual motive is equal to
\[
\mathbb{L}^{-4} \frac{[ \mathbb{M}^W_{3,2}(0) ] - [ \mathbb{M}^W_{3,2}(1) ]}{[ GL_2 ]} = \frac{\mathbb{L}^3(\mathbb{L}-1) + \tilde{\mathbf{M}} \mathbb{L}(\mathbb{L}^2-1) + \tilde{\mathbf{M}}^2 \mathbb{L}^2}{(\mathbb{L}^2-1)(\mathbb{L}-1)} \]
which coincides with the conjectured term in \cite[Conjecture 3.3]{Cazz}.
\begin{document}
\author{D.V. Osipov \footnote{The author was supported by an LMS grant for young
Russian mathematicians at the University of Manchester, and also by the Russian Foundation
for Basic Research, grant no. 05-01-00455.}}
\title{$n$-dimensional local fields and adeles on $n$-dimensional schemes.}
\date{}
\maketitle
\section{Introduction}
The notion of an $n$-dimensional local field appeared in the works of
A.~N.~Parshin and K.~Kato in the middle of the 1970s.
These fields generalize the usual local fields (which are $1$-dimensional
in this sense) and allow us to look at higher-dimensional
algebraic schemes from a local point of view.
With every flag
$$ X_0 \subset X_1 \subset \ldots \subset
X_{n-1} \mbox{,}
\qquad
{\rm dim \,} X_i = i $$
of irreducible subvarieties on a scheme $X$
($
{\rm dim \,} X =n
$)
one can canonically associate
a ring $K_{(X_0, \ldots, X_{n-1})}$.
When all the subvarieties are
regularly embedded, this ring is
an $n$-dimensional local field.
Originally, higher-dimensional local fields
were used to develop a generalization
of class field theory to schemes
of arbitrary dimension (works of A.~N.~Parshin,
K.~Kato, S.~V.~Vostokov and others), \cite{P}, \cite{KS}.
But many problems on algebraic varieties
can be reformulated in terms
of higher-dimensional local fields
and higher adelic theory.
For a scheme $X$ the adelic object is
$$ \da_X =
\prod\nolimits' K_{(X_0, \ldots, X_{n-1})}
$$
where the product is taken over all the flags
with respect to certain restrictions on components of adeles.
A.N. Parshin defined adeles on algebraic surfaces in~\cite{P1}; they generalize the usual adeles on curves. A.A. Beilinson introduced a simplicial approach to adeles and
generalized them to Noetherian schemes of arbitrary dimension in~\cite{B}.
A.~N.~Parshin, A.~A.Beilinson, A.~Huber,
A.~Yekutiely, V.~G.~Lomadze and others described
connections of higher adelic groups with cohomology of
coherent sheaves (\cite{P1}, \cite{B}, \cite{H}, \cite{Y}, \cite{F}, \cite{FP}),
intersection theory (\cite{P2}, \cite{Lom}, \cite{Os0}, \cite{FP}),
Chern classes (\cite{P2}, \cite{HY}, \cite{FP}), theory of residues (\cite{P1}, \cite{Y},
\cite{B}, \cite{L}, \cite{FP}), torus actions (\cite{GP}).
This paper is a survey of the basic notions of higher-dimensional local fields
and adeles on higher-dimensional schemes.
The paper is organized as follows.
In section \ref{sect2}
we give a general definition of $n$-dimensional local field
and formulate classification theorems of $n$-dimensional local fields.
We describe how $n$-dimensional local fields appear from algebraic varieties
and arithmetical schemes.
In section \ref{adel}
we define higher dimensional adeles and adelic complexes.
Starting from an example of adelic complexes on algebraic curves, we give
a general simplicial definition for arbitrary Noetherian schemes, which is due
to A.A. Beilinson. We formulate the theorems about adelic resolutions of quasicoherent sheaves
on Noetherian schemes. We apply these general constructions to algebraic surfaces
to obtain adelic complexes on algebraic surfaces, which were introduced by A.N.~Parshin.
In section \ref{sect4} we describe restricted adelic complexes.
In contrast to the adelic complexes from section~\ref{adel}, restricted adelic complexes are connected with a single flag of subvarieties. A.N. Parshin introduced restricted adeles
for algebraic surfaces in~\cite{P5}, \cite{P4}. The author introduced restricted adelic
complexes for arbitrary schemes in \cite{Os}. We also give the reconstruction theorem on
restricted adelic complexes.
In the last section we briefly describe reciprocity laws on algebraic surfaces.
The author is very grateful to A.N.~Parshin for many discussions on higher-dimensional local fields and adeles. The author is also grateful to M.~Taylor for interesting discussions and for his hospitality during the visit to Manchester sponsored by an LMS grant.
\section{$n$-dimensional local fields} \label{sect2}
\subsection{Classification theorems.}
We fix a perfect field $k$.
We say that $K$ is a local field of dimension $1$ with the residue field $k$,
if $K$ is the fraction field of a complete discrete valuation ring $\oo_K$
with the residue field $\bar K =k$.
We denote by $\nu_K$ the discrete valuation of $K$
and by $m_K$ the maximal ideal of the ring $\oo_K$.
Such a field has the following structure
$$ K \supset \oo_K \to \bar K = k \mbox{.}$$
As examples of such fields we have the field of power series
$$ K = k((t)) \mbox{,} \qquad \oo_K = k [[t]] \mbox{,} \qquad \bar K = k $$
and the field of $p$-adic numbers
$$ K = \dbq_p \mbox{,} \qquad \oo_K = \dbz_p \mbox{,} \qquad k = \dbf_p \mbox{.}$$
Moreover, we have only the following possibilities,
see~\cite[ch. II]{Ser}:
\begin{Th} \label{cl1}
Let $K$ be a local field of dimension $1$ with the residue field $k$ then
\begin{enumerate}
\item \label{c1}
$K= k((t))$ is the power series field if $char K = char k$;
\item \label{c2}
\begin{enumerate}
\item
$K = \Frac (W(k))$ where $\oo_K = W(k)$ is the Witt ring
(for example, $K = \dbq_p$),
\item or $K $ is a finite totally
ramified extension of the field $\Frac (W(k))$
\end{enumerate}
if $char K = 0$, $char k =p$.
\end{enumerate}
\end{Th}
Now we give the following inductive definition.
\begin{defin}
We say that a field $K$ is a local field of dimension $n$
with the last residue field $k$ if
\begin{enumerate}
\item $n = 0$ and $K =k$
\item $n \ge 1$ and $K$ is the fraction field of a complete discrete valuation
ring $\oo_K$ whose residue field $\bar K$ is a local field of
dimension $n-1$ with the last residue field $k$
\end{enumerate}
\end{defin}
A local field of dimension $n \ge 1$ has the following inductive structure:
$$ K = K^{(0)} \supset \oo_K \to \bar{K} = K^{(1)} \supset \oo_{\bar{K}} \to
\bar{K}^{(1)}= K^{(2)} \supset \oo_{K^{(2)}} \to \ldots \to K^{(n)}=k \mbox{,} $$
where, for a discrete valuation field $F$, we denote by $\oo_F$ the ring
of integers in $F$ and by $\bar{F}$ its residue field.
We denote the maximal ideal of $\oo_{K^{(i)}}$ by $m_{K^{(i)}}$.
Every field $K^{(i)}$ is a local field of dimension $n-i$
with the last residue field $k$.
\begin{defin}
A collection of elements $t_1, \ldots, t_n \in \oo_K$
is called a system of local parameters, if for all $i = 1, \ldots, n$
$$
t_i \; \mod m_{K^{(0)}} \in \oo_{K^{(1)}}, \; \ldots, \;
t_i \; \mod m_{K^{(i-2)}}
\in \oo_{K^{(i-1)}}
$$
and the element
$t_i \: \mod m_{K^{(i-2)}}
\in \oo_{K^{(i-1)}}$
is a generator of $m_{K^{(i-1)}}$.
\end{defin}
An example of an $n$-dimensional local field is the field $K = k ((t_n)) \ldots ((t_1))$.
For this field we have
$$
K^{(i)}=k((t_n)) \ldots ((t_{i+1}))
\qquad
\mbox{,}
\qquad
\oo_{K^{(i)}}= k ((t_n)) \ldots ((t_{i+2}))[[t_{i+1}]] \mbox{.}
$$
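In particular, $t_1, \ldots, t_n$ is a system of local parameters of this field: the image of $t_{i+1}$ in $K^{(i)} = k((t_n)) \ldots ((t_{i+1}))$ generates the maximal ideal $m_{K^{(i)}}$.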
We now consider the case $n =2$; then we have
$$K \supset \oo_K \to \bar{K} \supset \oo_{\bar{K}} \to k \mbox{.}$$
We can construct the following examples of $2$-dimensional local fields with the last residue field $k$. These examples depend on the characteristic of the $2$-dimensional field $K$
and the characteristics of its residue fields.
\begin{itemize}
\item $K = k ((t_2))((t_1))$. And $char K = char \bar{K} = char k$.
\item $K = F ((t))$, where $F$ is a local field of dimension $1$ with the residue field $k$
such that $char(F) \ne char(k)$, for example $ F = \dbq_p$.
\item $K = F \{\{t\}\}$, where $F$ is a local field of dimension $1$ with the residue field $k$.
\end{itemize}
The field $F \{\{t\}\}$ has the following description
$$
a \in F\{\{t\}\} \quad : \quad a= \sum_{i= - \infty}^{i = + \infty} a_i t^i
\mbox{,} \qquad a_i \in F \mbox{,}
$$
$$
\mbox{where} \qquad \mathop{\lim}\limits_{i \to -\infty} \nu_F(a_i) = + \infty \qquad \mbox{and}
\qquad \nu_F(a_i) > c_a \quad \mbox{for all } i \mbox{ and some integer} \quad c_a \mbox{.} $$
We define the discrete valuation $\nu_{F\{\{t\}\}} (a) = \min\limits_i \nu_{F} (a_i)$.
Then the ring
$ \oo_{F\{\{t\}\}}$ consists of elements $a$ such that all $a_i \in \oo_F $, and
the maximal ideal
$ m_{F\{\{t\}\}}$ consists of elements $a$ such that all $ a_i \in m_F $.
Therefore
$$\overline{F\{\{t\}\}} = \bar{F}((t)) \mbox{.}$$
We remark that for $F = k((u))$
the field $F\{\{t\}\}$ is isomorphic to the field $k ((t))((u))$.
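Indeed, this isomorphism is just a regrouping of coefficients: writing $a_i = \sum_j a_{ij} u^j$ with $a_{ij} \in k$, the two conditions above say precisely that the exponents $j$ are bounded below uniformly in $i$ and that for every fixed $j$ we have $a_{ij}=0$ for all sufficiently small $i$, so that
$$
\sum_{i} \big( \sum_{j} a_{ij} u^j \big) t^i = \sum_{j} \big( \sum_{i} a_{ij} t^i \big) u^j \in k((t))((u)) \mbox{.}
$$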
There is the following classification theorem, see \cite{FP}, \cite{P3}, \cite{Zh}, \cite{Zh1}.
\begin{Th} \label{cl2}
Let $K$ be an $n$-dimensional local field with the finite last residue field.
Then
\begin{enumerate}
\item
\label{ca1}
$K$ is isomorphic to $\dbf_q((t_n)) \ldots ((t_1))$ if $char (K) = p$;
\item
\label{ca2}
$K$ is isomorphic to $F ((t_{n-1})) \ldots ((t_{1}))$,
$F$ is a $1$-dimensional local field, if $char (K^{(n-1)}) = 0$.
\item
\label{ca3}
$K$ is a finite extension of a field
$$
F \{ \{ t_n \} \} \ldots \{ \{ t_{m+2} \} \} ((t_m)) \ldots ((t_1)) \qquad \qquad \qquad (*)
$$
and there is a finite extension of $K$ which is of the form $(*)$, but possibly
with different $F$ and $t_i$, if $char (K^{(m)}) = 0$, $char (K^{(m+1)}) = p$.
\end{enumerate}
\end{Th}
We remark that if $\pi$ is a local parameter for a $1$-dimensional local field $F$
then $t_1, \ldots, t_m, \pi, t_{m+2}, \ldots, t_n$
are local parameters for a field
$$ F \{ \{ t_n \} \} \ldots \{ \{ t_{m+2} \} \} ((t_m)) \ldots ((t_1)) \mbox{.}$$
\subsection{Local fields which come from algebraic geometry.}
We consider an algebraic curve $C$ over the field $k$.
We fix a smooth point $p \in C$. We consider
$$
K_p = \Frac ( \hat{\oo}_p)
\mbox{,}$$
where
$$ \hat{\oo}_p = \mathop{\mathop{\lim}_{\leftarrow}}_n \oo_p/ m_p^n
$$
is a completion of the local ring $\oo_p$ of the point $p$ on the curve $C$.
Then
\begin{equation} \label{f1}
K_p = k(p) ((t)) \mbox{,}
\end{equation}
where $k(p) = \oo_p/ m_p $ is the residue field of the point $p$ on the curve $C$,
which is a finite extension of the field $k$, and $t$ is a local parameter of the point
$p$ on the curve $C$.
We see that the field $K_p$ corresponds to the case~\ref{c1}
of the classification theorem~\ref{cl1}.
\vspace{0.5cm}
Now we consider a field of algebraic numbers $K$,
which is a finite extension of the field $\dbq$. We consider the ring of integers
$A$ of the field $K$. Let $X = \Spec A$ be a $1$-dimensional scheme.
We fix a closed point $p \in X$, which corresponds to a maximal ideal in $A$.
Then the completion of the field $K$ at the point $p$ is
\begin{equation} \label{f2}
K_p = \Frac \,( \, \mathop{\mathop{\lim}_{\leftarrow}}_n A_p/ m_p^n \,)
\end{equation}
We see that $K_p$ is a $1$-dimensional local field with the residue field $\dbf_q$,
and the field $K_p$ corresponds to the case~\ref{c2}
of the classification theorem~\ref{cl1}.
\vspace{1cm}
We give now the definitions for the general situation.
Let $R$ be a ring, $p$ a prime ideal of $R$, $M$ an $R$-module.
Let $S_p = R \setminus p$. We write $S_p^{-1} M$ for the localisation of $M$ at $S_p$.
For an ideal $a$ of $R$ set
$C_a M =
\mathop{\mathop{\lim}\limits_{\leftarrow}}\limits_{n \in \sdbn} M / a^n M
$.
Let $X$ be a Noetherian scheme of dimension $n$.
Let $\delta = (p_0, \ldots, p_n)$ be a chain of points of $X$ (i.e. the chain of integral irreducible subschemes if we consider the closures of points $p_i$) such that $p_{i+1} \in \overline{\{p_i\}}$ for any $i$ ($ \overline{\{p_i\}} $ is the closure of the point $p_i$ in $X$).
We suppose that $\dim \overline{\{p_i\}} = n-i$ for all $i$.
We restrict $\delta$ to some affine open neighbourhood $\Spec B$ of the closed point
$p_n$ on $X$. Then $\delta$ determines a chain of prime ideals of the ring $B$,
which we denote by the same letters $(p_0, \ldots, p_n)$.
We define a ring
\begin{defin}
\begin{equation} \label{f}
K_{\delta} \eqdef C_{p_0}S_{p_0}^{-1} \ldots C_{p_n} S_{p_n}^{-1} B
\end{equation}
\end{defin}
This definition of $K_{\delta}$ does not depend on the choice
of affine neighbourhood $\Spec B$ of the point $p_n$ on the scheme $X$,
see~\cite[prop.3.1.3., prop. 3.2.1.]{H}.
We remark that the ring $C_{p_n} S_{p_n}^{-1} B$ from formula (\ref{f})
coincides with the completion $\hat{\oo}_{p_n, X}$ of the local ring of the point $p_n$
on the scheme $X$.
\vspace{1cm}
We consider now examples of formula~(\ref{f})
for small $n$.
\begin{ex} \em
If $X$ is an irreducible $1$-dimensional scheme ($X$ is an irreducible curve over the field
$k$ or the spectrum of the ring of algebraic integers), $p$ is a smooth point of $X$, $\eta$ is the general point of $X$, then for $\delta = (\eta, p)$ we obtain that $K_{\delta}$
is a $1$-dimensional local field, which coincides with the field $K_p$
from formula~(\ref{f1}) or (\ref{f2}).
\end{ex}
\begin{ex} \em
Now let $X$ be an irreducible algebraic surface over the field $k$.
Let $C$ be an irreducible divisor on $X$ and $p$ a point on $C$.
We suppose that $p$ is a smooth point on $X$ and on $C$. Let $\eta$
be a general point of $X$. We consider $\delta = (\eta, C, p)$.
We fix the local parameter $t \in k(X)$ of the curve $C$ on $X$ at the point $p$ ($t = 0$ is a local equation of the curve $C$ at the point $p$ on $X$),
and local parameter $u \in k(X)$ at the point $p$ on $X$ which is transversal to the local parameter $t$ (the divisor $u = 0$ is transversal to the divisor $t = 0$ at the point $p$).
We fix any affine neighbourhood $\Spec B$ of $p$ on $X$. Then
$$ C_{p} S_{p}^{-1} B = k(p) [[u,t]] $$
$$ C_{C} S_{C}^{-1} C_{p} S_{p}^{-1} B = k(p) ((u))[[t]] $$
$$ K_{\delta} = C_{\eta} S_{\eta}^{-1}
C_{C} S_{C}^{-1} C_{p} S_{p}^{-1} B = k(p) ((u)) ((t)) \mbox{.}
$$
Hence $K_{\delta}$ is a $2$-dimensional local field with the last residue field $k(p)$.
We see that the field $K_{\delta}$
corresponds to the case~\ref{ca1} of the classification theorem~\ref{cl2}.
\end{ex}
\begin{ex} \em
The previous example can be generalized. Let $p_0, \ldots, p_n$
be a flag of irreducible subvarieties on an $n$-dimensional algebraic variety $X$ over the field $k$ such that $\dim p_i = n-i$, $p_{i+1} \subset p_i$ for all $i$, and the point $p_n$ is a smooth point on all subvarieties $p_i$. We can choose a system of local parameters $t_1, \ldots,
t_n \in \oo_{p_n, X}$ of the point $p_n$ on $X$ such that for every $i$
equations $t_1 = 0, \ldots, t_i = 0 $ define a subvariety $p_i$ in some neighbourhood
of the point $p_n$ on $X$. Then according to formula~(\ref{f}) and similar to the previous example we have for $\delta = (p_0, \ldots, p_n)$
$$
K_{\delta} = k(p_n) ((t_n)) \ldots ((t_1)) \mbox{.}
$$
\end{ex}
\begin{ex} \em
Now we suppose that a scheme $X$ is an arithmetical surface,
i.e., $\dim X = 2$ and we have a flat, projective morphism
$f : X \to Y = \Spec A$, where $A$ is the ring of integers of a number field $K$.
We consider two kinds of integral irreducible $1$-dimensional closed subschemes $C$ on $X$.
\begin{enumerate}
\item
A subscheme $C$ is horizontal, i.e., $f(C) = Y$.
We consider a point
$x \in C$ which is smooth on $X$ and $C$. Let $\delta = (\eta, C, x)$,
where $\eta$ is a general point of $X$. Then
$$
K_{\delta} = L((t)) \mbox{,}
$$
where $t= 0$ is a local equation of $C$ at the point $x$ on $X$ and
$L \supset K_{f(x)} \supset \dbq_p$ is a finite extension.
Thus, $K_{\delta}$
is a $2$-dimensional local field with the finite last residue field.
We see that this field $K_{\delta}$ corresponds to the case~\ref{ca2}
of the classification theorem~\ref{cl2}.
\item
A subscheme $C$ is vertical, i.e. it is a component of a fibre of $f$.
This $C$ is defined over some finite field $\df_q$.
We consider a point
$x \in C$ such that the morphism $f$ is smooth at $x$
and the point $x$ is also defined over the field $\df_q$. Let $\delta = (\eta, C, x)$,
where $\eta$ is a general point of $X$.
Then we apply formula (\ref{f}). For any affine neighbourhood $\Spec B$ of $x$ on $X$
the ring
$
C_{x} S_{x}^{-1} B $
coincides with the completion $\hat{\oo}_{x, X}$ of the local ring of the point $x$ on $X$.
But since $f$ is a smooth map at $x$, the ring
$$
\hat{\oo}_{x, X}
= \oo_{K_{f(x)}}[[u]] \mbox{.}
$$
Therefore we obtain
$$
K_{\delta} = K_{f(x)}\{\{u\}\} \mbox{,}
$$
where $K_{f(x)} \supset \dbq_p $
is a finite extension.
Thus, $K_{\delta}$
is a $2$-dimensional local field with the finite last residue field.
We see that this field $K_{\delta}$ corresponds to the case~\ref{ca3}
of the classification theorem~\ref{cl2}.
\end{enumerate}
We remark that in both of these cases we have a canonical embedding $f^*$ of the $1$-dimensional local field $K_{f(x)}$
to the $2$-dimensional local field
$K_{\delta}$.
\end{ex}
\vspace{0.5cm}
Now we consider only excellent Noetherian schemes $X$
(e.g. a scheme of finite type over a field, over $\dz$,
or over a complete semi-local Noetherian ring; see~\cite[\S 34]{Ma} and~\cite[\S 7.8]{EGA IV}).
We introduce the following notations (see \cite{P2}).
Let $\delta = (p_0, \ldots, p_n)$.
Let a subscheme $X_i = \overline{\{p_i\}} $
be the closure of the point $p_i$ in $X$.
We introduce by induction the schemes $X_{i,\alpha_i}^{\prime}$
in the following diagram
$$
\begin{array}{ccccccc}
X_0 & \supset & X_1 & \supset & X_2 & \supset & \cdots \\
\uparrow & & \uparrow & & \uparrow & & \\
X'_0 & \supset & X_{1, \alpha_1} & & & & \\
& & \uparrow & & & & \\
& & X'_{1, \alpha_1} & \supset & X_{2, \alpha_2} & & \\
& & & & \uparrow & & \\
& & & & \vdots & & \\
\end{array}
$$
Here $X'$ denotes the normalization of a scheme $X$
and $X_{i, \alpha_i}$
is an integral irreducible subscheme in $ X'_{i-1, \alpha_{i-1}}$
which is mapped onto $X_i$.
From any such diagram we obtain a collection of indices $ (\alpha_1, \ldots, \alpha_n)$.
We denote the finite set of all such collections of indices by $\Lambda_{\delta}$.
Such a collection of indices $ (\alpha_1, \ldots, \alpha_n) \in \Lambda_{\delta}$
determines a chain of discrete valuations in the following way.
The integral irreducible subvariety $X_{1, \alpha_1}$ of the normal scheme $X'_{0}$
defines a discrete valuation of the field of functions on $X_0$.
The residue field of this discrete valuation is the field of functions on the normal scheme
$X'_{1, \alpha_1}$, and the integral irreducible subscheme $X_{2, \alpha_2}$
defines a discrete valuation there. We proceed in the same way for
$\alpha_3, \ldots, \alpha_n$.
Moreover, there is the following theorem (\cite{P2}; see also~\cite[theorem 3.3.2]{Y}
for the proof).
\begin{Th} \label{tl}
Let $X$
be an integral excellent $n$-dimensional Noetherian scheme.
Then for $\delta = (p_0, \ldots, p_n)$
the ring $K_{\delta}$ is an Artinian ring and
$$
K_{\delta} = \prod_{(\alpha_1, \ldots, \alpha_n) \in \Lambda_{\delta}}
K_{(\alpha_1, \ldots, \alpha_n)} \quad \mbox{,}
$$
where every $ K_{(\alpha_1, \ldots, \alpha_n)} $
is an $n$-dimensional local field.
\end{Th}
\vspace{0.5cm}
\begin{ex} \em
To illustrate this theorem we compute now
the ring $K_{\delta}$
in the following situation.
Let $p$ be a smooth point on an irreducible algebraic surface $X$ over $k$.
Suppose an irreducible curve $C \subset X$ contains the point $p$, but $C$
has a node singularity at the point $p$, i.e., the completed local ring of the point $p$ on
the curve $C$ is $k[[t,u]]/(tu)$ for some local formal parameters
$u, t$ of the point $p$ on $X$. Let $\delta = (\eta, C, p)$,
where $\eta$ is a general point of $X$.
We fix any affine neighbourhood $\Spec B$ of $p$ on $X$.
Then according to formula~(\ref{f})
$$ C_{p} S_{p}^{-1} B = k(p) [[u,t]] $$
$$ C_{C} S_{C}^{-1} C_{p} S_{p}^{-1} B = k(p) ((u))[[t]] \oplus k(p) ((t))[[u]] $$
$$ K_{\delta} = C_{\eta} S_{\eta}^{-1}
C_{C} S_{C}^{-1} C_{p} S_{p}^{-1} B = k(p) ((u)) ((t)) \oplus k(p) ((t))((u)) \mbox{.}
$$
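The two direct summands agree with theorem~\ref{tl}: the germs of the curve $C$ at the point $p$ are the two height one prime ideals
$$
\C_1 = (t) \qquad \mbox{and} \qquad \C_2 = (u) \qquad \mbox{of the ring} \quad k(p)[[u,t]] \mbox{,}
$$
which correspond to the two points of the normalization $C'$ lying over $p$, so $\Lambda_{\delta}$ consists of two collections of indices.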
\end{ex}
\section{Adeles and adelic complexes} \label{adel}
\subsection{Adeles on curves}
\label{adcur}
Let $C$ be a smooth connected algebraic curve over the field $k$.
For any coherent sheaf $\f$ on $C$
we consider an adelic space $\da_C (\f)$:
$$
\da_C (\f) = \left\{ \{f_{p} \} \in \prod_{p \in C} \f \otimes_{\oo_C} K_{p} \quad
\mbox{such that} \quad f_p \in \f \otimes_{\oo_C} \oo_{K_{p}}
\quad
\mbox{for almost all}
\quad
p
\mbox{,}
\right\}
$$
where the product is over all closed points
$p$
of the curve $C$.
We construct the following complex $ {\ad_C}(\f)$:
$$
\begin{array}{ccccc}
\f \otimes_{\oo_C} k(C) & \times & \prod\limits_{p \in C}
\f \otimes_{\oo_C} \oo_{K_{p}} &
\lto & \da_C (\f) \\
a & \times & b & \mapsto & a + b
\mbox{.}
\end{array}
$$
We have the following theorem (for example, see~\cite{S}).
\begin{Th} \label{proposi}
The cohomology groups of the complex
${\ad_C}(\f)$
coincide with the cohomology $H^*(C, \f)$,
where $\f$ is
any coherent sheaf on $C$.
\end{Th}
\proof We give here a sketch of the proof.
We write an adelic complex ${\ad_U}(\f)$ of the sheaf $\f$
for any open subset $U \subset C$.
Taking into account all $U$ we obtain a complex of sheaves $\ad (\f)$ on the curve $C$.
Then for small affine open subsets $U$ we obtain that
the following complex
$$
0 \lto \f(U) \lto \ad_U (\f) \lto 0
$$
is exact, since we can apply the approximation theorem for Dedekind rings over fields.
Therefore the complex $\ad (\f)$ is a resolution of the sheaf $\f$ on $C$.
And by construction this resolution is a flasque resolution of the sheaf $\f$ on $C$.
Therefore it calculates the cohomology of the sheaf $\f$ on the curve $C$.
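As a simple illustration, let $C = \mathbb{P}^1_k$ and $\f = \oo_C$. The partial fraction decomposition allows one to write every adele as the sum of a rational function and an integral adele, so the complex ${\ad_C}(\oo_C)$ gives
$$
H^0(\mathbb{P}^1, \oo) = k(C) \cap \prod_{p} \oo_{K_p} = k \qquad \mbox{and} \qquad H^1(\mathbb{P}^1, \oo) = 0 \mbox{.}
$$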
\subsection{Adeles on higher-dimensional schemes} \label{adeles}
In this section we give a generalization of adelic complexes to schemes
of arbitrary dimension.
For algebraic surfaces adelic complexes were introduced by A.N. Parshin in~\cite{P1}.
We will give a detailed exposition of adelic complexes on algebraic surfaces later
as an application of the general machinery, which was constructed
for arbitrary Noetherian schemes
by A.~Beilinson in~\cite{B}. For a good exposition and proofs of Beilinson's results
see~\cite{H}.
\subsubsection{Definition of adelic spaces.}
We introduce the following notations.
For any Noetherian scheme $X$ let $P(X)$ be the set of points of the scheme $X$.
Consider $p, q \in P(X)$. Define $p \ge q$
if $q \in \overline{\{ p\}}$, i.e., the point $q$ is in the closure of the point $p$.
Then $\ge$ is a partial ordering on $P(X)$.
Let $S(X)$ be the simplicial set induced by $(P(X), \ge)$,
i.e.,
$$S(X)_m = \{ (p_0, \ldots, p_m) \mid p_i \in P(X); \; p_i \ge p_{i+1} \}$$
is the set of $m$-simplices of $S(X)$
with the usual boundary $\delta_i^n$ and degeneracy maps $\sigma_i^n$
for $n \in \dn$, $0 \le i \le n$.
Let $K \subset S(X)_n$.
For $p \in P(X)$ we denote
$$
{}_{p} K = \{ (p_1 > \ldots > p_{n}) \in S(X)_{n-1}
\mid (p > p_1 > \ldots > p_n) \in K \} \mbox{.}
$$
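For example, if $X$ is a curve with the general point $\eta$ and $K \subset S(X)_1$ consists of all pairs $(\eta > p)$ with $p$ closed, then ${}_{\eta} K \subset S(X)_0$ is the set of all closed points of $X$.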
Let ${\rm \bf QS}(X)$ and $ {\rm \bf CS}(X)$
be the category of quasicoherent and coherent sheaves on the scheme $X$.
Let ${\rm \bf Ab}$ be the category of Abelian groups.
We have the following proposition, see \cite{B}, \cite{H}, \cite{H1},
which is also a definition.
\begin{prop} \label{ind}
Let $S(X)$
be the simplicial set associated to the Noetherian scheme $X$.
Then there exist for integer $n > 0$, $K \in S(X)_n$
functors
$$
\da(K, \cdot) \; : \; {\rm \bf QS}(X) \lto {\rm \bf Ab}
$$
which are additive and exact, and are uniquely determined by the properties \ref{pr1}, \ref{pr2}, \ref{pr3}.
\begin{enumerate}
\item \label{pr1}
$\da (K, \cdot)$
commutes with direct limits.
\item \label{pr2}
For $n = 0$, a coherent sheaf $\f$ on $X$
$$
\da (K, \f) = \prod_{p \in K}
\mathop{\mathop{\lim}_{\leftarrow}}_l
\f_p / m_p^l \f_p \mbox{.}
$$
\item \label{pr3}
For $n > 0$, a coherent sheaf $\f$ on $X$
$$
\da (K, \f) = \prod_{p \in P(V)}
\mathop{\mathop{\lim}_{\leftarrow}}_l
\da ( {}_{\eta} K, \f_p / m_p^l \f_p )
$$
\end{enumerate}
\end{prop}
\begin{nt} \em
Since any quasicoherent sheaf on an excellent Noetherian scheme
is a direct limit of coherent sheaves, we can apply property~\ref{pr1}
of this proposition to define $\da (K, \f)$ on quasicoherent sheaves.
\end{nt}
\subsubsection{Local factors.}
The definition of $ \da (K, \f)$ is inductive.
By induction on the definition one obtains the following proposition, \cite[prop. 2.1.4.]{H}.
\begin{prop}
For integer $n > 0$, $K \subset S(X)_n$, a quasicoherent sheaf $\f$ on $X$
$$
\da (K, \f) \subset \prod_{\delta \in K} \da (\delta ,\f) \mbox{.}
$$
The inclusion is a transformation of functors.
\end{prop}
From this proposition we see that $\da (K, \f)$ is a kind of complicated adelic product inside of
$\prod\limits_{\delta \in K} \da (\delta,\f)$.
Therefore it is important to study the local factors $ \da (\delta,\f)$ for $\delta \in S(X)_n$. We have the following two propositions from \cite{H} about these local factors.
\begin{prop}
Let $\delta = (p_0, \ldots , p_n) \in S(X)_n$.
Let $U$ be an open affine subscheme which contains the point $p_n$
and therefore all of $\delta$. Let $M = \f(U)$. Then for a quasicoherent sheaf $\f$
$$
\da(\delta, \f) = \da (\delta, \tilde{M}) \mbox{,}
$$
where $\tilde{M}$ is a quasicoherent sheaf on affine $U$ which corresponds to $M$.
\end{prop}
In the following proposition local factors $\da (\delta, \f)$
are computed for affine schemes.
\begin{prop}
Let $X = \Spec R$ and $\f = \tilde{M}$
for some $R$-module $M$. Further let $\delta = (p_0, \ldots, p_n) \in S(X)_n$.
Then
\begin{equation} \label{for}
\da (\delta, \f) = C_{p_0} S_{p_0}^{-1} \ldots C_{p_n} S_{p_n}^{-1} R \otimes_R M
\mbox{.}
\end{equation}
$C_{p_0} S_{p_0}^{-1} \ldots C_{p_n} S_{p_n}^{-1} R$
is a flat Noetherian $R$-algebra. And for finitely generated $R$-modules
$$
C_{p_0} S_{p_0}^{-1} \ldots C_{p_n} S_{p_n}^{-1} R \otimes_R M =
C_{p_0} S_{p_0}^{-1} \ldots C_{p_n} S_{p_n}^{-1} M \mbox{.}
$$
\end{prop}
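For instance, let $X = \mathbb{A}^1_k = \Spec k[x]$ and $\delta = (\eta, p)$, where $\eta$ is the general point and $p = (x)$. Then $C_{p} S_{p}^{-1} k[x] = k[[x]]$, and inverting all nonzero polynomials in $k[[x]]$ amounts to inverting $x$, so formula~(\ref{for}) gives
$$
\da(\delta, \oo_X) = C_{\eta} S_{\eta}^{-1} C_{p} S_{p}^{-1} k[x] = k((x)) \mbox{,}
$$
the local field of the point $p$.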
We compare now formula (\ref{for}) from the last proposition and
formula (\ref{f}) for $K_{\delta}$. We obtain that for an $n$-dimensional
Noetherian scheme $X$, for $\delta \in S(X)_n$ and a quasicoherent sheaf $\f$
$$
\da(\delta, \f) = K_{\delta} \otimes_{\oo_X} \f \mbox{.}
$$
\begin{nt} \em
Due to theorem \ref{tl} it means that for $\delta \in S(X)_n$ the local factors $\da(\delta, \oo_X)$
on an excellent Noetherian integral $n$-dimensional scheme $X$
are finite products of $n$-dimensional local fields.
\end{nt}
\subsubsection{Adelic complexes.}
\label{adcom}
Now we want to define adelic complexes on the scheme $X$.
We have the simplicial set $S(X)$ with the usual boundary maps $\delta_i^n$
and degeneracy maps $\sigma_i^n$ for $n \in \dn$, $0 \le i \le n$.
We remark the following property, see~\cite[prop. 2.1.5.]{H}.
\begin{prop} \label{lp}
Let $K, L, M \subset S(X)_n $
such that $K \cup M = L$, $K \cap M = \emptyset$.
Then there are natural transformations $i$ and $\pi$ of functors
$$ i(\cdot) : \da (K, \cdot) \lto \da (L, \cdot)
$$
$$
\pi(\cdot) : \da (L, \cdot) \lto \da(M, \cdot)
$$
such that the following diagram is commutative and has split-exact lines for all
quasicoherent sheaves $\f$ on $X$
$$
\begin{array}{ccccccccc}
0 & \lto & \da (K, \f) & \stackrel{i(\f)}{\lto} & \da (L, \f) & \stackrel{\pi(\f)}{\lto}
& \da (M, \f) & \lto & 0
\\
& & \downarrow & & \downarrow & & \downarrow & \\
0 & \lto & \prod\limits_{\delta \in K} \da (\delta, \f) & {\lto} &
\prod\limits_{\delta \in L} \da (\delta, \f) & {\lto}
& \prod\limits_{\delta \in M} \da (\delta, \f) & \lto & 0 \mbox{.}
\end{array}
$$
\end{prop}
This proposition is proved by induction on definition-proposition~\ref{ind}.
\begin{defin}
Let $K \subset S(X)_0$, $\f$ a quasicoherent sheaf. Then let
$$
d^0 (K, \f) \; : \; \Gamma(X, \f) \lto \da(K, \f)
$$
be the canoical map, which is a natural transformation of functors.
\end{defin}
\begin{defin} \label{ddef}
Let $K \subset S(X)_{n+1}$, $L \subset S(X)_n$,
$\delta_i^{n+1} K \subset L$ for some $i \in \{0, \ldots, n+1 \}$.
We define transformations of functors
$$
d_i^{n+1} (K, L, \cdot) \; : \; \da(L, \cdot) \lto \da(K, \cdot)
$$
by the following properties.
\begin{enumerate}
\item If $i= 0$ and $\f$ is a coherent sheaf on $X$, then we apply the functor
$ \da( {}_{p} K, \cdot)$ to $\f \to \f_p/m_p^l \f_p $ and compose this map
with the projection of proposition~\ref{lp} for $L \supset {}_{p} K $.
We use the universal property of $\prod\limits_{p \in P(X)} \lim\limits_{\leftarrow}$.
\item
If $i = 1$, $n = 0$ and $\f$ is a coherent sheaf on $X$,
then the projection of proposition~\ref{lp} for $L \supset {}_{p} K$
is composed with the following map. The maps $ d^0 ( {}_{p} K, \f_p / m_p^l \f_p)$
form a projective system for $l \in \dn$ and we apply
$\prod\limits_{p \in P(X)} \lim\limits_{\leftarrow}$ to it.
\item
If $i > 0$, $n >0$, $\f$ is a coherent sheaf, then the hypothesis $\delta_i^{n+1} K \subset L$
implies $\delta_{i-1}^n ({}_{p} K) \subset {}_{p} L$ for all $p \in P(X)$. Set
$$
d_i^{n+1} (K, L, \f) = \prod_{p \in P(X)}
\mathop{\lim\limits_{\leftarrow}}\limits_{l \in \sdbn}
d_{i-1}^n( {}_{p} K, {}_{p} L, \f_p / m_p^l \f_p)
\mbox{.}
$$
\item
$
d_i^{n+1} (K, L, \cdot)
$
commutes with direct limits.
\end{enumerate}
\end{defin}
\vspace{0.5cm}
For $\delta \in S(X)_{n+1}$
and $\delta' = \delta_i^{n+1} (\delta) \in S(X)_n $
by definition~\ref{ddef} we have local
boundary map
$$
d_i^{n+1} \quad : \quad
\da(\delta' , \f) \lto \da(\delta , \f) \mbox{.}
$$
For $K \subset S(X)_{n+1}$, $L \subset S(X)_n$ with
$\delta_i^{n+1} K \subset L$
we define
$$
D_i^{n+1} (\f) \quad : \quad
\prod_{\delta \in L} \da(\delta, \f) \lto \prod_{\delta \in K} \da(\delta, \f ) \mbox{,}
$$
where $(x_{\delta})_{\delta \in L} \mapsto (y_{\delta})_{\delta \in K}$
is given by
$y_{\delta} = d_i^{n+1} (x_{\delta'})$.
The following proposition is useful for computations with the boundary maps: it describes the boundary maps $d_i^{n+1}$ by means of the boundary maps $D_i^{n+1}$ on the product of local factors.
\begin{prop}
Let $K \subset S(X)_{n+1}$, $L \subset S(X)_n$
with $\delta_i^{n+1} K \subset L$. The following diagram commutes
$$
\begin{array}{ccc}
\da(L, \f) & \stackrel{d_i^{n+1}}{\lto} & \da (K, \f) \\
\downarrow & & \downarrow \\
\prod\limits_{\delta \in L} \da(\delta, \f) & \stackrel{D_i^{n+1}}{\lto} & \prod\limits_{\delta \in K} \da(\delta, \f ) \mbox{.}
\end{array}
$$
\end{prop}
This proposition is proved by induction on the definitions.
\vspace{0.5cm}
Now for the scheme $X$ we consider
the set $S(X)_n^{(red)}$ of non degenerate $n$-dimensional simplices.
(A simplex $(p_0, \ldots p_n)$ is nondegenerate if $p_i \ne p_{i+1}$ for any $i$.)
For any $n \ge 0$, for any quasicoherent sheaf $\f$ on $X$ we denote
$$
\da^n_X (\f) = \da (S(X)_n^{(red)}, \f) \mbox{.}
$$
We consider the boundary maps
$$
d_i^{n+1} \quad : \quad \da^n_X ( \f) \lto \da^{n+1}_X (\f) \mbox{.}
$$
There are the following equalities for these boundary maps:
\begin{equation} \label{equ}
d_j^n d_i^n = d_i^n d_{j-1}^n \qquad \qquad i < j \mbox{.}
\end{equation}
For $n \ge 1$ we define $d^n : \da^{n-1}_X (\f) \lto \da^{n}_X ( \f)$ by
\begin{equation} \label{equ1}
d^{n} = \sum_{j = 0}^{n} (-1)^j d_j^n \mbox{.}
\end{equation}
We have the following proposition,
which is also a definition.
\begin{prop}
Differentials $d^n$ make $\da^*_X ( \f)$
into a cohomological complex of Abelian groups $\ad_X (\f)$,
which we call the adelic complex of the sheaf $\f$ on $X$.
\end{prop}
\proof It follows by direct calculations with formulas~(\ref{equ}) and (\ref{equ1}).
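Explicitly, splitting the double sum according to $i < j$ and $i \ge j$ and applying relation~(\ref{equ}) to the terms with $i < j$, one obtains
$$
d^{n+1} \circ d^{n} = \sum_{j=0}^{n+1} \sum_{i=0}^{n} (-1)^{i+j} \, d_j^{n+1} d_i^{n}
= \sum_{i < j} (-1)^{i+j} \, d_i^{n+1} d_{j-1}^{n} + \sum_{i \ge j} (-1)^{i+j} \, d_j^{n+1} d_i^{n} = 0 \mbox{,}
$$
since the re-indexing $j \mapsto j-1$ identifies the first sum with the negative of the second one (after renaming the summation indices).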
We have the following theorem.
\begin{Th}
For any quasicoherent sheaf $\f$ on a Noetherian scheme $X$
$$
H^i (\ad_X (\f)) = H^i (X, \f) \mbox{.}
$$
\end{Th}
\proof
The proof of this theorem is a far-reaching generalization
of the proof of theorem \ref{proposi}. Indeed, for any open subscheme $U \subset X$
we consider the following complex
\begin{equation} \label{cc}
0 \lto \f(U) \stackrel{d^0}{\lto} \da^0_U ( \f) \stackrel{d^1}{\lto} \da^1_U ( \f)
\stackrel{d^2}{\lto} \ldots \stackrel{d^n}{\lto} \da^n_U ( \f)
\stackrel{d^{n+1}}{\lto} \ldots
\end{equation}
Taking into account all $U$, we obtain that this complex is a complex of sheaves
on $X$. Moreover, by proposition~\ref{lp} the sheaves in this complex are flasque sheaves,
since $S(U)_n^{(red)} \subset S(X)_n^{(red)}$ for any $n$.
By \cite[th.~4.1.1]{H} for any affine scheme $U$ the complex~(\ref{cc})
is an exact complex. Therefore we constructed a flasque resolution of the sheaf $\f$ on $X$.
This resolution calculates the cohomology of the sheaf $\f$ on $X$.
\begin{nt} \em
We constructed here reduced adeles, since we used only nondegenerate simplices in $S(X)$.
These reduced adeles carry the essential information and they
form part of the full complex, see \cite{H}.
\end{nt}
\subsection{Adeles on algebraic surfaces.} \label{surface}
In this section we verify that the general adelic complex constructed in the previous section
coincides with the adelic complex for curves from section~\ref{adcur}.
We also give an application of the general construction of adelic complexes
to algebraic surfaces.
We consider a smooth connected algebraic curve $C$ over field $k$.
The set $S(C)_0^{(red)}$ consists of the general point $\eta$ and all closed points $p$ of the curve $C$. The set $S(C)_1^{(red)}$ consists of all pairs $(\eta, p)$.
For any coherent sheaf $\f$ on $C$ we can compute by definition
$$
\da_C^0 (\f) =
\f \otimes_{\oo_C} k(C) \; \times \; \prod_{p \in C}
\f \otimes_{\oo_C} \oo_{K_{p}} \mbox{.}
$$
Let the subset $K \subset S(C)_0^{(red)} $ consist of all closed points of the curve $C$.
We have by definition
$$
\da_C^1 (\f) = \da (K , \f_{\eta}) = \da (K , \f \otimes_{\oo_C} k(C)) =
$$
$$
= \da (K, \mathop{\lim_{\lto}}_{D \in \Div(C)} \f \otimes_{\oo_C} \oo_C(D)) =
$$
$$
= \mathop{\lim_{\lto}}_{D \in \Div(C)} \da (K, \f \otimes_{\oo_C} \oo_C(D) ) =
$$
$$
= \mathop{\lim_{\lto}}_{D \in \Div(C)} \prod_{p \in C} \f \otimes_{\oo_C} \oo_{K_p}(D) \mbox{.}
$$
Therefore the adelic complex constructed in section~\ref{adcom}
coincides with the adelic complex for curves from section~\ref{adcur}.
\vspace{1cm}
We consider a smooth connected algebraic surface $X$ over field $k$.
The set $S(X)_0^{(red)}$
consists of the general point $\eta$ of $X$,
general points of all irreducible curves $C \subset X$,
all closed points $p \in X$.
The set $S(X)_1^{(red)}$ consists of all pairs $(\eta, C)$,
$(\eta, p)$ and $(C, p)$. (In our notations we identify
the general point of a curve $C \subset X$
with the curve $C$.)
The set $S(X)_2^{(red)}$ consists of all triples $(\eta, C, p)$.
\vspace{0.5cm}
We consider $\delta = (\eta, C, p)$.
Let $f$ be a natural map from the local ring $\oo_{p,X}$ to the completion
$\hat{\oo}_{p,X}$. The curve $C$ defines a prime ideal $\C^{\,'}$ in the ring $\oo_{p,X}$.
Let $\C_1, \ldots, \C_n$ be all prime ideals of height $1$ in the ring $\hat{\oo}_{p,X}$
such that for any $i$ $f^{-1} (\C_i) = \C^{\,'}$.
Any such $\C_i$ we will call a germ of $\C$ at $p$.
For any such germ $\C_i$ we define a two-dimensional
local field
$$
K_{p,\C_i} =
\Frac \; \mathop{\mathop{\lim}_{\longleftarrow}}_l \, (
\mathop{\hoo_{p,X}}\nolimits_{ (\C_i ) } / {\C}_i^{\, l}
\mathop{\hoo_{p,X}}\nolimits_{ (\C_i ) } ) \mbox{.} $$
The ring $\mathop{\hoo_{p,X}}\nolimits_{(\C_i)}$
is a localization of the ring
$\hoo_{p,X}$
at the prime ideal $\C_i$.
Then according to formula~(\ref{f}),
we have (see~\cite{FP}, \cite{P1})
$$
\da (\delta, \oo_X) = K_{\delta} = \bigoplus_{i=1}^{i=n} K_{p, \C_i} \mbox{.}
$$
Similarly we have
$$
\da ((C, p), \oo_X ) = \bigoplus_{i=1}^{i=n} \oo_{ K_{p, \C_i}} \mbox{.}
$$
We compute by definition
$$
\da ((\eta, C), \oo_X ) = K_C \mbox{,}
$$
where the field $K_C$ is the completion of the field $k(X)$ along the discrete valuation
given by irreducible curve $C$ on $X$.
And from definition we obtain that $
\da ((\eta, p), \oo_X)
$ is a subring in $\Frac(\hat{\oo}_{p, X} )$
generated by subrings $k(X)$ and $\hat{\oo}_{p,X}$.
We denote this subring by $K_p$.
By definition we compute
$$
\da ((\eta), \oo_X) = k(X) \mbox{,}
$$
$$
\da ((C), \oo_X) = \oo_{K_C} \mbox{,}
$$
$$
\da ( (p), \oo_X ) = \hat{\oo}_{p,X} \mbox.
$$
\begin{nt} \label{remar} \em
The local boundary maps $d_i^n$ give natural embeddings of rings $\da ((\eta), \oo_X) $,
$\da ((C), \oo_X)$, $\da ( (p), \oo_X )$, $\da ((\eta, p), \oo_X)$,
$\da ((\eta, C), \oo_X )$,
$ \da ((C, p), \oo_X )$ to the ring $K_{\delta}$.
\end{nt}
By definition we have
$$
\da^0_X (\oo_X) \; = \;
k(X) \; \times \prod_{C \subset X} \oo_{K_C} \; \times \; \prod_{p \in X} \hat{\oo}_{p,X}
\mbox{.}
$$
\vspace{0.5cm}
From proposition~\ref{ind} and similarly to the case of algebraic curve we
compute the ring $ \da_X^2 (\oo_X) $, see details in~\cite{FP}, \cite{P2}.
For any prime ideal $\C \subset \hat{\oo}_{p, X}$ of height $1$ we define the subring
$\hat{\oo}_{p,X} (\infty \C)$ of $ K_{p, \C}$:
$$\hat{\oo}_{p,X} (\infty \C) = \mathop{\lim\limits_{\lto}}\limits_l t_{\C}^{-l}
\hat{\oo}_{p,X} \mbox{,} $$
where $t_{\C}$ is a generator
of ideal $\C$ in $\hat{\oo}_{p,X}$.
The ring $\hat{\oo}_{p,X} (\infty \C)$ does not depend on
the choice of $t_{\C}$.
By $p \in \C \subset X$ we denote a germ at $p$ of an irreducible curve on $X$.
Now we have
$$
\da_X^2 (\oo_X) \; =
\; \left\{ \{ f_{p,\C} \} \in \prod_{p \in \C \subset X} K_{p,\C} \ \mbox{satisfying the following two conditions} \right\} \mbox{.}
$$
\begin{enumerate}
\item There exists a divisor $D$ on $X$ such that for any $p \in \C \subset X$
$$
\nu_{K_{p,\C}} ( f_{p,\C} ) \ge \nu_{\C} (D) \mbox{.}
$$
\item For any irreducible curve $C \subset X$, any integer $k$ and all except
a finite number of points $p \in C$ we have that
inside of the group $(K_{p,\C} \; \mod \C^{\, k} \oo_{K_{p, \C}})$
$$
f_{p,\C} \; \mod \C^{\, k} \oo_{K_{p, \C}} \; \in \;
\hat{\oo}_{p,X} (\infty \C) \; \mod \C^{\, k} \oo_{K_{p, \C}} \mbox{.}
$$
Here $\C$ denotes a germ of the curve $C$ at $p$.
\end{enumerate}
\vspace{0.5cm}
We have
$$
\da^1_X (\oo_X) = (\prod_{C \subset X} K_C) \cap \da^2_X (\oo_X) \; \times
\; (\prod_{p \in X} K_p) \cap \da^2_X (\oo_X) \;
\times \;
(\prod_{p \in \C \subset X }
\oo_{K_{p,\C}}) \cap \da^2_X (\oo_X) \mbox{,}
$$
where we take the intersection inside of $\prod\limits_{p \in \C \subset X} K_{p, \C}$
due to remark \ref{remar} and diagonal embeddings
$$\prod\limits_{C \subset X} K_C \lto
\prod\limits_{p \in \C \subset X } K_{p,\C} $$ and
$$\prod\limits_{p \in X} K_p \lto \prod\limits_{p \in \C \subset X } K_{p,\C} \mbox{.}$$
From formula~(\ref{equ1}) and the explicit description of the rings $\da^*_X(\oo_X)$ it is easy to
describe the differentials $d^n$ in the complex $\ad_X (\oo_X)$ (\cite{FP}, \cite{P2}).
Indeed, let
$$
A_0 = k(X) \mbox{,} \qquad A_1 = \prod_{C \subset X} \oo_{K_C} \mbox{,} \qquad
A_2 = \prod_{p \in X} \hat{\oo}_{p,X} \mbox{,}
$$
$$
A_{01} = (\prod_{C \subset X} K_C) \cap \da^2_X (\oo_X) \mbox{,} \qquad \qquad
A_{02} =
(\prod_{p \in X} K_p) \cap \da^2_X (\oo_X) \mbox{,}
$$
$$
A_{12} =
(\prod_{p \in \C \subset X }
\oo_{K_{p,\C}}) \cap \da^2_X (\oo_X) \mbox{,} \qquad \qquad
A_{012} = \da^2_X (\oo_X)
\mbox{.}
$$
Then the adelic complex $\ad_X (\oo_X)$ is
$$
\begin{array}{ccccc}
A_0 \oplus A_1 \oplus A_2 & \lto & A_{01} \oplus A_{02} \oplus A_{12} &
\lto & A_{012} \\
(a_0, a_1, a_2) & \mapsto & (a_1 - a_0, a_2 - a_0, a_2 -a_1) &
& \\
& & (a_{01}, a_{02}, a_{12}) &
\mapsto & a_{01} - a_{02} + a_{12} \mbox{.}
\end{array}
$$
\begin{nt} \label{interes} \em
We remark the following interesting property,
see \cite[remark 5]{P5}, \cite{FP}.
For any subset $I \subset [0,1,2]$ we have
an embedding $A_I \hookrightarrow A_{012}$. Now for any subsets $ I, J \subset [0,1,2]$
we have that inside of group $A_{012}$
$$ A_I \cap A_J = A_{I \cap J} \mbox{.}
$$
This property is also true for corresponding components of adelic complex of any locally free sheaf on $X$.
\end{nt}
\section{Restricted adelic complexes} \label{sect4}
In this section we describe restricted adelic complexes.
The main difference of restricted adelic complexes from adelic complexes constructed
in section~\ref{adel} is that restricted
adelic complexes are connected with one fixed chain (or flag) of irreducible subvarieties
of a scheme $X$.
Restricted adelic complexes come from the so-called Krichever correspondence,~\cite{P5},~\cite{Os},
but see also~\cite{P4}
for connections with the theory of $\zeta$-functions of algebraic
curves.
Restricted adelic complexes on algebraic curves come originally from the
theory of integrable systems, see~\cite{SW}. For algebraic surfaces restricted adelic complexes
were constructed by A.N. Parshin in \cite{P5}.
For higher dimensional schemes restricted adelic complexes were constructed by the author
in~\cite{Os}.
\subsection{Restricted adelic complexes on algebraic curves and surfaces.} \label{curve}
We consider an irreducible algebraic curve $C$ over $k$.
We fix a smooth closed point $p \in C$.
For any coherent sheaf $\f$ of rank $r$ on $C$ we consider the following complex
\begin{equation} \label{cad}
\begin{array}{ccc}
\Gamma (C \setminus p, \f) \; \oplus \;
(\f \otimes_{\oo_C} \oo_{K_p} ) &
\lto & \f \otimes_{\oo_C} K_p \\
(a_0 \oplus a_1) & \mapsto & a_1 -a_0
\mbox{.}
\end{array}
\end{equation}
We note that for a torsion free sheaf $\f$ we have natural embeddings
$$
\f \otimes_{\oo_C} \oo_{K_p} \lto \f \otimes_{\oo_C} K_p
$$
$$
\Gamma (C \setminus p, \f) \lto \f \otimes_{\oo_C} K_p \mbox{,}
$$
where the last embedding is given by
$$
\Gamma (C \setminus p, \f) \lto
\Gamma (\Spec \oo_p \setminus p, \f) \lto
\Gamma (\Spec \oo_{K_p} \setminus p, \f) = \f {\otimes}_{\oo_C} K_p \mbox{.}
$$
Besides, after the choice of a basis of the module $\f_p$ over the ring $\oo_p$ we have
$$
\f \otimes_{\oo_C} K_p = K_p^{\oplus r} \mbox{.}
$$
Therefore in this case complex~(\ref{cad}) is a complex of subgroups inside of $K_p^{\oplus r}$, where $K_p$ is a $1$-dimensional local field.
There is the following theorem, see, for example, \cite{P5}, \cite{P4}.
\begin{Th} \label{th33}
The cohomology groups of complex~(\ref{cad})
coincide with the cohomology groups $H^*(C, \f)$.
\end{Th}
A chain of quasi-isomorphisms between complex~(\ref{cad}) and the adelic complex $\ad_C(\f)$
was constructed in~\cite{P5}; this proves theorem~\ref{th33}.
We remark that it is important for the proof, that $C \setminus p$
is an affine curve, see also remark~\ref{Chech} below.
The complex~(\ref{cad}) is called {\em restricted} adelic complex on $C$
associated with the point $p$.
\vspace{1cm}
Now let $X$ be an algebraic surface over $k$. We fix
an irreducible curve $C \subset X$,
and a point $p \in C$ which is a smooth point on both $C$ and $X$.
Let $\f$ be a torsion free coherent sheaf on $X$.
We introduce the following notations from~\cite{P5}, \cite{P4}. Let $x \in C$,
$$
\hat{\f}_x \qquad \mbox{,} \qquad \hat{\f}_C \qquad \mbox{,} \qquad \hat{\f}_{\eta}
$$
be the completions of the stalks of the sheaf $\f$ at the scheme points given by $x$, the irreducible curve $C$
and the general point $\eta$ of $X$ respectively.
$$
B_x (\f) = \bigcap_{\D \ne C} \left( ( \hat{\f}_x \otimes K_x) \cap ( \hat{\f}_x \otimes \oo_{K_{x,\D}}) \right) \mbox{,}
$$
where $\D$ runs over all germs at $x$ of irreducible curves
on $X$, which are not equal to $C$, and the intersection is done inside of the group $ \hat{\f}_x \otimes K_x$,
$$
B_C (\f) = (\hat{\f}_C \otimes K_C) \cap \left( \bigcap_{x \ne p} B_x (\f) \right) \mbox{,}
$$
where the intersection is done inside of $\hat{\f}_x \otimes K_{x, \C}$
for all closed points $ x \ne p $ of $C$ and all germs $\C$ at $x$ of $C$,
$$
A_C (\f) = B_C (\f) \cap \hat{\f}_C \mbox{,}
$$
$$
A(\f) =
\hat{\f}_{\eta} \cap \left( \bigcap_{x \in X -C} \hat{\f}_x \right) \mbox{.}
$$
We note that
$$
A(\f) = \Gamma (X - C, \f)
$$
and for a smooth point $x \in C$ the space $B_x (\oo_X)$ coincides with the space
$\hat{\oo}_{x,X} (\infty \C)$
from section~\ref{surface}, where $\C$ is the germ of $C$ at $x$.
The following theorem was proved in \cite[th. 3]{P5}.
\begin{Th} \label{teres}
Let $X$ be an irreducible algebraic surface over a field $k$,
$C \subset X$ be an irreducible curve,
and $p \in C$ be a smooth point on both $C$ and $X$.
Let $\f$ be a torsion free coherent sheaf on $X$.
Assume that the surface $X - C$ is affine. Then there exists a chain of quasi-isomorphisms
between adelic complex $\ad_X ( \f)$ and the following
complex
\begin{equation} \label{rescom}
\begin{array}{ccccc}
A(\f) \oplus A_C (\f) \oplus \hat{\f}_p & \lto & B_C (\f) \oplus B_p (\f) \oplus
(\hat{\f}_p \otimes {\oo}_{K_{p, \C}}) &
\lto & \hat{\f}_p \otimes K_{p, \C} \\
(a_0, a_1, a_2) & \mapsto & (a_1 - a_0, a_2 - a_0, a_2 -a_1) &
& \\
& & (a_{01}, a_{02}, a_{12}) &
\mapsto & a_{01} - a_{02} + a_{12} \mbox{.}
\end{array}
\end{equation}
\end{Th}
Under the conditions of this theorem the cohomology groups of the complex~(\ref{rescom})
coincide with the cohomology groups of the adelic complex $\ad_X ( \f)$,
and therefore they are equal to $H^*(X, \f)$.
\begin{defin}
Complex~(\ref{rescom}) is called restricted adelic complex on $X$ associated with
the curve $C$ and the point $p \in C$.
\end{defin}
There is the following proposition, see~\cite[prop.4]{P5}.
\begin{prop} \label{ppp}
Under the conditions of theorem~\ref{teres}
we suppose also that $\f$ is a locally free sheaf, $X$ is a projective variety, the local rings of $X$ are Cohen-Macaulay
and the curve $C$ is a locally complete intersection. Then, inside the field
$K_{p, \C}$, we have
$$
B_C (\f) \cap B_p (\f) = A (\f) \mbox{.}
$$
\end{prop}
Let the rank of $\f$ be $r$.
Then after the choice of a basis of the free $\hat{\oo}_{p,X}$-module $\hat{\f}_p$ of rank $r$
we have
$$
\hat{\f}_p = \hat{\oo}_{p,X}^{\oplus r} \mbox{,}
$$
$$
$$
\hat{\f}_p \otimes K_{p, \C} = K_{p, \C}^{\oplus r} \mbox{,}
$$
$$
\hat{\f}_p \otimes {\oo}_{K_{p, \C}} = {\oo}_{K_{p, \C}}^{\oplus r} \mbox{,}
$$
$$
B_p(\f) = B_p ^{\oplus r} = \hat{\oo}_{p,X} (\infty \C)^{\oplus r} \mbox{,}
$$
$$
A_C (\f) = B_C (\f) \cap {\oo}_{K_{p, \C}}^{\oplus r} \mbox{,}
$$
where the last intersection is done inside of $K_{p, \C}^{\oplus r}$.
Now due to proposition~\ref{ppp} we obtain that complex~(\ref{rescom})
is a complex of subgroups of $K_{p, \C}^{\oplus r}$
and is uniquely determined by one subgroup $B_C (\f)$ of $K_{p, \C}^{\oplus r}$.
In fact, all the other components of complex~(\ref{rescom}) can be defined
by intersections of $B_C (\f)$ with subgroups of $K_{p, \C}^{\oplus r}$,
which do not really depend on the sheaf $\f$.
\subsection{Restricted adelic complexes on higher-dimensional schemes.}
In this section we construct restricted adelic complexes for arbitrary schemes.
These complexes will generalize corresponding complexes from section~\ref{curve}.
\subsubsection{General definitions.}
Let $X$ be
a Noetherian separated scheme.
Consider a flag of closed subschemes
$$
X \supset Y_0 \supset Y_1 \supset \ldots \supset Y_n
$$
in $X$.
Let $J_j$ be the ideal sheaf of $Y_j$ in $X$, $0 \le j \le n$.
Let $i_j$ be the embedding
$Y_j \hookrightarrow X$.
Let $U_i$ be an open subscheme of $Y_i$
complementing $Y_{i+1}$, $0 \le i \le n-1$.
Let $j_i : U_i \hookrightarrow Y_i$ be the open embedding
of $U_i$
in $Y_i$, $0 \le i \le n-1$.
Put
$U_n = Y_n$
and let $j_n$ be
the identity morphism from
$U_n$ to $Y_n$.
Assume that every point
$x \in X$
has an open affine neighbourhood
$U \ni x$
such that
$U \cap U_i$ is an affine scheme for any $0 \le i \le n$.
In what follows, a flag of subschemes
$\{Y_i, \; 0 \le i \le n \}$ with this condition
is called a flag with { \em locally affine complements.}
\begin{nt} { \em
The last condition (existence of locally affine complements) holds, for example,
in the following cases:
\begin{itemize}
\item $Y_{i+1}$
is a Cartier divisor on
$Y_i$ for $0 \le i \le n-1$, and
\item
$U_i$ is an affine scheme for any $0 \le i \le n-1$ (the intersection of two open affine subschemes on a separated scheme
is an affine subscheme).
\end{itemize}
}
\end{nt}
\bigskip
Consider the $n$-dimensional
simplex
and its standard simplicial set (without degeneracy).
To be precise, consider the set:
$$
(\{ 0\}, \{ 1\}, \ldots, \{ n\})
$$
(all the integers between $0$ and $n$.)
Then the simplicial set $S = \{ S_k \}$ is given by
\begin{itemize}
\item $S_0 \eqdef \{ \{ 0\}, \{ 1\}, \ldots, \{ n\} \} $.
\item
$ S_k \eqdef \{ (\eta_0, \ldots, \eta_k),
\quad \mbox{where} \quad \eta_l \in S_0 \quad \mbox{and} \quad
\eta_{l-1} < \eta_l \} $.
\end{itemize}
The boundary map $\partial_i$ ($0 \le i \le k$)
is given by eliminating the
$i$-th component of the vector $(\eta_0, \ldots, \eta_k)$
to give
the $i$-th face of $(\eta_0, \ldots, \eta_k)$.
Let $ {\rm \bf QS}(X)$ be the category of quasicoherent sheaves on $X$.
Let ${\rm \bf Sh}(X) $ be the category of sheaves of Abelian groups on $X$.
Let $f : Y \longrightarrow X$ be a morphism of schemes.
Then $f^*$ always denotes
the pull-back functor in the category
of sheaves of Abelian groups, and $f_*$ is
the direct image functor
in the category of sheaves of Abelian groups.
We give the following definition from~\cite{Os}.
\begin{defin}
For any $(\eta_0, \ldots , \eta_k) \in S_k$
we define a functor
$$
V_{(\eta_0, \ldots, \eta_k)} \; : \; {\rm \bf QS}(X) \lto {\rm \bf Sh}(X)
\mbox{,}
$$
which is uniquely determined by the following inductive conditions:
\begin{enumerate}
\item
$V_{(\eta_0, \ldots, \eta_k)} $
commutes with direct limits.
\item
If $\f$ is a coherent sheaf and $\eta \in S_0$,
then
$$
V_{\eta}(\f) \eqdef
\mathop{\pl}\limits_{m \in \bf{N}}
(i_{\eta})_* (j_{\eta})_*
(j_{\eta})^* (\f / J^m_{\eta} \f) \mbox{.}
$$
\item
If $\f$ is a coherent sheaf
and $(\eta_0, \ldots, \eta_k) \in S_k$, $k \ge 1$,
then
$$
V_{(\eta_0, \eta_1, \ldots, \eta_k)}(\f) \eqdef
\mathop{\pl}\limits_{m \in {\bf N}}
V_{(\eta_1, \ldots, \eta_k)}
\left( (i_{\eta_0})_* (j_{\eta_0})_* (j_{\eta_0})^*
(\f / J^m_{\eta_0} \f) \right) \mbox{.}
$$
\end{enumerate}
\end{defin}
We shall sometimes use
the equivalent notation for
$V_{(\eta_0, \ldots, \eta_k)}(\f)$,
in which the closed subschemes are indicated explicitly:
$$
V_{(\eta_0, \ldots, \eta_k)}(\f) =
V_{(Y_{\eta_0}, \ldots, Y_{\eta_k})}(X, \f) \mbox{.}
$$
There is
the following proposition, \cite[prop. 1]{Os}, which is proved by induction.
\begin{prop} \label{predl1}
Let $ \sigma = (\eta_0, \ldots, \eta_k) \in S_k$.
Then the following assertions hold.
\begin{enumerate}
\item \label{pun1}
The functor $V_{\sigma} :
\QS(X) \lto \Sh(X)$ is well defined.
\item \label{pun2}
The functor $V_{\sigma}$
is exact and additive.
\item \label{pun3}
The functor $V_{\sigma}$
is local on $X$, that is,
for any open
$U \subset X$
and any quasicoherent sheaf
$\f$ on $X$ we have
$$
V_{(Y_{\eta_0}, \ldots, Y_{\eta_k})}(X, \f) \mid_U =
V_{(Y_{\eta_0} \cap U, \ldots, Y_{\eta_k} \cap U )} (U, \f \mid_U) \mbox{.}
$$
(If $Y_j \cap U = \o$,
then $Y_j \cap U$ is the empty subscheme of $U$
defined by the ideal sheaf $\oo_U$.)
\item For any quasicoherent sheaf $\f $ on $X$
the sheaf $V_{(\eta_0, \ldots, \eta_k)}(\f)$ is a sheaf of
$\oo_X$-modules
supported on the subscheme $Y_{\eta_k}$.
(In general, this sheaf is not quasicoherent.)
\item \label{pun5}
For any quasicoherent sheaf
$\f$ on $X$ we have
$$
V_{\sigma}(\f) =
V_{\sigma}(\oo_X) \otimes_{\oo_X} \f \mbox{.}
$$
\item \label{pun7}
If all $U_i$ are affine, $0 \le i \le n$,
then for any quasicoherent
sheaf $\f$ on $X$ and any $m \ge 1$
we have
$$
H^m (X, V_{\sigma}(\f)) = 0 \mbox{.}
$$
\end{enumerate}
\end{prop}
\vspace{0.5cm}
\subsubsection{Construction of restricted adelic complex.}
We consider the standard $n$-simplex
$S= \{ S_k, \; 0 \le k \le n \}$ without degeneracy.
If $\sigma= (\eta_0, \ldots, \eta_k) \in S_k$,
then $\partial_i (\sigma)$ is the $i$th face of $\sigma$, $0 \le i \le k$.
We {\em define} a morphism of functors
$
d_i(\sigma) \quad : \quad V_{\partial_i(\sigma)} \lto V_{\sigma}$,
as the morphism that
commutes with direct limits and
coincides
on coherent sheaves with the map
\begin{equation} \label{tank}
V_{\partial_i(\sigma)}(\f) \lto V_{\sigma}(\f) \mbox{}
\end{equation}
defined by the following rules.
\begin{itemize}
\item[a)]
If $i =0$,
then (\ref{tank})
is obtained by applying the functor
$V_{\partial_0 (\sigma)}$
to the map
$$
\f \lto
(i_{\eta_0})_* (j_{\eta_0})_* (j_{\eta_0})^*
(\f / J_{\eta_0}^m \f)
$$
and passing to the projective limit with respect to $m$;
\item[b)]
If $i=1$ and $k=1$,
then we have the natural map
$$
(i_{\eta_0})_* (j_{\eta_0})_* (j_{\eta_0})^*
(\f / J_{\eta_0}^m \f)
\lto
V_{(\eta_1)} ((i_{\eta_0})_* (j_{\eta_0})_* (j_{\eta_0})^*
(\f / J_{\eta_0}^m \f)) \mbox{.}
$$
Passing to the projective limit with respect to
$m$,
we get the map~(\ref{tank}) in this case.
\item[c)]
If $i \ne 0$ and $k >1$,
then we use induction on
$k$ to get the map
$$
V_{\partial_{i-1}
(\partial_0 (\sigma))
} ((i_{\eta_0})_* (j_{\eta_0})_* (j_{\eta_0})^*
(\f / J_{\eta_0}^m \f))
\lto
V_{
\partial_0 (\sigma)
} ((i_{\eta_0})_* (j_{\eta_0})_* (j_{\eta_0})^*
(\f / J_{\eta_0}^m \f)) \mbox{.}
$$
Passing to the projective limit with respect to $m$
we get the map~(\ref{tank})
in this case.
\end{itemize}
There is the following proposition,~\cite[prop. 3]{Os}.
\begin{prop} \label{predl3}
For any $1 \le k \le n$, $0 \le i \le k$ let
$$
d_i^k \eqdef \sum_{\sigma \in S_k} d_i(\sigma)
\quad : \quad \bigoplus_{\sigma \in S_{k-1}} V_{\sigma}
\lto
\bigoplus_{\sigma \in S_k} V_{\sigma} \mbox{.}
$$
We define the map
$$d_0^0 \quad : \quad id \lto \bigoplus_{\sigma \in S_0}
V_{\sigma} \mbox{}$$
as the direct sum of the natural maps
$\f \lto
V_{\sigma} (\f)$. (Here $id$ is the functor
of the natural imbedding of
$\QS(X)$ into $\Sh(X)$,
$\f$ is a quasicoherent sheaf on $X$,
$\sigma \in S_0$.)
Then for all $0 \le i < j \le k \le n-1$ we have
\begin{equation} \label{npp}
d_j^{k+1} d_i^k = d_i^{k+1} d_{j-1}^k \mbox{.}
\end{equation}
\end{prop}
Let
$$
d^m \eqdef \sum\limits_{0 \le i \le m} (-1)^i d^m_i
$$
Then,
given any quasicoherent sheaf $\f$ on $X$,
proposition~\ref{predl3} enables us
to construct
the complex of sheaves $V(\f)$ in the standard way:
$$
\ldots
\lto
\bigoplus_{\sigma \in S_{m-1}} V_{\sigma}(\f)
\stackrel{d^m}{\lto}
\bigoplus_{\sigma \in S_m} V_{\sigma}(\f)
\lto
\ldots \mbox{.}
$$
The property $d^{m+1} d^m = 0$ follows
from~(\ref{npp})
by an easy direct calculation.
\medskip
We have the following theorem,~\cite[th. 1]{Os}.
\begin{Th} \label{teorem1}
Let $X$
be a Noetherian separated scheme and let
$Y_0 \supset Y_1 \supset \ldots \supset Y_n$ be a flag
of closed subschemes with locally affine complements.
Assume that $Y_0 = X$.
Then the following complex is exact:
\begin{equation} \label{kff}
0 \lto \f \stackrel{d^0}{\lto} V(\f) \lto 0 \mbox{.}
\end{equation}
\end{Th}
\proof We give a sketch of the proof. It suffices to consider the case when the sheaf $\f$
is coherent. We consider the exact sequence of sheaves
$$
0 \lto \h \lto \f \lto (j_0)_* (j_0)^* \f \lto \g \lto 0 \mbox{.}
$$
Since the functors $V_{\sigma}$ are exact for all $\sigma$,
we obtain the following exact sequence of complexes of sheaves:
\begin{equation} \label{posledova}
0 \lto V(\h) \lto V(\f) \lto V((j_0)_* (j_0)^* \f) \lto V(\g) \lto 0 \mbox{.}
\end{equation}
The sheaves $\h$ and $\g$ are supported on $Y_1$.
Therefore by induction we may assume that the complexes
$$
0 \lto \h \stackrel{d^0}{\lto} V (\h) \lto 0 \mbox{,}
$$
$$
0 \lto \g \stackrel{d^0}{\lto} V (\g) \lto 0
$$
are already exact. The complex
$$ 0 \lto (j_0)_*(j_0)^*\f \stackrel{d^0}{\lto} V( (j_0)_* (j_0)^* \f ) \lto 0 $$
is exact, because the complex $V( (j_0)_* (j_0)^* \f )$ has the same components
$V_{\sigma'} (\f)$ of degrees $k$ and $k+1$, $\sigma' = (0, \eta_0, \ldots, \eta_k)$
for $\sigma = (\eta_0, \ldots, \eta_k) \in S_k$. Now the theorem follows from
sequence~(\ref{posledova}).
\vspace{0.5cm}
For any $\sigma \in S_k$ we define
$$
A_{\sigma}(\f) \eqdef H^0(X, V_{\sigma} (\f)) \mbox{.}
$$
We have the following proposition,~\cite[prop. 4]{Os}.
\begin{prop} \label{predl4}
Let $X$ be a Noetherian separated scheme,
and let
$Y_0 \supset Y_1 \supset \ldots \supset Y_n$ be a flag
of closed subschemes
such that
$U_i$ is affine, $0 \le i \le n$.
Let $\sigma \in S_k$ be arbitrary.
\begin{enumerate}
\item \label{punkt1}
Then
$A_{\sigma}$ is an exact and additive
functor: $\QS(X) \lto \Ab$.
\item If $X = \Spec A$ and
$M$ is some $A$-module,
then
$$
A_{\sigma}(\tilde{M}) = A_{\sigma} (\oo_X) \otimes_A M \mbox{.}
$$
\end{enumerate}
\end{prop}
Let $\f$ be any quasicoherent sheaf on $X$.
Applying the functor
$H^0(X, \cdot)$ to the complex $V(\f)$,
we obtain the complex $A(\f)$ of Abelian groups:
$$
\ldots \lto \bigoplus_{\sigma \in S_{m-1}} A_{\sigma}(\f)
\lto
\bigoplus_{\sigma \in S_{m}} A_{\sigma}(\f)
\lto
\ldots \mbox{.}
$$
\medskip
Now we have the following theorem, see~\cite[th. 2]{Os}.
\begin{Th} \label{teorem2}
Let $X$ be a Noetherian separated scheme. Let
$Y_0 \supset Y_1 \supset \ldots \supset Y_n$ be a flag
of closed subschemes
such that
$Y_0 = X$ and
$U_i$ is affine, $0 \le i \le n$.
Then the cohomology of the complex
$A(\f)$
coincides with that of the sheaf
$\f$ on $X$,
that is, for any $i$
$$
H^i(X, \f) = H^i(A(\f)) \mbox{.}
$$
\end{Th}
\proof
It follows from theorem~\ref{teorem1}
and assertion~\ref{pun7} of proposition~\ref{predl1}
that $V(\f)$ is
an acyclic resolution for the sheaf
$\f$.
Hence
the
cohomology of
$\f$
may be calculated
by means of global sections of this resolution.
Theorem~\ref{teorem2} is proved. \\
\medskip \\
This theorem
immediately yields the following geometric corollary, see~\cite[th. 3]{Os}.
\begin{Th} \label{teorem3}
Let $X$ be a projective algebraic scheme
of dimension $n$ over a field.
Let
$Y_0 \supset Y_1 \supset \ldots \supset Y_n$ be
a flag of closed subschemes
such that
$Y_0 = X$ and
$Y_i$
is an ample divisor on
$Y_{i-1}$ for
$1 \le i \le n$.
Then for any quasicoherent sheaf
$\f$
on $X$ and
any $i$ we have
$$
H^i(X, \f) = H^i(A(\f)) \mbox{.}
$$
\end{Th}
\proof Since
$Y_i$ is an ample divisor on $Y_{i-1}$
for $1 \le i \le n$, we see that
$U_i$ is an affine scheme for all $0 \le i \le n-1$.
Since $ \dm Y_n =0$, we obtain that $U_n= Y_n$ is also affine.
Applying theorem~\ref{teorem2}
we complete the proof.
\smallskip
\medskip
\begin{nt} \label{Chech}
{\em
For any quasicoherent sheaf
$\f$
and any $\sigma = (\eta_0) \in S_0$,
$A_{\sigma} (\f)$
is the group of sections over $U_{\eta_0}$
of the sheaf $\f$
lifted to the formal neighbourhood
of the subscheme $Y_{\eta_0}$
in $X$.
The complex
$A(\f)$ can be interpreted as
the \v{C}ech complex for this ``acyclic covering''
of the scheme $X$.
}
\end{nt}
\begin{defin}
The complex $A (\f)$
is called restricted adelic complex on $X$ associated with flag
$
Y_0 \supset Y_1 \supset \ldots \supset Y_n
$.
\end{defin}
\begin{nt} { \em
\begin{itemize}
\item If $C$ is an algebraic curve, $Y_0 = C$ and $Y_1 = p$ is a smooth point,
then $A(\f)$ coincides with complex~(\ref{cad}). Indeed,
$$
A_0(\f) = \Gamma (C \setminus p, \f) \mbox{,}
$$
$$
A_1 (\f) = \f \otimes_{\oo_C} \oo_{K_p} \mbox{,}
$$
$$
A_{01} (\f) = \f \otimes_{\oo_C} K_p \mbox{.}
$$
\item If $X$ is an algebraic surface, $Y_0 = X$, $Y_1 = C$ and $Y_2 = p$ is a smooth point
on both $C$ and $X$, then $ A(\f)$ coincides with complex~(\ref{rescom}). Indeed,
$$
A_0 (\f) = A(\f) \mbox{,}
\qquad \qquad
A_1 (\f) = A_C (\f) \mbox{,}
$$
$$
A_2 (\f) = \hat{\f}_p \mbox{,}
\qquad \qquad
A_{01} (\f) = B_C (\f) \mbox{,}
$$
$$
A_{02} (\f) = B_p (\f) \mbox{,}
\qquad \qquad
A_{12} (\f) = \hat{\f}_p \otimes {\oo}_{K_{p, \C}} \mbox{,}
$$
$$
A_{012} (\f) = \hat{\f}_p \otimes K_{p, \C} \mbox{.}
$$
\end{itemize}
}
\end{nt}
\medskip
\subsubsection{Reconstruction of restricted adelic complex.}
By proposition~\ref{predl3} we have
the natural map
$$
d_i (\sigma) \: : \:
A_{\partial_i (\sigma)} (\f) \lto A_{\sigma} (\f)
$$
for any $\sigma \in S_k$, $1 \le k \le n$,
and any $i$, $0 \le i \le k$.
Taking~(\ref{npp}) into account,
we obtain
the natural map
$$
\quad A_{\sigma_1}(\f) \lto A_{\sigma_2}(\f) \quad
$$
for any locally free sheaf $\f$
on $X$ and any $\sigma_1, \sigma_2 \in S$, $ \sigma_1 \subset \sigma_2$.
There is the following proposition, see~\cite[th. 4]{Os}.
\begin{prop} \label{tend1}
Let $X$
be a projective equidimensional
Cohen-Macaulay scheme of dimension
$n$ over a field.
Let $Y_0 \supset Y_1 \supset \ldots
\supset Y_n$ be a flag of closed subschemes
such that
$Y_0 = X$ and
$Y_i$ is an ample Cartier divisor on
$Y_{i-1}$
for
$1 \le i \le n$.
Then
the following assertions hold for any locally free sheaf $\f$ on $X$.
\begin{enumerate}
\item \label{unkt1}
The natural map
$
H^0(X, \f) \lto A_{\sigma} (\f)
$
is an embedding
for any $\sigma \in S_k$, $0 \le k \le n$.
\item \label{unkt2}
The natural map
$
\quad A_{\sigma_1}(\f) \lto A_{\sigma_2}(\f) \quad
$
is an embedding
for any locally free sheaf $\f$
on $X$ and any $\sigma_1, \sigma_2 \in S$, $ \sigma_1 \subset \sigma_2$.
\end{enumerate}
\end{prop}
\begin{nt} { \em
We note that any integral Noetherian scheme of dimension $1$
is a Cohen-Macaulay scheme.
Any normal Noetherian scheme of dimension $2$ is a Cohen-Macaulay scheme,
see~\cite[Ch. II, th. 8.22A]{Ha}.
}
\end{nt}
\bigskip
By $(0,1, \ldots, n) \in S$ we denote the unique face of dimension $n$ in $S$.
By proposition~\ref{tend1}
we can embed
$A_{\sigma_1} (\f)$
and $A_{\sigma_2} (\f)$
in $A_{(0,1, \ldots, n)} (\f)$
for any $\sigma_1, \sigma_2 \in S$
and any locally free sheaf $\f$.
Now we formulate the following theorem, see~\cite[th. 5]{Os}.
\begin{Th} \label{tend2}
Let all the hypotheses of proposition~\ref{tend1}
be satisfied.
Then the following assertions hold for any locally free sheaf $\f$
and any
$\sigma_1, \sigma_2 \in S$.
\begin{enumerate}
\item \label{pu1}
If $\sigma_1 \cap \sigma_2 = \o$,
then, inside of $A_{(0,1, \ldots, n)}$,
$$
A_{\sigma_1} (\f) \cap A_{\sigma_2} (\f) = H^0 (X, \f) \mbox{.}
$$
\item \label{pu2}
If $\sigma_1 \cap \sigma_2 \ne \o$,
then, inside of $A_{(0,1, \ldots, n)}$,
$$
A_{\sigma_1}(\f) \cap A_{\sigma_2} (\f)
= A_{\sigma_1 \cap \sigma_2} (\f) \mbox{.}
$$
\end{enumerate}
\end{Th}
\begin{nt} {\em
Theorem~\ref{tend2} is similar to the property of adelic complexes $\ad_X (\f)$ noticed
in remark~\ref{interes}.
}
\end{nt}
\vspace{0.5cm}
We assume
that the hypotheses of proposition~\ref{tend1}
hold and that
the scheme $X$ is defined over the field $k$.
We also assume that
$Y_n=p$, where $p$ is a smooth point on each $Y_i$, $0 \le i \le n$.
Let us choose and fix local parameters
$t_1, \ldots, t_n \in
\widehat{\oo}_{p,X}$
such that
$t_{i} {|}_{Y_{i-1}} = 0$
is a local equation of the divisor
$Y_i$
in the formal neighbourhood of the point~$p$
on the scheme $Y_{i-1}$, $1 \le
i \le n$.
Let $\f$ be a rank~$1$ locally free
sheaf on $X$.
We fix a trivialization $e_p$ of $\f$
in a formal neighbourhood of the point
$p$ on $X$, that is an isomorphism
$$
e_p \quad : \quad \hat{\f}_p \lto \hat{\oo}_{p, X} \mbox{.}
$$
By the choice of the local parameters and the trivialization
we can identify
$A_{(0,1, \ldots, n)}(\f)$
with the $n$-dimensional local field $k(p)((t_n))\ldots ((t_1))$:
$$
A_{(0,1, \ldots, n)}(\f) = k(p)((t_n))\ldots ((t_1)) \mbox{.}
$$
Moreover,
we fix a set of integers $0 \le j_1 < \ldots < j_k \le n-1$.
Define $\sigma_{(j_1, \ldots, j_k)} \in S_{n-k}$ as the following set
$$ \left\{i \: : \: 0 \le i \le n, \, i \ne j_1, \, \ldots,
\,
i \ne j_k \right\} \mbox{.} $$
By proposition~\ref{tend1} we have a natural embedding
$$A_{\sigma_{(j_1, \ldots, j_k)}} (\f) \lto A_{(0,1, \ldots, n)}(\f) \mbox{.}$$
When we identify
$A_{(0,1, \ldots, n)}(\f)$ with the field
$k(p)((t_n)) \ldots ((t_1))$,
the space
$A_{\sigma_{(j_1, \ldots, j_k)}} (\f)$
corresponds to the following
$k$-subspace in
$k(p)((t_n)) \ldots ((t_1))$:
\begin{equation} \label{st}
\left\{ \sum a_{i_1, \ldots, i_n} t_n^{i_n}
\ldots t_1^{i_1 } \; : \; a_{i_1, \ldots, i_n} \in k(p), \,
i_{j_1+1} \ge 0, \, i_{j_2+1} \ge 0, \, \ldots, i_{j_k +1} \ge 0
\right\} \mbox{.}
\end{equation}
Thus, by theorem~\ref{tend2},
to determine the images of
$A_{\sigma} (\f)$
in $k(p)((t_n)) \ldots ((t_1))$
(for any $\sigma \in S$),
it suffices to know only the image of
$A_{(0,1, \ldots, n-1)} (\f)$
in $k(p)((t_n)) \ldots ((t_1))$.
(All the others are obtained by taking
the intersection of the image of this one
with the standard subspaces~(\ref{st})
of $k(p)((t_n)) \ldots ((t_1))$.)
It is clear that these arguments
are generalized immediately to locally free sheaves
$\f$ of rank $r$
and to the spaces $k(p)((t_n)) \ldots ((t_1))^{\oplus r}$.
These arguments lead to the following theorem, which enables us to reconstruct the restricted adelic complex $A (\f)$, see also~\cite[th. 6]{Os}.
\begin{Th}
Let all the hypotheses of proposition~\ref{tend1}
be satisfied.
We also assume that
$Y_n=p$, where $p$ is a smooth point on each $Y_i$, $0 \le i \le n$.
Let $\f$ be a locally free sheaf on $X$.
Then the subspace
$$
A_{(0,1, \ldots, n-1)} (\f)
\;
\subset
\;
A_{(0,1, \ldots, n)}(\f)
$$
uniquely determines the restricted adelic complex $A (\f)$.
\end{Th}
\section{Reciprocity laws.}
Let $L$ be a field with a discrete valuation $\nu_L$, with valuation ring $\oo_L$
and maximal ideal $m_L$.
Then there is the tame symbol
\begin{equation} \label{tame}
(f,g)_L = (-1)^{\nu_L(f)\nu_L(g)}
\frac{f^{\nu_L(g)}}{g^{\nu_L(f)}} \; \mod \; m_L \mbox{,}
\end{equation}
where $f$, $g$ are from $L^*$.
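For example, let $L = k((t))$ with $\nu_L(t) = 1$, and let $f = t^a u$, $g = t^b v$, where $u, v \in \oo_L^*$. Then formula~(\ref{tame}) gives
$$
(f,g)_L = (-1)^{ab} \, \frac{(t^a u)^b}{(t^b v)^a} \; \mod \; m_L \; = \; (-1)^{ab} \, \bar{u}^{\, b} \, \bar{v}^{\, -a} \mbox{,}
$$
where $\bar{u}$, $\bar{v}$ denote the residues of $u$, $v$ in the residue field of $L$.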
There is the following reciprocity law, see, for example, \cite{S}.
\begin{prop}
Let $C$ be a complete smooth algebraic curve over $k$.
For any $f, g \in k(C)^*$ in the following product only a finite number
of terms is not equal to $1$ and
\begin{equation} \label{curver}
\prod_{p \in C} \nm\nolimits_{k(p)/k} \, (f, g)_{K_p} =1 \mbox{,}
\end{equation}
where the product is taken over all closed points $p \in C$,
and $\nm$ is the norm map.
\end{prop}
Let $K $ be a $2$-dimensional local field with the last residue field $k$.
We have a discrete valuation of rank~$2$ on $K$:
$$(\nu_1, \nu_2) : K^* \to \dz \oplus \dz \mbox{.}$$
Here $\nu_1 = \nu_K$ is the discrete
valuation of the field $K$, and
$$\nu_2 (b) \eqdef \nu_{\bar{K}} ( \: \overline{b t_1^{-\nu_1(b)}} \: ) \mbox{,} $$
where $\nu_1 (t_1) =1$.
We note that $\nu_2$ depends on the choice of local parameter $t_1$.
Let $m_K$ be the maximal ideal of $\oo_K$, and
$m_{\bar{K}}$ be the maximal ideal of $\oo_{\bar{K}}$.
We define a map:
$$
\nu_K (\; , \;) \quad : \quad K^* \times K^* \lto \dz
$$
as the composition of maps:
$$
K^* \times K^* \lto K_2(K) \stackrel{\partial_2}{\lto} \bar{K}^* \stackrel{\partial_1}{\lto} \dz \mbox{,}
$$
where $\partial_i$ is the boundary map in algebraic $K$-theory. The map $\partial_2$ coincides with tame symbol~(\ref{tame}) with respect to
discrete valuation $\nu_1$. The map $\partial_1$ coincides with the discrete valuation $\nu_{\bar{K}}$.
We define a map:
$$
(\;, \;, \;)_K
\quad : \quad
K^* \times K^* \times K^* \lto k^*
$$
as the composition of maps
$$
K^* \times K^* \times K^* \lto K_3^M (K) \stackrel{\partial_3}{\lto} K_2(\bar{K})
\stackrel{\partial_2}{\lto} k^* \mbox{,}
$$
where
$K_3^M$ is the Milnor $K$-group.
There are the following explicit expressions for these maps (see~\cite{FP}):
$$
\nu_K (f, g) = \nu_1(f) \nu_2(g) - \nu_2(f) \nu_1(g)
$$
$$
(f,g,h)_K = \sign\nolimits_K(f,g,h)
f^{\nu_K (g, h)} g^{\nu_K (h, f)} h^{\nu_K (f, g)} \mod m_K
\mod\nolimits m_{\bar{K}}
$$
$$
\sign\nolimits_K(f,g,h) = (-1)^B \mbox{,}
$$
where $
B =
\nu_1(f) \nu_2(g) \nu_2(h)
+ \nu_1(g) \nu_2(f) \nu_2(h)
+ \nu_1(h) \nu_2(g) \nu_2(f)
+ \nu_2(f) \nu_1(g) \nu_1(h)
+ \nu_2(g) \nu_1(f) \nu_1(h)
+ \nu_2(h) \nu_1(f) \nu_1(g)
$.
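For example, let $K = k((t_2))((t_1))$, so that $(\nu_1, \nu_2)(t_1) = (1,0)$ and $(\nu_1, \nu_2)(t_2) = (0,1)$. Then for monomials $f = t_1^{a_1} t_2^{a_2}$ and $g = t_1^{b_1} t_2^{b_2}$ the first of these expressions gives
$$
\nu_K (f, g) = a_1 b_2 - a_2 b_1 \mbox{;}
$$
in particular, $\nu_K (t_1, t_2) = 1$.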
\begin{prop}
For any $f,g,h \in K^*$
$$
\sign\nolimits_K(f,g,h) =
(-1)^A \mbox{,} \qquad \mbox{where}
$$
$$
A = \nu_K (f,g) \, \nu_K (f,h)
+ \nu_K (f,g) \, \nu_K (g,h)
+ \nu_K (f,h) \, \nu_K (g,h)
+ \nu_K (f,g) \, \nu_K (f,h) \, \nu_K (g,h) \mbox{.}
$$
\end{prop}
\proof It follows from direct calculations modulo $2$ with $A$ and $B$ using the explicit expressions above.
\vspace{0.7cm}
Let $X$ be a smooth algebraic surface over $k$.
We recall the following notations from section~\ref{surface}.
If $C$ is a curve on $X$, then $K_C$ is the completion of the field
$k(X)$ along the discrete valuation given by the irreducible curve $C$ on $X$.
If $p$ is a point on $X$, then $K_p$ is a subring in
$ \Frac(\hat{\oo}_{p, X} )$ generated by subrings $k(X)$ and $\hat{\oo}_{p,X}$.
There are the following reciprocity laws, see~\cite{FP}.
\begin{Th}
\begin{enumerate}
\item We fix a point $p \in X$ and any $f, g, h \in K_p^*$.
Then in the following sum only a finite number of terms is
not equal to $0$ and
$$
\sum_{ X \supset \C \ni p} \nu_{K_{p,\C}} (f,g) = 0 \mbox{.}
$$
In the following product only a finite number of terms is not equal to $1$ and
$$
\prod_{X \supset \C \ni p} (f,g,h)_{K_{p, \C}} = 1 \mbox{,}
$$
where the sum and the product are taken over all germs of irreducible
curves on $X$ at $p$.
\item
We fix an irreducible projective curve $C$ on $X$ and any $f,g,h \in K_C^*$.
Then in the following sum only a finite number of terms is
not equal to $0$ and
$$
\sum_{ p \in \C \subset X } [k(p) : k] \cdot \nu_{K_{p,\C}} (f,g) = 0 \mbox{.}
$$
In the following product only a finite number of terms is not equal to $1$ and
$$
\prod_{ p \in \C \subset X} \nm\nolimits_{k(p) /k} \, (f,g,h)_{K_{p, \C}} = 1 \mbox{,}
$$
where the sum and the product are taken over all points $p \in C$ and all germs $\C$ of the irreducible
curve $C$ at $p$.
\end{enumerate}
\end{Th}
\begin{nt} {\em
The relative reciprocity laws were constructed in~\cite{Os0} (see also \cite{Os00} for a
short exposition)
for a smooth projective morphism $f$ of a smooth algebraic surface $X$ to
a smooth algebraic curve $S$ when $char k = 0$.
If $p \in \C \subset X$ then explicit formulas were constructed in~\cite{Os0} for
maps
$$
K_2 (K_{p, \C}) \lto K_{f(p)}^* \mbox{.}
$$
}
\end{nt}
\begin{nt} {\em
The map $\nu_K (\;, \;)$ for $2$-dimensional local field $K$
was interpreted in~\cite{Os1} as the commutator of liftings of elements $f, g \in K^*$
in a central extension of group $K^*$ by $\dz$. From this interpretation
the reciprocity laws for $\nu_K (\; , \;)$ were proved.
The proof in~\cite{Os1} uses adelic rings on an algebraic surface $X$.
This is an abstract version of reciprocity law for $\nu_K(\; , \;)$ like abstract
version of reciprocity law~(\ref{curver}) for a projective curve in~\cite{Arbar} (and in~\cite{T} for the residues of differentials on a projective curve).}
\end{nt}
\begin{nt} {\em
We don't describe
the reciprocity laws for residues of differentials of $2$-dimensional local fields, which were
formulated and proved in~\cite{P1}, see also~\cite{Y}. }
\end{nt}
\begin{nt} {\em
The symbols $\nu_K (\; , \;)$ and $(\;, \;, \;)_K$
correspond to the non-ramified and tame ramified extensions of $2$-dimensional local fields
when the last residue field is finite, see~\cite{P}.
}
\end{nt} | {"config": "arxiv", "file": "math0508205.tex"} |
\section{Intersection of Multiplicative Groups of Complex Roots of Unity}
Tags: Multiplicative Groups of Complex Roots of Unity, Circle Group
\begin{theorem}
Let $\struct {K, \times}$ denote the [[Definition:Circle Group|circle group]].
Let $m, n \in \Z_{>0}$ be [[Definition:Strictly Positive Integer|(strictly) positive integers]].
Let $d = \gcd \set {m, n}$ be the [[Definition:Greatest Common Divisor of Integers|greatest common divisor]] of $m$ and $n$.
Let $\struct {U_n, \times}$ denote the [[Definition:Multiplicative Group of Complex Roots of Unity|multiplicative group of complex $n$th roots of unity]].
Let $\struct {U_m, \times}$ denote the [[Definition:Multiplicative Group of Complex Roots of Unity|multiplicative group of complex $m$th roots of unity]].
Let $H = U_m \cap U_n$.
Then $H = U_d$.
\end{theorem}
\begin{proof}
Let $z \in U_m \cap U_n$.
Then $z^m = 1$ and $z^n = 1$.
By [[Bézout's Lemma]], there exist $a, b \in \Z$ such that $a m + b n = d$.
Hence:
:$z^d = z^{a m + b n} = \paren {z^m}^a \paren {z^n}^b = 1$
and so $z \in U_d$.
Conversely, let $z \in U_d$.
Since $d \divides m$ and $d \divides n$:
:$z^m = \paren {z^d}^{m / d} = 1$
and similarly $z^n = 1$.
Thus $z \in U_m \cap U_n$.
Hence $H = U_m \cap U_n = U_d$.
{{qed}}
\end{proof}
| {"config": "wiki", "file": "thm_16539.txt"} |
TITLE: How to calculate this angle from 2 points in 3d space?
QUESTION [0 upvotes]: How do I find the following angle $a$ given $2$ points $(x, y, z)$ in $3$-dimensional space?
I've drawn $2$ points, one in green, one in red. The curved black line being the earth, and the normal vector is perpendicular to the earth. Green point is on the ground, and red point will always be above ground. Can assume earth is centered at $(0, 0, 0)$
Edit: we can assume the red vector is pointing in the opposite direction to the one drawn.
REPLY [1 votes]: Let $\vec{p}_r = (x_r, y_r, z_r)$ be the red point, and $\vec{p}_g = (x_g, y_g, z_g)$ be the green point. The direction unit vectors are then
$$\hat{r} = \frac{\vec{p}_r - \vec{p}_g}{\left\lVert \vec{p}_r - \vec{p}_g \right\rVert}, \quad
\hat{g} = \frac{\vec{p}_g}{\left\lVert \vec{p}_g \right\rVert}$$
where $\lVert\vec{p}\rVert = \sqrt{\vec{p} \cdot \vec{p}} = \sqrt{x^2 + y^2 + z^2}$.
The angle $\theta$ between the two unit vectors fulfills
$$\begin{aligned}
\cos\theta &= \hat{r} \cdot \hat{g} \\
\sin\theta &= \left\lVert \hat{r} \times \hat{g} \right\rVert \\
\end{aligned}$$
and usually you use $\theta = \arccos(\hat{r} \cdot \hat{g})$.
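A small numerical sketch of these formulas in Python (the helper name and sample points are made up for illustration):

import numpy as np

def angle(p_red, p_green):
    # angle between the green->red direction and the outward surface
    # normal at the green point, with the Earth centred at the origin
    p_red = np.asarray(p_red, dtype=float)
    p_green = np.asarray(p_green, dtype=float)
    r_hat = (p_red - p_green) / np.linalg.norm(p_red - p_green)
    g_hat = p_green / np.linalg.norm(p_green)
    # clip guards against tiny floating-point overshoot outside [-1, 1]
    return np.arccos(np.clip(np.dot(r_hat, g_hat), -1.0, 1.0))

print(np.degrees(angle((0, 0, 2), (0, 0, 1))))   # 0.0  (red directly above green)
print(np.degrees(angle((1, 0, 2), (0, 0, 1))))   # 45.0
| {"set_name": "stack_exchange", "score": 0, "question_id": 4038795}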
TITLE: How do I maximize $|t-e^z|$, for $z\in D$, the unit disk?
QUESTION [4 upvotes]: I guess this question doesn't have a closed form solution for all $t\in \Bbb C$, but I know one for $t=1$ provided by Daniel Fischer in a question I asked.
$$\begin{align}
\left\lvert e^w-1\right\rvert &= \left\lvert \sum_{m=1}^\infty \frac{w^m}{m!}\right\rvert\\
&\leqslant \sum_{m=1}^\infty \frac{\lvert w\rvert^m}{m!}\\
&= e^{\lvert w\rvert}-1,
\end{align}$$
with equality for $w \geqslant 0$.
So $|1-e^z|$ is maximized at $z=1$. I tried doing the same for other values of $t$, but without success. Here is an attempt for $t=8$:
$$\begin{align}
\left\lvert e^w-8\right\rvert &= \left\lvert \sum_{m=1}^\infty \frac{w^m}{m!} -7\right\rvert\\
&\leqslant \sum_{m=1}^\infty \frac{\lvert w\rvert^m}{m!} + 7\\
&= e^{\lvert w\rvert}+6,
\end{align}$$
But I don't get equality for $w \geqslant 0$. Is there a way to maximize $|t-e^z|$ for $z\in D$, the unit disk, for other values of $t$ besides $0,1$?
REPLY [0 votes]: This is not a full answer; I just wanted to show the plot of the curve $\gamma: \phi \mapsto \exp(\exp(i\phi))$. The solution to the question for a given $t \in {\bf C}$ is given by the point(s) on the curve that are farthest from $t$. There are a few observations one can make immediately.
When $t \in {\bf R}$ there are four candidate points: $\phi = 0$, $\phi = \pi$, and two conjugate points such that the normal to the curve at those points passes through $t$. These are the candidate solutions. It seems that when $t < 1$ the solution is $\phi = 0$, when $t > 2$ the solution is $\phi = \pi$, while for $ 1 \leq t \leq 2$ (especially at $t=1.5$) there might be a non-trivial solution $\phi \neq 0, \pi$.
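A quick numerical check (a small Python sketch; it samples the boundary circle $|z| = 1$, where the maximum over the closed disk is attained by the maximum modulus principle):

import numpy as np

# gamma(phi) = exp(exp(i*phi)) parametrises e^z for z on the unit circle
phi = np.linspace(0.0, 2.0 * np.pi, 400001)
gamma = np.exp(np.exp(1j * phi))

for t in (0.0, 1.0, 1.5, 8.0):
    k = np.argmax(np.abs(t - gamma))
    print(f"t = {t}: max |t - e^z| ~ {np.abs(t - gamma[k]):.6f} at phi ~ {phi[k]:.4f}")

For $t = 1$ this reproduces the maximum $e - 1 \approx 1.7183$ at $\phi = 0$.
| {"set_name": "stack_exchange", "score": 4, "question_id": 638142}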
TITLE: Arrangement of $4$ couples in $2$ rows with no couple in same column
QUESTION [2 upvotes]: There are $8$ chairs in $2$ rows one behind the other. $4$ couples are to be seated on these chairs. while arranging them we found that no husband is sitting in front or behind his wife. The total such arrangements are $C(8,4)\times k$. Find $k$.
I am trying to solve it by making cases.
Case I : When no couple occupies same row
So we will select one each from $4$ couples in $16$ ways and then arrange them in $4!$ ways and in second row we do derangement of $4$ people. So the answer from case $\mathbf{Case\;I}$ is
$$16\cdot4!\cdot9$$
Case II : $2$ couples occupy same row
In this case we choose $2$ couples in $C(4,2)$ ways and arrange them in $4!$ ways and arrangement remaining $4$ in second row. So the answer from case $\mathbf{Case\;II}$ is
$$C(4,2)\cdot4!\cdot4!$$
Case III : When each row contains exactly one couple
This is the case which I am not able to calculate.
Could someone help me with case or suggest an alternate and more efficient approach to tackle this problem?
REPLY [2 votes]: I will not repeat what you have already done in cases $1$ and $2$. They are correct. Coming to case $3$, when the front row seats exactly one couple (and so the other row seats exactly one couple as well).
There are $4$ ways to choose a couple for the front row.
Number of ways to choose rest two for the front row is,
$\displaystyle {6 \choose 2} - {3 \choose 1} = 12$
That is to choose $2$ people from remaining $6$ people but subtracting number of ways of having chosen a couple.
Now we need to ensure that none of the two in the back row who are not couple are seated behind their spouses.
For every arrangement of front row, number of arrangements in the back row are -
Number of arrangements where both are seated behind their spouses is $2$ (as the couple sitting in the back row can swap places).
Number of arrangements where exactly one of them is seated behind their spouse is $2 \cdot 2 \cdot 2 = 8$ (once one of the non-couple members is seated behind their spouse, the other has $2$ places to choose from, excluding the place behind their own spouse, and then, as before, the couple in the back row can be seated in the remaining two places in $2$ ways).
So favorable arrangements are $4! - (2 + 8) = 14$
Or simply by Principle of Inclusion Exclusion,
$4! - 2 \cdot 3! + 2! = 14$
So number of arrangements for Case $3$ is,
$\displaystyle 4 \cdot 12 \cdot 4! \cdot 14 = 16128$
Total number of arrangements is $2 \cdot 3456 + 16128 = 23040$.
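For what it's worth, a brute-force check of the total in Python (a small sketch: persons $2i$ and $2i+1$ form a couple; seat $c$ in the front row is in the same column as seat $c+4$ in the back row):

from itertools import permutations

spouse = {x: x ^ 1 for x in range(8)}   # persons 2i and 2i+1 are a couple

count = 0
for seating in permutations(range(8)):  # seats 0..3 front row, 4..7 back row
    if all(spouse[seating[c]] != seating[c + 4] for c in range(4)):
        count += 1

print(count)   # 23040
| {"set_name": "stack_exchange", "score": 2, "question_id": 4183757}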
TITLE: Renormalisation and the Fisher-Rao metric
QUESTION [16 upvotes]: The renormalisation group (I'm talking about classical, statistical physics here, I'm not familiar with field theory too much) can be thought of as a flow in a space of possible Hamiltonians for a system. For example, in the Kadanoff picture, if I understand correctly, every step of "scaling up" by averaging across a spin block sends me to a new point of my parameter space, with (possibly) different values of the field and interaction strength. Critical points and certain other points are fixed points of this transformation. So if my family of Hamiltonians is indicated by $H_\theta(X)$, where $X$ is the state of my system and $\theta$ is a vector of parameters, the renormalisation process is a flow in the $\theta$ space. See for example Fisher, M. E. (1998), Rev. Mod. Phys., in particular figure 4.
On the other hand, I have learned in a completely different context that this space of parameters has an interesting metric, the Fisher-Rao metric, which is defined as
$$g_{ij}=\left\langle\frac{\partial \log p(x,\theta)}{\partial \theta_{i}}\frac{\partial \log p(x,\theta)}{\partial \theta_{j}}\right\rangle,$$
where $p$ is a probability distribution, and the average is taken with respect of $p$ itself. In the canonical ensemble formalism, take
$$ p(x, \theta) = \frac{e^{-H_\theta(x)}}{Z_\theta}.$$
This connects it to what was said above.
Now, my question is the following: does this metric say anything useful about the renormalisation flow? Maybe its geodesics have something to do with it?
Why do I think there is something in common? Because $g_{ij}$ diverges at critical points, for reasons that would be long to explain here. A general idea is, from the physical point of view, that the specific heat, the magnetic susceptibility (for example in an Ising-like case) are entries of the F.I. tensor. From the statistical point of view, critical points are points from which even an infinitesimal deviation in the parameter space leads to a finite change in the parametrised probability distribution.
If this is an interesting idea, please don't steal it. Thanks.
REPLY [0 votes]: A Hamiltonian characterizes a probability distribution function, and so does the Fisher information; the two are obviously connected.
Renormalization offers ways to view the data at different scales.
Hence, under renormalization, the Hamiltonian and the Fisher information change accordingly. | {"set_name": "stack_exchange", "score": 16, "question_id": 329234}
TITLE: Mapping $U$ homeomorphically onto $f(U)$.
QUESTION [3 upvotes]: I've got a rather elemental doubt about what "mapping a subspace homeomorphically onto its image" means, but I'd like to be sure what is being implied.
Let $f:X\to Y$ a continuous injective map between topological spaces. Let $U\subseteq X$ be an open subspace. Then we could say in some context that $f$ maps $U$ homeomorphically onto $f(U)$.
My question is
Must $f(U)$ be an open subspace of $Y$?
If we consider $f$ as a map $f:U\to f(U)$ (its restriction to $U$), then $f(U)$ is open in itself, so there is no contradiction with $f$ being a homeomorphism between $U$ and $f(U)$ even if $f(U)$ is not open in $Y$.
REPLY [2 votes]: Not always. E.g. $x \to (x,0)$ from $\mathbb{R}$ into $\mathbb{R}^2$.
We consider $f[U]$ as a space in its own right, with the subspace topology and $U$ as well, and $f$ is a homeomorphism between those two spaces.
REPLY [2 votes]: Not always. Consider $f: \mathbb{R} \to \mathbb{R}^2$ defined by $f(x)=(x,0)$. This is continuous and injective, but does not map open sets to open sets. | {"set_name": "stack_exchange", "score": 3, "question_id": 2714660} |
TITLE: Prove that $EE' \perp BC$.
QUESTION [3 upvotes]: $BB'$ and $CC'$ are altitudes of $\triangle ABC$. $BD$ and $CD$ are tangents to the circumscribed circle of $\triangle ABC$. $DD' \perp BC$ at $D'$. $AD \cap BC = \{E\}$ and $AD' \cap B'C' = \{E'\}$. Prove that $EE' \perp BC$.
I tried $BB' \cap CC' = \{H\}$ and proving that $AH \parallel EE'$, though I don't know whether this approach can work.
REPLY [2 votes]: Firstly, it's clear that $D'$ is the midpoint of the segment $BC$. Then, note that $D'B'=D'C'=D'B=D'C$ (points $B,C,B',C'$ lie on the circle with diameter $BC$). From the equalities $\angle B'BC'=90^{\circ}-\angle A$ and $\angle B'D'C'=2\angle B'BC'$ we obtain $\angle B'D'C'=180^{\circ}-2\angle A$. Therefore (from $D'B'=D'C'$) we get $\angle D'B'C'=\angle D'C'B'=\angle A=\angle B'AC'$. Hence, the lines $D'B'$ and $D'C'$ are tangent to the circumcircle of triangle $AB'C'$. Now, note that the triangles $AB'C'$ and $ABC$ are similar, so the points $D'$ and $D$ correspond to each other in these triangles. Also $E=AD\cap BC$ and $E'=AD'\cap B'C'$, so the configuration $(A,B,C,D,E)$ is similar to $(A,B',C',D',E')$. This means that $\frac{AE'}{AD'}=\frac{AE}{AD}$. The last equality implies that $EE'\parallel DD'$. But $DD'\perp BC$, so $EE'\perp BC$, as desired. | {"set_name": "stack_exchange", "score": 3, "question_id": 3158193}
TITLE: Modular arithmetic proof - discrete mathematics
QUESTION [0 upvotes]: It is known that an integer $a$ divides the sum and the difference of two integers $n$ and $m$, namely $a\mid(n + m)$ and $a\mid(n − m)$. Does it follow that $a$ divides $n$, if it is also known that:
$a$ is even
$a$ is odd
How do I approach/solve this? I'm at a loss with how to even begin.
REPLY [1 votes]: If $a$ is even then it does not hold, e.g. assume $a=2$, $n=3$ and $m=1$. Then $a\mid(n+m)$ and $a\mid(n-m)$ but $a$ divides neither $n$ nor $m$.
Assume $a$ is odd, i.e. $a=2k+1$. Then $a\mid(n+m)$ and $a\mid(n-m)$ $\Rightarrow$ $a\mid 2n$ and $a\mid 2m$. Since $a$ is odd, $\gcd(a,2)=1$, so $a$ must divide $n$ (and likewise $m$).
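A quick numerical sanity check of the odd case (a small Python sketch over a limited range):

# For odd a: a | (n+m) and a | (n-m) should force a | n and a | m
for a in range(1, 40, 2):               # odd moduli only
    for n in range(-25, 26):
        for m in range(-25, 26):
            if (n + m) % a == 0 and (n - m) % a == 0:
                assert n % a == 0 and m % a == 0
print("no counterexample found")
| {"set_name": "stack_exchange", "score": 0, "question_id": 2281208}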
TITLE: The wavelength of a photon after pair production
QUESTION [0 upvotes]: The book that I am using says that the total energy of an electron-positron pair before they happen to collide is equal to $$ 2mc^2 + 2K.E$$ where K.E is kinetic energy. They then say that this energy will be transformed into the energy of two photons, each of which has a wavelength of:
$$\lambda = hc/mc^2 +K.E.$$ where h is Planck's constant and c is the speed of light. The thing is I think there is a typo here and the equation should instead be:
$$\lambda = hc/(mc^2 +K.E.)$$. Is that right?
REPLY [1 votes]: Yeah that is right. The equation in the book has wrong dimensions.
$hc/(mc^2)$ has dimensions of length, while K.E. has dimensions of energy; you can't add them.
The equation you wrote is correct. | {"set_name": "stack_exchange", "score": 0, "question_id": 380680}
\section{Bernoulli's Theorem}
Tags: Probability Theory
\begin{theorem}
Let the [[Definition:Probability|probability]] of the [[Definition:Occurrence of Event|occurrence]] of an [[Definition:Event|event]] be $p$.
Let $n$ [[Definition:Independent Events|independent trials]] be made, with $k$ [[Definition:Success|successes]].
Then for any $\epsilon > 0$:
:$\displaystyle \lim_{n \mathop \to \infty} \map \Pr {\size {\frac k n - p} < \epsilon} = 1$
that is, $\dfrac k n$ converges in probability to $p$.
\end{theorem}
\begin{proof}
Let the [[Definition:Random Variable|random variable]] $k$ have the [[Definition:Binomial Distribution|binomial distribution]] with parameters $n$ and $p$, that is:
:$k \sim \Binomial n p$
where $k$ denotes the number of [[Definition:Success|successes]] of the $n$ [[Definition:Independent Events|independent trials]] of the event with [[Definition:Probability|probability]] $p$.
From [[Expectation of Binomial Distribution]]:
:$\expect k = n p \leadsto \dfrac 1 n \expect k = p$
[[Linearity of Expectation Function]] gives:
:$ \expect {\dfrac k n} = p =: \mu$
Similarly, from [[Variance of Binomial Distribution]]:
:$\var k = n p \paren {1 - p} \leadsto \dfrac 1 {n^2} \var k = \dfrac {p \paren {1 - p} } n$
From [[Variance of Linear Combination of Random Variables]]:
:$\var {\dfrac k n} = \dfrac {p \paren {1 - p} } n =: \sigma^2$
By applying [[Chebyshev's Inequality]] to $\dfrac {k} {n}$, we have for any $l>0$:
:$\map \Pr {\size {\dfrac k n - \mu} \ge l \sigma} \le \dfrac 1 {l^2}$
Now, let $\epsilon > 0$ and choose $l = \dfrac \epsilon \sigma$, to get:
:$\map \Pr {\size {\dfrac k n - \mu} \ge \dfrac \epsilon \sigma \cdot \sigma} \le \dfrac {\sigma^2} {\epsilon^2}$
Simplifying and plugging in the values of $\mu$ and $\sigma^2$ defined above yields:
:$\map \Pr {\size {\dfrac k n - p} \ge \epsilon} \le \dfrac {p \paren {1 - p} } {n \epsilon^2}$
Scaling both sides by $-1$ and adding $1$ to both sides yields:
:$1 - \map \Pr {\size {\dfrac k n - p} \ge \epsilon} \ge 1 - \dfrac {p \paren {1 - p} } {n \epsilon^2}$
Applying [[Union of Event with Complement is Certainty]] to the left hand side:
:$\map \Pr {\size {\dfrac k n - p} < \epsilon} \ge 1 - \dfrac {p \paren {1 - p} } {n\epsilon^2}$
Taking the limit as $n$ approaches infinity on both sides, we have:
:$\displaystyle \lim_{n \mathop \to \infty} \map \Pr {\size {\frac k n - p} < \epsilon} = 1$
{{qed}}
\end{proof}
| {"config": "wiki", "file": "thm_11524.txt"} |
\begin{document}
\author[1] {Basudeb Datta}
\author[2] { Dipendu Maity}
\affil[1]{Department of Mathematics, Indian Institute of Science, Bangalore 560\,012, India.
dattab@iisc.ac.in.}
\affil[2]{Department of Sciences and Mathematics,
Indian Institute of Information Technology Guwahati,
Bongora, Assam 781\,015, India.
dipendu@iiitg.ac.in.}
\title{Platonic solids, Archimedean solids and semi-equivelar maps on the sphere}
\date{January 30, 2020}
\maketitle
\vspace{-10mm}
\begin{abstract}
A vertex-transitive map $X$ is a map on a surface on which the automorphism group of $X$ acts transitively on the set of vertices of $X$. If the face-cycles at all the vertices in a map are of the same type then the map is called a semi-equivelar map. Clearly, a vertex-transitive map is semi-equivelar. The converse of this is not true in general. In particular, there are semi-equivelar maps on the torus, on the Klein bottle and on the surfaces of Euler characteristics $-1$ $\&$ $-2$ which are not vertex-transitive.
It is known that the boundaries of Platonic solids, Archimedean solids, regular prisms and antiprisms are vertex-transitive maps on $\mathbb{S}^2$. Here we show that there is exactly one semi-equivelar map on $\mathbb{S}^2$ which is not vertex-transitive. More precisely, we show that a semi-equivelar map on $\mathbb{S}^2$ is the boundary of a Platonic solid, an Archimedean solid, a regular prism, an antiprism or the pseudorhombicuboctahedron. As a consequence, we show that all the semi-equivelar maps on $\mathbb{RP}^2$ are vertex-transitive. Moreover, every semi-equivelar map on $\mathbb{S}^2$ can be geometrized, i.e., every semi-equivelar map on $\mathbb{S}^2$ is isomorphic to a semi-regular tiling of $\mathbb{S}^2$. In the course of the proof of our main result, we present a combinatorial characterization in terms of an inequality of all the types of semi-equivelar maps on $\mathbb{S}^2$. Here, we present self-contained combinatorial proofs of all our results.
\end{abstract}
\noindent {\small {\em MSC 2020\,:} 52C20, 52B70, 51M20, 57M60.
\noindent {\em Keywords:} Polyhedral maps on sphere; Vertex-transitive maps; Semi-equivelar maps; Semi-regular tilings; Platonic solids; Archimedean solids.}
\section{Introduction}
By a map we mean a polyhedral map on a surface. So, a face of a map is a $n$-gon for some $n\geq 3$ and two intersecting faces intersect either on a vertex or on an edge. A map on a surface is also called a {\em topological tiling} of the surface. If all the faces of a map are triangles then the map is called {\em simplicial}. A map $X$ is said to be {\em vertex-transitive} if the automorphism group of $X$ acts transitively on the set of vertices of $X$. In \cite{lutz1999}, Lutz found all the (77 in numbers) vertex-transitive simplicial maps with at most $15$ vertices.
For a vertex $u$ in a map $X$, the faces containing $u$ form a cycle (called the {\em face-cycle} at $u$) $C_u$ in the dual graph of $X$. So, $C_u$ is of the form $(F_{1,1}\mbox{-}\cdots \mbox{-}F_{1,n_1})\mbox{-}\cdots\mbox{-}(F_{k,1}\mbox{-}\cdots \mbox{-}F_{k,n_k})\mbox{-}F_{1,1}$, where $F_{i,\ell}$ is a $p_i$-gon for $1\leq \ell \leq n_i$, $1\leq i \leq k$, $p_r\neq p_{r+1}$ for $1\leq r\leq k-1$ and $p_k\neq p_1$. A map $X$ is called {\em semi-equivelar} if $C_u$ and $C_v$ are of the same type for all $u, v \in V(X)$. More precisely, there exist integers $p_1, \dots, p_k\geq 3$ and $n_1, \dots, n_k\geq 1$, $p_i\neq p_{i+1}$ (addition in the suffix is modulo $k$) such that $C_u$ is of the form as above for all $u\in V(X)$. In such a case, $X$ is called a semi-equivelar (or {\em semi-regular}) map of vertex-type $[p_1^{n_1}, \dots, p_k^{n_k}]$ (or, a map of type $[p_1^{n_1}, \dots, p_k^{n_k}]$). (We identify a cyclic tuple $[p_1^{n_1}, p_2^{n_2}, \dots, p_k^{n_k}]$ with $[p_k^{n_k}, \dots, p_2^{n_2}, p_1^{n_1}]$ and with $[p_2^{n_2}, \dots, p_k^{n_k}, p_1^{n_1}]$.)
There are eleven types of semi-equivelar maps on the torus and all these are quotients of Archimedean tilings of the plane (\cite{DM2017}, \cite{DM2018}). Among these 11 types, 4 types (namely, of vertex-types $[3^6]$, $[6^3]$, $[4^4]$, $[3^3, 4^2]$) of maps are always vertex-transitive and there are infinitely many such examples in each type (\cite{Ba1991}, \cite{DM2017}). For each of the other seven types, there exists a semi-equivelar map on the torus which is not vertex-transitive (\cite{DM2017}). However, there are vertex-transitive maps of each of these seven types as well (\cite{Ba1991}, \cite{Su2011t}, \cite{Th1991}). Similar results are known for the Klein bottle (\cite{Ba1991}, \cite{DM2017}, \cite{Su2011kb}). If the Euler characteristic $\chi(M)$ of a surface $M$ is negative then the number of semi-equivelar maps on $M$ is finite and at most $-84\chi(M)$ (\cite{Ba1991}).
Nine examples of non-vertex-transitive semi-equivelar maps on the surface of Euler characteristic $-1$ are known (\cite{TU2017}). There are exactly three non vertex-transitive semi-equivelar simplicial maps on the orientable surface of genus 2 (\cite{DU2006}).
A {\em semi-regular tiling} of a surface $S$ of constant curvature (\textit{eg.}, the round sphere, the Euclidean plane or the hyperbolic plane) is a semi-equivelar map on $S$ in which each face is a regular polygon and each edge is a geodesic. It follows from the results in \cite{DG2020} that there exist semi-regular tilings of the hyperbolic plane of infinitely many different vertex-types. It is also shown that there exists a unique semi-regular tiling of the hyperbolic plane of vertex-type $[p^q]$ for each pair $(p,q)$ of positive integers satisfying $1/p+1/q<1/2$. Moreover, these tilings are vertex-transitive.
All vertex-transitive maps on the 2-sphere $\mathbb{S}^2$ are known. These are the boundaries of Platonic solids, Archimedean solids and two infinite families (\cite{Ba1991}, \cite{GS1981}). Other than these, there exists a semi-equivelar map on $\mathbb{S}^2$, namely, the boundary of the pseudorhombicuboctahedron (\cite{Gr2009}, \cite{wiki}). It is known that quotients of ten centrally symmetric vertex-transitive maps on $\mathbb{S}^2$ (namely, the boundaries of icosahedron, dodecahedron and eight Archimedean solids) are all the vertex-transitive maps on the real projective plane $\mathbb{RP}^2$ (\cite{Ba1991}). Here we show that these are also all the semi-equivelar maps on $\mathbb{RP}^2$. We prove
\begin{theorem} \label{thm:s2}
Let $X$ be a semi-equivelar map on $\mathbb{S}^2$. Then, up to isomorphism, $X$ is the boundary of a Platonic solid, an Archimedean solid, a regular prism, an antiprism or the pseudorhombicuboctahedron.
\end{theorem}
\begin{theorem} \label{thm:rp2}
If $Y$ is a semi-equivelar map on $\mathbb{RP}^2$ then the vertex-type of $Y$ is $[5^3]$, $[3^5]$, $[4^1, 6^2]$, $[3^1, 5^1, 3^1, 5^1]$, $[3^1, 4^3]$, $[4^1, 6^1, 8^1]$, $[3^1, 4^1, 5^1, 4^1]$, $[4^1, 6^1, 10^1]$, $[3^1, 10^2]$ or $[5^1, 6^2]$. Moreover, in each case, there exists a unique semi-equivelar map on $\mathbb{RP}^2$.
\end{theorem}
\begin{corollary} \label{cor:s2vt}
The boundary of the pseudorhombicuboctahedron is not vertex-transitive and all the other semi-equivelar maps on $\mathbb{S}^2$ are vertex-transitive.
\end{corollary}
As consequences we get
\begin{corollary} \label{cor:tiling}
$(a)$ Each semi-equivelar map on $\mathbb{S}^2$ is isomorphic to a semi-regular tiling of $\mathbb{S}^2$. $(b)$ Each semi-equivelar map on $\mathbb{RP}^2$ is isomorphic to a semi-regular tiling of $\mathbb{RP}^2$.
\end{corollary}
\begin{corollary} \label{cor:rp2vt}
All the semi-equivelar maps on $\mathbb{RP}^2$ are vertex-transitive.
\end{corollary}
\newpage
\section{Examples} \label{example}
Here are examples of twenty-two known $3$-polytopes.
\vspace{-4mm}
\begin{figure}[htb]
\begin{center}
\includegraphics[scale=.5]{platonicsolids}
\vspace{-2mm}
\caption*{Figure 1: Platonic Solids (from \cite{mathfun})}
\end{center}
\end{figure}
\vspace{-8mm}
\begin{figure}[htb]
\begin{center}
\includegraphics[scale=.52]{ArchimedeanSolids}
\vspace{-2mm}
\caption*{Figure 2: Archimedean Solids (from \cite{mathfun})}
\end{center}
\end{figure}
\vspace{-8mm}
\begin{figure}[htb]
\begin{center}
\includegraphics[scale=.25]{Antiprism1}
\caption*{Figure 3: Pseudorhombicuboctahedron, Prisms $P_3$, Drum $P_8$, Antiprism $Q_8$ (from \cite{Gr2009}, \cite{mathetc})}
\end{center}
\end{figure}
\vspace{-5mm}
If $P$ is one of the first nineteen of the above twenty-two polytopes then (i) all the vertices of $P$ are points on a sphere whose centre is the same as the centre of $P$, (ii) each 2-face of $P$ is a regular polygon and (iii) the lengths of all the edges of $P$ are the same. Without loss of generality, we assume that (iv) the vertices of $P$ are points on the unit 2-sphere with centre $(0,0,0)$.
For $n\geq 3$, let $P_n$ be the polytope whose vertex-set is
\begin{align*}
\left\{(1+\sin^2\frac{\pi}{n})^{-\frac{1}{2}}\left(\cos\frac{2m\pi}{n}, \sin\frac{2m\pi}{n},
\pm\sin\frac{\pi}{n} \right) : 0\leq m\leq n-1\right\}.
\end{align*}
The polytope $P_4$ is a cube and, for $n\neq 4$, the boundary of $P_n$ consists of $n$ squares and two regular $n$-gons. Moreover, $P_n$ satisfies the above properties (i)--(iv). This polytope $P_n$ is called a $2n$-vertex {\em regular prism} or {\em drum} or {\em ladder}.
For $n\geq 3$, let $Q_n$ be the polytope whose vertex-set is
\begin{align*}
&\left\{\!(\sin^2\frac{\pi}{n} + \cos^2\frac{\pi}{2n} )^{-\frac{1}{2}}\!\left(\cos\frac{(2m+1)\pi}{n},
\sin\frac{(2m+1)\pi}{n}, (\sin^2\frac{\pi}{n} -\sin^2\frac{\pi}{2n})^{\frac{1}{2}} \right),\right. \\
&\quad \left.(\sin^2\frac{\pi}{n} + \cos^2\frac{\pi}{2n} )^{-\frac{1}{2}}\!\left(\cos\frac{2m\pi}{n}, \sin\frac{2m\pi}{n}, - (\sin^2\frac{\pi}{n} -\sin^2\frac{\pi}{2n})^{\frac{1}{2}}\right) : 0\leq m\leq n-1\right\}.
\end{align*}
The polytope $Q_3$ is an octahedron and, for $n\geq 4$, the boundary of $Q_n$ consists of $2n$ equilateral triangles and two regular $n$-gons. Moreover, $Q_n$ satisfies all the above four properties. This polytope $Q_n$ is called a $2n$-vertex {\em antiprism}.
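The defining data of $P_n$ and $Q_n$ are easy to verify numerically; for instance, the following Python sketch (an illustration only, not part of the mathematical development) checks that the vertices of $P_n$ lie on the unit sphere and that all edges of $P_n$ have the same length:
\begin{verbatim}
import numpy as np

def prism_vertices(n):
    # vertex set of P_n from the formula above
    r = (1 + np.sin(np.pi / n)**2) ** (-0.5)
    return np.array([[r * np.cos(2 * m * np.pi / n),
                      r * np.sin(2 * m * np.pi / n),
                      s * r * np.sin(np.pi / n)]
                     for m in range(n) for s in (1, -1)])

V = prism_vertices(8)
print(np.allclose(np.linalg.norm(V, axis=1), 1.0))   # on the unit sphere
ring = np.linalg.norm(V[0] - V[2])   # edge on a cap
rung = np.linalg.norm(V[0] - V[1])   # edge between the caps
print(np.isclose(ring, rung))        # all edge lengths agree
\end{verbatim}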
\section{Proofs} \label{sec:proofs-1}
Let $F_1\mbox{-}\cdots\mbox{-}F_m\mbox{-}F_1$ be the face-cycle of a vertex $u$ in a map. Then $F_i \cap F_j$ is either $u$ or an edge through $u$. Thus the face $F_i$ must be of the form $u_{i+1}\mbox{-}u\mbox{-}u_i\mbox{-}P_i\mbox{-}u_{i+1}$, where $P_i = \emptyset$ or a path and $P_i \cap P_j = \emptyset$ for $i \neq j$. Here, addition in the suffix is modulo $m$. So, $u_{1}\mbox{-}P_1\mbox{-}u_2\mbox{-}\cdots\mbox{-}u_m\mbox{-}P_m\mbox{-}u_1$ is a cycle, called the {\em link-cycle} of $u$. For a simplicial complex, $P_i = \emptyset$ for all $i$, and the link-cycle of a vertex is the link of that vertex.
A face in a map of the form $u_1\mbox{-}u_2\mbox{-}\cdots\mbox{-}u_n\mbox{-}u_1$ is also denoted by $u_1u_2\cdots u_n$. The faces with 3, 4, \dots, 10 vertices are called {\em triangle}, {\em square}, \dots, {\em decagon} respectively.
If $X$ is the boundary of a Platonic solid, an Archimedean solid, a regular prism or an antiprism then the vertex-type of $X$ is one of the cyclic tuples in the following set.
\begin{align} \label{a-sum<2}
{\mathcal A} := &\left\{[3^3], [3^4], [4^3], [3^5], [5^3], [3^4, 5^1], [3^4, 4^1], [3^1, 5^1, 3^1, 5^1], [3^1, 4^1, 3^1, 4^1], \right. \nonumber \\
& \qquad [3^1, 4^1, 5^1, 4^1], [3^1, 4^3], [5^1, 6^2], [4^1, 6^1, 8^1], [4^1, 6^1, 10^1], [4^1, 6^2], \nonumber \\
&\qquad \left. [3^1, 6^2], [3^1, 8^2], [3^1, 10^2], [3^1, 4^2]\} \cup \{[4^2, r^1], [3^3, s^1], \, r \geq 5, s \geq 4\right\}.
\end{align}
Clearly, if $[p_1^{n_1}, \dots, p_{\ell}^{n_{\ell}}]\in \mathcal{A}$ then $\sum\limits_{i=1}^{\ell}\frac{n_i(p_i-2)}{p_i} < 2$.
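This inequality is mechanical to check for any given vertex-type; e.g., in Python (illustration only):
\begin{verbatim}
from fractions import Fraction

def defect_sum(vertex_type):
    # sum of n_i (p_i - 2)/p_i for [(p_1, n_1), ..., (p_k, n_k)]
    return sum(Fraction(n * (p - 2), p) for p, n in vertex_type)

print(defect_sum([(3, 4), (5, 1)]))          # 29/15 < 2
print(defect_sum([(4, 1), (6, 1), (8, 1)]))  # 23/12 < 2
\end{verbatim}
Here we prove the following converse.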
\begin{theorem} \label{thm:inequality}
Let $X$ be a semi-equivelar map of type $[p_1^{n_1}, \dots, $ $p_{\ell}^{n_{\ell}}]$ on a $2$-manifold. If $\sum\limits_{i=1}^{\ell}\frac{n_i(p_i-2)}{p_i} < 2$ then $[p_1^{n_1}, \dots, p_{\ell}^{n_{\ell}}]\in \mathcal{A}$.
\end{theorem}
We need the following technical lemma of \cite{DM2019} to prove Theorem \ref{thm:inequality}.
\begin{lemma} [Datta $\&$ Maity] \label{DM2019}
If $[p_1^{n_1}, \dots, p_k^{n_k}]$ satisfies any of the following three properties then $[p_1^{n_1}$, $\dots, p_k^{n_k}]$ cannot be the vertex-type of any semi-equivelar map on a surface.
\begin{enumerate}[{\rm (i)}]
\item There exists $i$ such that $n_i=2$, $p_i$ is odd and $p_j\neq p_i$ for all $j\neq i$.
\item There exists $i$ such that $n_i=1$, $p_i$ is odd, $p_j\neq p_i$ for all $j\neq i$ and $p_{i-1}\neq p_{i+1}$. (Here, addition in the subscripts is modulo $k$.)
\item $[p_1^{n_1}, \dots, p_k^{n_k}]$ is of the form $[p^1, q^m, p^1, r^n]$, where $p, q, r$ are distinct and $p$ is odd.
\end{enumerate}
\end{lemma}
\begin{proof}[Proof of Theorem \ref{thm:inequality}]
Let $d$ be the degree of each vertex in $X$. Consider the $k$-tuple $(q_1^{m_1}, \dots, q_k^{m_k})$, where $3 \le q_1 < \dots <q_k$ and, for each $i=1, \dots,k$, $q_i = p_j$ for some $j$, $m_i = \sum_{p_j = q_i}n_j$. So, $\sum_{i=1}^k m_i = \sum_{j=1}^{\ell}n_j = d$ and $\sum_{i=1}^k \frac{m_i}{q_i} = \sum_{j=1}^{\ell}\frac{n_j}{p_j}$.
Thus,
\begin{align} \label{eq:2}
2> \sum\limits_{j=1}^{\ell}\frac{n_j(p_j-2)}{p_j} = \sum\limits_{j=1}^{\ell} n_j - 2\sum\limits_{j=1}^{\ell} \frac{n_j}{p_j} =
\sum\limits_{i=1}^k m_i - 2\sum\limits_{i=1}^k \frac{m_i}{q_i} = d -2\sum\limits_{i=1}^k \frac{m_i}{q_i}.
\end{align}
So, $d-2 < 2 \sum_{i=1}^k \frac{m_i}{q_i} \leq 2\sum_{i=1}^k \frac{m_i}{3}\leq \frac{2d}{3}$. This implies $3d-6< 2d$ and hence $d<6$. Therefore, $d = 3, 4$ or $5$.
\medskip
\noindent {\it Case 1:} First assume $d = 5$. If $q_1 \geq 4$ then $\frac{m_1}{q_1} + \dots + \frac{m_k}{q_k} \leq \frac{d}{q_1} \leq \frac{5}{4}$. Therefore, by \eqref{eq:2}, $2 > d -2\sum_{i=1}^k\frac{m_i}{q_i} \geq 5 - \frac{10}{4} = \frac{10}{4}$, a contradiction. So, $q_1 = 3$. If $m_1 \leq 3$ then $3 = d- 2 < 2(\frac{m_1}{q_1} + \dots + \frac{m_k}{q_k}) \leq 2(\frac{m_1}{q_1} + \frac{d-m_1}{q_2}) \leq 2(\frac{m_1}{3} + \frac{5-m_1}{4}) = \frac{15+m_1}{6} \leq \frac{15+3}{6} =3$, a contradiction. So, $m_1 \geq 4$. Since $m_1 \leq d = 5$, it follows that $m_1 = 4$ or $5$.
\smallskip
\noindent {\it 1.1:} Let $m_1 = 5$. Then, $d = m_1$ and $k = 1$. So, $(q_1^{m_1}, q_2^{m_2}, \dots, q_k^{m_k})
= (3^5)$ and hence $[p_1^{n_1}, \dots, p_{\ell}^{n_{\ell}}] = [3^5]$.
\smallskip
\noindent {\it 1.2:} Let $m_1 = 4$. Then $m_2 = 1$. Therefore, $3 = d-2 < 2\sum_{i=1}^k\frac{m_i}{q_i} = 2(\frac{m_1}{q_1} + \frac{m_2}{q_2}) = 2(\frac{4}{3} + \frac{1}{q_2})$. This implies ${1}/{q_2} > {3}/{2}-{4}/{3}= {1}/{6}$ and hence $q_2 < 6$. Since $q_2 > q_1 = 3$, $q_2 = 4$ or $5$.
\smallskip
If $q_2=5$, then $(q_1^{m_1}, \dots, q_k^{m_k}) = (3^4, 5^1)$ and hence $[p_1^{n_1}, \dots, p_{\ell}^{n_{\ell}}] = [3^4, 5^1]$.
Similarly, if $q_2 = 4$ then $[p_1^{n_1}, \dots, p_{\ell}^{n_{\ell}}] = [3^4, 4^1]$.
\medskip
\noindent {\it Case 2:} Now, assume $d = 4$. Then, $1= \frac{d}{2}-1< \sum_{i=1}^k\frac{m_i}{q_i} \leq \frac{d}{q_1} = \frac{4}{q_1}$. So, $q_1 < 4$ and hence $q_1 = 3$.
\smallskip
\noindent {\it 2.1:} If $m_1 = d = 4$, then $[p_1^{n_1}, \dots, p_{\ell}^{n_{\ell}}] = [3^4]$.
\smallskip
\noindent {\it 2.2:} If $m_1 = 3$, then $m_2 = 1$. So, $(q_1^{m_1}, \dots, q_k^{m_k}) = (3^3, q_2^1)$. This implies that $[p_1^{n_1}, \dots, p_{\ell}^{n_{\ell}}] = [3^3, s^1]$ for some $s \geq 4$.
\smallskip
\noindent {\it 2.3:} If $m_1 = 2$, then $1 = \frac{d}{2}-1 < \sum_{i=1}^k\frac{m_i}{q_i} =\frac{2}{3}+\frac{m_2}{q_2}+\frac{m_3}{q_3} \leq \frac{2}{3} + \frac{2}{q_2}$. So, $\frac{2}{q_2} > \frac{1}{3}$ and hence $q_2< 6$. Thus, $q_2 = 5$ or 4.
\smallskip
\noindent {\it 2.3.1:} If $q_2 = 5$, then $1 = \frac{d}{2}-1 < \frac{2}{3}+\frac{m_2}{5}+\frac{m_3}{q_3}$ and hence $\frac{m_2}{5}+\frac{m_3}{q_3} > \frac{1}{3}$, where $m_2+m_3 = d-m_1 =2$ and $m_2\geq 1$. These imply, $q_3\leq 7$.
If $q_3 = 7$ then $(q_1^{m_1}, \dots, q_k^{m_k}) = (3^2, 5^1, 7^1)$. This implies $[p_1^{n_1}, \dots, p_{\ell}^{n_{\ell}}] = [3^2, 5^1, 7^1]$ or $[3^1, 5^1, 3^1, 7^1]$. But $[p_1^{n_1}, \dots, p_{\ell}^{n_{\ell}}] = [3^2, 5^1, 7^1]$ is not possible by Lemma \ref{DM2019} $(i)$ and $[p_1^{n_1}, \dots, p_{\ell}^{n_{\ell}}] = [3^1, 5^1, 3^1, 7^1]$ is not possible by Lemma \ref{DM2019} $(iii)$. So, $q_3 \neq 7$.
If $q_3 = 6$, then $(q_1^{m_1}, \dots, q_k^{m_k}) = (3^2, 5^1, 6^1)$. Again, by Lemma \ref{DM2019} $(i)$ and $(iii)$, $[3^2, 5^1, 6^1]$ and $[3^1, 5^1, 3^1, 6^1]$ are not vertex-types of any maps. So, $q_3 \neq 6$.
Since $q_3 > q_2 =5$, it follows that $m_2 = 2$ (and $q_2 = 5$). Then $(q_1^{m_1}, \dots, q_k^{m_k}) = (3^2, 5^2)$. By Lemma \ref{DM2019} $(i)$, $[p_1^{n_1}, \dots, p_{\ell}^{n_{\ell}}] \neq [3^2, 5^2]$. Therefore, $[p_1^{n_1}, \dots, p_{\ell}^{n_{\ell}}] = [3^1, 5^1, 3^1, 5^1]$.
\smallskip
\noindent {\it 2.3.2:} If $q_2 = 4$, then $1= \frac{d}{2}-1< \frac{2}{3}+\frac{m_2}{4}+\frac{m_3}{q_3}$ and hence $\frac{m_2}{4}+\frac{m_3}{q_3} > \frac{1}{3}$.
If $m_2 = 1$ then $m_3 = 1$. So, $\frac{1}{4}+\frac{1}{q_3} > \frac{1}{3}$ and hence $4 < q_3 < 12$. Therefore, $(q_1^{m_1}, \dots, q_k^{m_k}) = (3^2, 4^1, q_3^1)$ and hence $[p_1^{n_1}, \dots, p_{\ell}^{n_{\ell}}] = [3^2, 4^1, q_3^1]$ or $[3^1, 4^1, 3^1, q_3^1]$, where $q_3 > 4$. But these are not possible by Lemma \ref{DM2019} $(i)$ and $(iii)$, respectively. So, $m_2 \neq 1$ and hence $m_2 = 2$. Then $(q_1^{m_1}, \dots, q_k^{m_k}) = (3^2, 4^2)$. By Lemma \ref{DM2019} $(i)$, $[p_1^{n_1}, \dots, p_{\ell}^{n_{\ell}}] \neq [3^2, 4^2]$. Therefore, $[p_1^{n_1}, \dots, p_{\ell}^{n_{\ell}}] = [3^1, 4^1, 3^1, 4^1]$.
\smallskip
\noindent {\it 2.4:} Let $m_1 = 1$. Then, $1 = \frac{d}{2}-1< \frac{1}{3}+\frac{m_2}{q_2}+\frac{m_3}{q_3}+\frac{m_4}{q_4}$, where $m_2+m_3+m_4= 3$ and $4\leq q_2 < q_3<q_4$. These imply $q_2=4$. If $m_2 = 1$ then $1 < \frac{1}{3} +\frac{1}{4}+\frac{m_3}{q_3}+\frac{m_4}{q_4} \leq \frac{7}{12} + \frac{2}{q_3}$. So, $\frac{2}{q_3} > \frac{5}{12}$ and hence $q_3 \leq 4=q_2$, a contradiction. Thus, $m_2 \geq 2$ and hence $m_2 = 2$ or $3$.
\smallskip
If $m_2 = 2$, then $m_3 =1$. So, $1 = \frac{d}{2}-1< \frac{1}{3}+\frac{2}{4}+\frac{1}{q_3}$ and hence $\frac{1}{q_3} > 1-\frac{1}{3}-\frac{1}{2} = \frac{1}{6}$. Therefore, $q_3 < 6$ and hence $q_3 = 5$. Then, $(q_1^{m_1}, q_2^{m_2}, \dots, q_k^{m_k}) = (3^1, 4^2, 5^1)$. By Lemma \ref{DM2019} $(ii)$, $[p_1^{n_1}, \dots, p_{\ell}^{n_{\ell}}] \neq [3^1, 4^2, 5^1]$. Therefore, $[p_1^{n_1}, \dots, p_{\ell}^{n_{\ell}}] = [3^1, 4^1, 5^1, 4^1]$.
\smallskip
If $m_2 = 3$, then $(q_1^{m_1}, q_2^{m_2}, \dots, q_k^{m_k}) = (3^1, 4^3)$ and hence $[p_1^{n_1}, \dots, p_{\ell}^{n_{\ell}}] = [3^1, 4^3]$.
\medskip
\noindent {\it Case 3:} Finally, assume $d = 3$. Then, $\frac{1}{2} = \frac{d}{2}-1 < \frac{m_1}{q_1} + \frac{m_2}{q_2} + \frac{m_3}{q_3}$, where $m_1+m_2+m_3=3$ and $3\leq q_1< q_2<q_3$. This implies $q_1 < 6$ and hence $q_1 = 3, 4$ or $5$.
\smallskip
\noindent {\it 3.1:} Let $q_1 = 5$. Now, $m_1 = 1, 2$ or $3$. If $m_1 = 2$ then $[p_1^{n_1}, \dots, p_{\ell}^{n_{\ell}}] = [5^2, q_2^1]$, where $q_2>5$. This is not possible by Lemma \ref{DM2019} $(i)$. So, $m_1 = 1$ or $3$.
\smallskip
\noindent {\it 3.1.1:} If $m_1 = 1$, then $\frac{1}{2} <\frac{1}{5} + \frac{m_2}{q_2} + \frac{m_3}{q_3}$. So, $\frac{m_2}{q_2} + \frac{m_3}{q_3} > \frac{1}{2}-\frac{1}{5} = \frac{3}{10}$, where $m_2+m_3=2$ and $5=q_1<q_2<q_3$. These imply, $q_2 = 6$. If $m_2 = 1$ then $m_3 = 3-m_1-m_2 = 1$ and hence $[p_1^{n_1}, \dots, p_{\ell}^{n_{\ell}}] = [5^1, 6^1, q_3^1]$, where $q_3 \geq 7$. But, this is not possible by Lemma \ref{DM2019} $(ii)$. Thus, $m_2=2$. Then $[p_1^{n_1}, \dots, p_{\ell}^{n_{\ell}}]=[5^1, 6^2]$.
\smallskip
\noindent {\it 3.1.2:} If $m_1 = 3$, then $[p_1^{n_1}, \dots, p_{\ell}^{n_{\ell}}] = [5^3]$.
\smallskip
\noindent {\it 3.2:} Let $q_1 = 4$. Since $d=3$, $(m_1, \dots, m_k) = (1, 1, 1), (1, 2), (2, 1)$ or $(3)$.
\smallskip
\noindent {\it 3.2.1:} If $(m_1, \dots, m_k) = (1, 1, 1)$, then $\frac{1}{2} <\frac{1}{4}+\frac{1}{q_2}+\frac{1}{q_3}$. So, $\frac{1}{q_2}+\frac{1}{q_3} > \frac{1}{4}$. Since $q_2 < q_3$, it follows that $q_2 < 8$. If $q_2 = 5$ then $[p_1^{n_1}, \dots, p_{\ell}^{n_{\ell}}] = [4^1, 5^1, q_3^1]$, $q_3 > 5$. This is not possible by Lemma \ref{DM2019} $(ii)$. So, $q_2 \neq 5$. Similarly, $q_2 \neq 7$. Thus, $q_2 = 6$. Then $\frac{1}{q_3}> \frac{1}{4}-\frac{1}{6} =\frac{1}{12}$ and hence $q_3<12$. Then, by the same argument, $q_3 \neq 9, 11$. So, $q_3 = 8$ or $10$. Therefore, $[p_1^{n_1}, \dots, p_{\ell}^{n_{\ell}}] = [4^1, 6^1, 8^1]$ or $[4^1, 6^1, 10^1]$.
\smallskip
\noindent {\it 3.2.2:} If $(m_1, \dots, m_k) = (1, 2)$, then $\frac{1}{2} < \frac{1}{4}+\frac{2}{q_2}$ and hence $4 = q_1 < q_2 < 8$.
Thus, $[p_1^{n_1},$ $\dots, p_{\ell}^{n_{\ell}}] =[4^1, q_2^2]$, $5\leq q_2\leq 7$.
By Lemma \ref{DM2019} $(i)$, $q_2 \neq 5$ or $7$. So, $[p_1^{n_1}, \dots, p_{\ell}^{n_{\ell}}] = [4^1, 6^2]$.
\smallskip
\noindent {\it 3.2.3:} If $(m_1, \dots, m_k) = (2, 1)$, then $[p_1^{n_1}, \dots, p_{\ell}^{n_{\ell}}] = [4^2, q_2^1]$ for some $q_2 \geq 5$.
\smallskip
\noindent {\it 3.2.4:} If $(m_1, \dots, m_k) = (3)$, then $[p_1^{n_1}, \dots, p_{\ell}^{n_{\ell}}] = [4^3]$.
\smallskip
\noindent {\it 3.3:} Let $q_1 = 3$. By Lemma \ref{DM2019} (parts $(i)$ and $(ii)$ exclude $(m_1, m_2) = (2, 1)$ and $(m_1, m_2, m_3) = (1, 1, 1)$, respectively), $(m_1, \dots, m_k) = (3)$ or $(1, 2)$.
\smallskip
\noindent {\it 3.3.1:} If $(m_1, \dots, m_k) = (3)$, then $[p_1^{n_1}, \dots, p_{\ell}^{n_{\ell}}] = [q_1^3]= [3^3]$.
\smallskip
\noindent {\it 3.3.2:} If $(m_1, \dots, m_k) = (1, 2)$, then $\frac{1}{2} < \frac{1}{3} + \frac{2}{q_2}$. So, $q_2 < 12$. Again, by Lemma \ref{DM2019} $(i)$, $q_2$ is not odd. So, $q_2 = 4, 6, 8$ or $10$. Therefore, $[p_1^{n_1}, \dots, p_{\ell}^{n_{\ell}}] = [3^1, 4^2]$, $[3^1, 6^2]$, $[3^1, 8^2]$ or $[3^1, 10^2]$.
This completes the proof.
\end{proof}
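The above case analysis can be cross-checked by brute force; the following Python sketch (illustration only --- it enumerates face multisets satisfying the defect inequality up to a cap $Q$, ignoring the cyclic arrangement and the exclusions of Lemma \ref{DM2019}) lists the candidates:
\begin{verbatim}
from fractions import Fraction
from itertools import combinations_with_replacement

Q = 30   # cap on face sizes; [4^2, r^1] and [3^3, s^1] appear as runs
for d in (3, 4, 5):
    for qs in combinations_with_replacement(range(3, Q + 1), d):
        if sum(Fraction(q - 2, q) for q in qs) < 2:
            print(d, qs)
\end{verbatim}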
In Theorem \ref{thm:inequality}, we do not assume that the map $X$ is finite. As a consequence we prove
\begin{corollary} \label{cor6}
Suppose there exists an $n$-vertex semi-equivelar map on $\mathbb{S}^2$ of vertex-type $[p_1^{n_1}, \dots, p_{\ell}^{n_{\ell}}]$. Then $(n, [p_1^{n_1}, \dots, p_{\ell}^{n_{\ell}}]) = (4, [3^3]), (6, [3^4]), (8, [4^3]), (12, [3^5]), (20, [5^3])$, $(60, [3^4, 5^1]), (24,[3^4, 4^1]), (30, [3^1, 5^1, 3^1, 5^1]), (12, [3^1, 4^1, 3^1, 4^1])$, $(60, [3^1, 4^1, 5^1, 4^1])$, $(24, [3^1$, $4^3])$, $(60, [5^1$, $6^2]),$ $(48, [4^1, 6^1, 8^1]),$ $(120, [4^1$, $6^1, 10^1]),$ $(24, [4^1, 6^2]),$ $(12, [3^1, 6^2]),$ $(24,$ $ [3^1,$ $ 8^2]),$ $(60, [3^1, 10^2])$, $(6, [3^1, 4^2])$, $(2r, [4^2, r^1])$ for some $r \geq 5$ or $(2s, [3^3, s^1])$ for some $s \geq 4$.
\end{corollary}
\begin{proof}
Let $X$ be an $n$-vertex semi-equivelar map on $\mathbb{S}^2$ of vertex-type $[p_1^{n_1}, \dots, p_{\ell}^{n_{\ell}}]$. Let $f_1, f_2$ be the number of edges and faces of $X$ respectively. Let $d$ be the degree of each vertex. So, $f_1 = (nd)/2$. Consider the $k$-tuple $(q_1^{m_1}, \dots, q_k^{m_k})$, where $3 \le q_1 < \dots <q_k$ and, for each $i=1, \dots,k$, $q_i = p_j$ for some $j$, $m_i = \sum_{p_j = q_i}n_j$. So, $\sum_i m_i = \sum_{j}n_j = d$ and $\sum_i \frac{m_i}{q_i} = \sum_{j}\frac{n_j}{p_j}$.
Counting in two ways the number of ordered pairs $(F, v)$, where $F$ is a $q_i$-gon in $X$ and $v$ is a vertex of $F$, we get: (the number of $q_i$-gons) $\times q_i = n \times m_i$. This implies $f_2 = n \times \sum_{i=1}^k\frac{m_i}{q_i} = n\times\sum_{j=1}^{\ell}\frac{n_j}{p_j}$. Since the Euler characteristic of $\mathbb{S}^2$ is $2$, we get
\begin{align} \label{eq3}
2 & =n-f_1+f_2= n \times (1- \frac{1}{2}\sum_{j=1}^{\ell}n_j +\sum_{j=1}^{\ell}\frac{n_j}{p_j}) = \frac{n}{2} \times (2 - \sum_{j=1}^{\ell}\frac{n_j(p_j-2)}{p_j}).
\end{align}
Thus,
\begin{align} \label{eq4}
n = 4\left(2 - \sum_{j=1}^{\ell}\frac{n_j(p_j-2)}{p_j}\right)^{-1}.
\end{align}
From \eqref{eq3}, we get $\sum_{j=1}^{\ell}\frac{n_j(p_j-2)}{p_j} = 2-\frac{4}{n} <2$.
Therefore, by Theorem \ref{thm:inequality}, $[p_1^{n_1}, \dots, p_{\ell}^{n_{\ell}}]\in {\mathcal A}$. The result now follows from
\eqref{eq4} and the set ${\mathcal A}$ given in \eqref{a-sum<2}. (For example, if $[p_1^{n_1}, \dots, p_{\ell}^{n_{\ell}}] = [3^4,5^1]$ then $n = 4(2-(\frac{4(3-2)}{3} + \frac{1(5-2)}{5}))^{-1}= 60$. So, $(n, [p_1^{n_1}, \dots, p_{\ell}^{n_{\ell}}]) = (60, [3^4,5^1])$.)
\end{proof}
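The vertex counts in Corollary \ref{cor6} follow mechanically from \eqref{eq4}; e.g., in Python (illustration only):
\begin{verbatim}
from fractions import Fraction

def n_vertices(vertex_type):
    # n = 4/(2 - sum of n_i (p_i - 2)/p_i), cf. the proof above
    s = sum(Fraction(n * (p - 2), p) for p, n in vertex_type)
    return 4 / (2 - s)

for vt in ([(3, 3)], [(3, 4)], [(4, 3)], [(3, 4), (5, 1)],
           [(4, 1), (6, 1), (10, 1)]):
    print(vt, n_vertices(vt))   # 4, 6, 8, 60, 120
\end{verbatim}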
\begin{lemma}\label{lem3.4}
Let $K$ be a semi-equivelar map on $\mathbb{S}^2$. If the number of vertices and the vertex-type of $K$ are the same as those of the boundary $\partial P$ of a Platonic solid $P$ then $K \cong \partial P$.
\end{lemma}
\begin{proof}
If $P$ is the tetrahedron, then $K$ is a $4$-vertex triangulation of $\mathbb{S}^2$ and it is trivial to see that $K$ is unique up to isomorphism.
If $P$ is the octahedron, then $K$ is a $6$-vertex triangulation of $\mathbb{S}^2$ and degree of each vertex is $4$. It is easy to see that $K$ is unique up to an isomorphism (cf. \cite{Da1999}). This also implies that the dual map of $K$ which has $8$ vertices and is of vertex-type $[4^3]$ is unique up to an isomorphism. Hence an 8-vertex map of vertex-type $[4^3]$ on $\mathbb{S}^2$ is isomorphic to the boundary of the cube.
If $P$ is the icosahedron, then $K$ is a $12$-vertex triangulation of $\mathbb{S}^2$ and degree of each vertex is $5$. It is more or less known that it is unique up to an isomorphism (cf. \cite[Table 8]{SL2009}, \cite[Lemma 1]{Up2009}). This also implies that the dual map of $K$ which has $20$ vertices and is of type $[5^3]$ is unique up to an isomorphism.
Hence a $20$-vertex map of type $[5^3]$ on $\mathbb{S}^2$ is isomorphic to the boundary of the dodecahedron.
\end{proof}
We need Lemmas \ref{lem3.5}, \ref{lem3.8} and \ref{lem3.10} to prove Lemma \ref{lem3.12}.
\begin{lemma}\label{lem3.5}
Let $X$ be a semi-equivelar map on $\mathbb{S}^2$ of type $[p^1, q^2]$, where $q \geq 6$. If $\alpha, \beta$ are two $p$-gonal faces of $X$ then there exists at most one edge from $\alpha$ to $\beta$.
\end{lemma}
\begin{proof} Since the degree of each vertex is $3$, for each $p$-gonal face $\alpha$ of $X$ and $u \in \alpha$, there exists a unique edge of the form $uv$, where $v \not\in \alpha$ ~\&~ $v$ is in another $p$-gon. Consider the graph $G$ whose nodes are the $p$-gonal faces of $X$. Two such nodes $\alpha, \beta$ form a link in $G$ if there exists an edge $uv$ in $X$, where $u \in \alpha ~\&~ v \in \beta$. It is sufficient to show that $G$ is a simple graph.
Let $f_0(X) = n$. Then, by Corollary \ref{cor6}, $(n, [p^1, q^2]) = (12, [3^1, 6^2]), (24, [3^1, 8^2]),$ $(60,$ $ [3^1, 10^2]), (24, [4^1, 6^2])$ or $(60, [5^1, 6^2])$. Since $X$ is a polyhedral map, $G$ has no loop. If possible, suppose there is a pair of links between two nodes $\alpha ~\&~ \beta$ of $G$. Then there exist $u_1, u_2 \in \alpha ~\&~ v_1, v_2 \in \beta$ such that $u_1v_1, u_2v_2$ are edges in $X$. Suppose first that $u_1u_2$ is an edge (of $X$). Let $\gamma$ be the $q$-gonal face through $u_1u_2$. Since the degree of each vertex is $3$, it follows that $u_1v_1~\&~u_2v_2$ are edges of $\gamma$ and hence $v_1v_2$ is a diagonal of $\gamma$. This is not possible since $v_1, v_2 \in \beta$. So, $u_1u_2$ is not an edge. Similarly, $v_1v_2$ is a non-edge. Hence $p > 3$.
Let $p =5$. Then $n = 60$ and $q=6$. Since two $p$-gons in $X$ are disjoint, it follows that $G$ has $12$ nodes. If possible, suppose $G$ has a double link between two nodes $\alpha$ and $\beta$. This implies that there exist vertices $u_1, u_2 \in \alpha$ and $v_1, v_2 \in \beta$ such that $u_1v_1, u_2v_2$ are edges of $X$. By the above, $u_1u_2, v_1v_2$ are non-edges, and there are no further edges between $\alpha ~\&~ \beta$. Then $\alpha, \beta$ and the edges $u_1v_1~\&~ u_2v_2$ subdivide $\mathbb{S}^2$ into two disks $D_1, D_2$ (and the interiors of $\alpha ~\&~ \beta$). The boundary of $D_1$ contains $u_1, u_2, v_1, v_2$ and $m$ (say) further vertices, where $2 \leq m \leq 4$. For each of these $m$ vertices $w$, there exists an edge $wx$ and a pentagon $\gamma$ containing $x$. Let the number of pentagons inside $D_1$ be $\ell$. Since any double link in $G$ divides $\mathbb{S}^2$ into two parts, there are no double links between the nodes inside $D_1$. So, the pentagons inside $D_1$ form a simple graph. These imply that the number of edges between the $\ell$ nodes is $(5\ell-m)/2$. Then $(5\ell-m)/2 \leq \binom{\ell}{2}$. Thus $\ell(\ell-1) \geq 5\ell -m\geq 5\ell -4$ and hence $\ell^2-6\ell+4\geq 0$. This implies $\ell\geq 6$. So, the number of pentagons inside $D_1$ is $\geq 6$. Similarly, the number of pentagons inside $D_2$ is $\geq 6$. Therefore the number of pentagons in $X$ is $\geq 6+6+2=14$, a contradiction. Thus, $p\neq 5$. By similar arguments $p \neq 4$. This completes the proof.
\end{proof}
The {\em truncation} and {\em rectification} of polytopes are classically known (cf. \cite{Cox1940}). We omit the definitions for the sake of brevity.
\begin{proposition}[Coxeter]\label{cox:prop}
{\rm The truncation of tetrahedron (respectively, cube, octahedron, dodecahedron, icosahedron, cuboctahedron and icosidodecahedron) gives the truncated tetrahedron (respectively, truncated cube, truncated octahedron, truncated dodecahedron, truncated icosahedron, great rhombicuboctahedron and great rhombicosidodecahedron). The rectification of cube (respectively, dodecahedron, icosidodecahedron and cuboctahedron) gives the cuboctahedron (respectively, icosidodecahedron, small rhombicosidodecahedron and small rhombicuboctahedron).}
\end{proposition}
Here we present some combinatorial versions of truncation and rectification of polytopes.
\begin{definition}\label{dfn1}
{\rm Let $P$ be a $3$-polytope and $TP$ be the truncation of $P$. Let $X \cong \partial P$ and $V(X) = \{u_1, \dots, u_n\}$. Without loss of generality, we identify $X$ with $\partial P$. Consider a new set (of nodes) $V := \{v_{ij} \colon u_iu_j$ is an edge of $X\}$. So, if $v_{ij} \in V$ then $i \neq j$ and $v_{ji}$ is also in $V$. Let $E := \{v_{ij}v_{ji} \colon v_{ij} \in V\} \sqcup \{v_{ij}v_{ik} \colon u_j, u_k$ are in a face containing $u_i, 1\le i \le n\}$. Then $(V, E)$ is a graph on $X$. Clearly, from the construction, $(V, E) \cong $ the edge graph of $TP$. Thus $(V, E)$ gives a map $T(X)$ on $\mathbb{S}^2$. This map $T(X)$ is said to be the} truncation {\rm of $X$}.
\end{definition}
From Definition \ref{dfn1} $\&$ the (geometric) construction of truncation of polytopes we get
\begin{lemma}\label{lem3.8}
Let $X$, $T(X)$ and $TP$ be as in Definition \ref{dfn1}. Then, $T(X)$ is isomorphic to the boundary of $TP$. Moreover, if $X$ is semi-equivelar of type $[q^p]$ $($resp., $[p^1, q^1, p^1, q^1])$, then $T(X)$ is also semi-equivelar and of type $[p^1, (2q)^2]$ $($resp., $[4^1, (2p)^1, (2q)^1])$.
\end{lemma}
\begin{proof} Let $P, V(X), V, E$ be as in Definition \ref{dfn1}. Then, from the definition of the truncated polytope and the construction in Def. \ref{dfn1}, $T(X) \cong \partial(TP)$.
Let $X$ be semi-equivelar of $[q^p]$. From the construction in Def. \ref{dfn1}, the set of faces of $T(X)$ is $\{\tilde{\alpha} = v_{i_1i_2}\mbox{-}v_{i_2i_1}\mbox{-}v_{i_2i_3}\mbox{-}v_{i_3i_2}\mbox{-}v_{i_3i_4}\mbox{-}\cdots\mbox{-}v_{i_qi_1}\mbox{-}v_{i_1i_q}\mbox{-}v_{i_1i_2} \colon \alpha = u_{i_1}\mbox{-}u_{i_2}\mbox{-}\cdots\mbox{-}u_{i_q}\mbox{-}u_{i_1}$ is a face of $X\} \sqcup \{\tilde{u_i} = v_{ij_1}\mbox{-}v_{ij_2}\mbox{-}\cdots\mbox{-}v_{ij_q}\mbox{-}v_{ij_1} \colon u_{j_1}\mbox{-}P_1\mbox{-}u_{j_2}\mbox{-}P_2\mbox{-}u_{j_3}\mbox{-}\cdots\mbox{-}P_{q}\mbox{-}u_{j_1}$ is the link\mbox{-}cycle of $u_i\in V(X)\}$.
Thus, the faces through the vertex $v_{ij}$ are the $p$-gonal face $\tilde{u_i}$ and the two $2q$-gonal faces $\tilde{\alpha}~\&~\tilde{\beta}$ where $\alpha~\&~\beta$ are faces in $X$ containing the edge $u_iu_j$. Observe that the face-cycle of $v_{ij}$ is $\tilde{u_i}\mbox{-}\tilde{\alpha}\mbox{-}\tilde{\beta}\mbox{-}\tilde{u_i}$. Thus, $T(X)$ is semi-equivelar and the vertex-type is $[p^1, (2q)^2]$.
Let the vertex-type of $X$ be $[p^1, q^1, p^1, q^1]$. From Def. \ref{dfn1}, the set of faces of $T(X)$ is $\{\tilde{\alpha} := v_{i_1i_2}\mbox{-}v_{i_2i_1}\mbox{-}v_{i_2i_3}\mbox{-}v_{i_3i_2}\mbox{-}v_{i_3i_4}\mbox{-}\cdots\mbox{-}v_{i_ri_1}\mbox{-}v_{i_1i_r}\mbox{-}v_{i_1i_2} \colon \alpha = u_{i_1}\mbox{-}\cdots\mbox{-}u_{i_r}\mbox{-}u_{i_1}$ is an $r$-gonal face of $X, \ r = p, q\}
\sqcup \{\tilde{u_i} := v_{it_1}\mbox{-}v_{it_2}\mbox{-}v_{it_3}\mbox{-}v_{it_4}\mbox{-}v_{it_1} \colon u_{t_1}\mbox{-}P_1\mbox{-}u_{t_2}\mbox{-}P_2\mbox{-}u_{t_3}\mbox{-}P_{3}\mbox{-}u_{t_4}\mbox{-}P_{4}$ $\mbox{-}u_{t_1}$ is the link\mbox{-}cycle of $u_i\in V(X)\}$.
Thus, the faces through the vertex $v_{ij}$ are the square $\tilde{u_i}$, the $2p$-gonal face $\tilde{\alpha}$ and the $2q$-gonal face $\tilde{\beta}$, where $\alpha$ is a $p$-gonal face and $\beta$ is a $q$-gonal face in $X$ containing the edge $u_iu_j$. Observe that the face-cycle of $v_{ij}$ is $\tilde{u_i}\mbox{-}\tilde{\alpha}\mbox{-}\tilde{\beta}\mbox{-}\tilde{u_i}$. Thus, $T(X)$ is semi-equivelar and the vertex-type is $[4^1, (2p)^1, (2q)^1]$.
\end{proof}
\begin{definition}\label{dfn2}
{\rm Let $P$ be a polytope and $RP$ be the rectification of $P$. Let $X \cong \partial P$ and $V(X) = \{u_1, \dots, u_n\}$. Without loss of generality, we identify $X$ with $\partial P$. Consider the graph $(V, E)$, where $V$ is the edge set $E(X)$ of $X$ and $E := \{ef \colon e, f$ are two adjacent edges in a face of $X\}$. Then $(V, E)$ is a graph on $X$. From the definition of rectification, it follows that $(V, E) \cong $ the edge graph of $RP$. Thus $(V, E)$ gives a map, say $R(X)$, on $\mathbb{S}^2$, which is said to be the} rectification {\rm of $X$}.
\end{definition}
From Definition \ref{dfn2} $\&$ the (geometric) construction of rectification of polytopes we get
\begin{lemma}\label{lem3.10}
Let $X$, $R(X)$ and $RP$ be as in Definition \ref{dfn2}. Then, $R(X)$ is isomorphic to the boundary of $RP$.
Moreover, if $X$ is semi-equivelar of type $[q^p]$ $($resp., $[p^1, q^1, p^1, q^1])$, then $R(X)$ is also semi-equivelar
and of vertex-type $[p^1, q^1, p^1, q^1]$ $($resp., $[4^1, p^1, 4^1, q^1])$.
\end{lemma} | {"config": "arxiv", "file": "1804.06692/bddm3re.tex"} |
TITLE: Anti-unitary operator and hamiltonian
QUESTION [1 upvotes]: For a symmetry represented by a unitary operator $U$ to be a dynamical symmetry, we require the condition that
$Ue^{-iHt/\hbar}=e^{-iHt/\hbar}U$, which implies $UHU^*=H$.
If instead $U$ is an anti-unitary operator, show that the above equation would imply that $UHU^*=-H$.
I'm not too sure how to do this question. I don't really understand how the first implication is derived from the condition, and secondly I don't see how this changes for an anti-unitary operator.
$H$ is the Hamiltonian, and the definitions of unitary operator and anti-unitary operators are as follows:
A unitary operator $U$ on a Hilbert space is a linear map $U :\mathcal{H} \rightarrow \mathcal{H}$ that obeys $UU^*=U^*U=1_{\mathcal{H}}$ ($U^*$ being the adjoint).
An anti-unitary operator on a Hilbert space is a surjective antilinear map $A :\mathcal{H} \rightarrow \mathcal{H}$ obeying $\langle A\phi |A\psi \rangle = \overline {\langle \phi | \psi \rangle} = \langle \psi | \phi \rangle$
REPLY [1 votes]: A unitary operator is a linear surjective operator $U : {\cal H} \to {\cal H}$ that preserves the norm. It is equivalent to $U^*=U^{-1}$, namely $UU^*=U^*U=I$, where $U^*$ henceforth denotes the adjoint of $U$.
An antiunitary operator is an antilinear surjective operator $U : {\cal H} \to {\cal H}$ that preserves the norm. It is equivalent to $U$ bijective such that
$$\langle U\psi|U\phi\rangle = \overline{\langle \psi| \phi\rangle}\:,\quad \forall \psi, \phi \in {\cal H}\:.$$
Now suppose that, in either case, for all $t\in \mathbb{R}$
$$U e^{-itH} = e^{-itH}U\:.$$
By applying $U^{-1}$ on the right, we get the equivalent condition
$$Ue^{-itH} U^{-1}= e^{-itH}\:.\tag{1}$$
From spectral calculus or other more elementary procedures, e.g., expanding the exponential as a series if $H$ is bounded and paying attention to
$U i H = -iUH$ in view of the antilinearity of $U$ when that is the case, (1) entails
$$ e^{\mp itUHU^{-1}} = e^{-itH}\:.$$
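Explicitly, for the antiunitary case with bounded $H$, the series computation reads
$$U e^{-itH}U^{-1} = \sum_{n=0}^\infty U\,\frac{(-it)^n}{n!}\,H^n\,U^{-1} = \sum_{n=0}^\infty \frac{\overline{(-it)^n}}{n!}\,\big(UHU^{-1}\big)^n = e^{+\,it\,UHU^{-1}}\:,$$
since $Uc = \bar{c}\,U$ for every scalar $c$ and $UH^nU^{-1} = (UHU^{-1})^n$.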
Computing the derivative at $t=0$ (Stone's theorem) of both sides (on the relevant dense domain of $H$, which turns out to be invariant under $U^{-1}$, directly from the uniqueness part of Stone's theorem):
$$\pm UHU^{-1} = H\:,$$
that is
$$UHU^{-1} = \pm H\:,\tag{2}$$
where the sign $-$ is reserved to the antiunitary case.
In case of a unitary operator, we have also found that
$$UHU^{*} = H$$
because $U^*=U^{-1}$. In case of an antiunitary $U$, with a suitable definition ($\dagger$) of adjoint operator for antilinear operators, we can equivalently rewrite (2) as
$$UHU^{*} = -H\:.$$
However the definition of adjoint of an antiunitary operator is usually delicate and, in my personal experience, it is a source of mistakes. When dealing with symmetries it is much better to use $U^{-1}$ in both cases in place of $U^*$.
$(\dagger)$ $\langle \psi|A \phi\rangle = \overline{\langle A^*\psi| \phi\rangle}$ for all $\psi,\phi\in {\cal H}$ assuming $A$ everywhere defined and antilinear.
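As a quick numerical sanity check of the sign in (2) (a sketch using numpy/scipy; here $U$ is taken to be plain complex conjugation $K$, and $H$ is chosen Hermitian and purely imaginary so that the symmetry condition holds):

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4))
    A = A - A.T                  # real antisymmetric
    H = 1j * A                   # Hermitian and purely imaginary: conj(H) = -H

    Ut = expm(-1j * 0.7 * H)     # e^{-itH} at t = 0.7

    # U = K (complex conjugation): (K M K) psi = conj(M) psi, so the
    # symmetry condition K e^{-itH} K = e^{-itH} says conj(Ut) == Ut:
    print(np.allclose(Ut.conj(), Ut))    # True
    # ...and then U H U^{-1} = conj(H) = -H, the extra minus sign:
    print(np.allclose(H.conj(), -H))     # True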
REPLY [0 votes]: There are a couple of confusing (or even wrong?) points in the post. First, I assume $U^*$ means $U^\dagger$, the adjoint of $U$. A unitary symmetry means $UHU^\dagger=H$.
An anti-unitary operator is first of all an anti-linear operator instead of a linear one. If $U$ is an anti-unitary symmetry, then one still has $UHU^\dagger=H$; there should not be an extra minus sign. However, the definition of adjoint for anti-linear operators is different from that of a linear operator.
Edit: the other answer is correct. Usually for a time-reversal symmetry (which is the most common way one gets anti-unitary symmetry) we also take $t$ to $-t$ so $UHU^\dagger=H$. But if $U$ is just anti-unitary without $t$ going to $-t$, then because $Ui=-iU$ we have the extra minus sign. | {"set_name": "stack_exchange", "score": 1, "question_id": 693606} |
TITLE: What is the shortest length of an Egyptian fraction expansion for a given $p/q$?
QUESTION [3 upvotes]: An Egyptian fraction expansion is a sum of reciprocals of integers, for example:
$$\frac{4}{17} = \frac{1}{5} + \frac{1}{29} + \frac{1}{1233} + \frac{1}{3039345}$$
Every positive rational number $p/q$ has such an expansion, although it is not unique:
$$\frac{4}{17} = \frac{1}{5} + \frac{1}{30} + \frac{1}{510}$$
Let $\ell(p/q)$ = the number of terms in a minimal (i.e. fewest terms) Egyptian fraction expansion of $p/q$.
Given a positive rational $p/q$, how can we compute $\ell(p/q)$?
REPLY [3 votes]: If there is an expansion with $k$ terms, one of the denominators is at most $kq/p$. So to check whether there is an expansion with at most $k$ terms: for each $m$ from $\lceil q/p \rceil$ to $\lfloor kq/p \rfloor$, check recursively whether $p/q - 1/m$ has an expansion with at most $k-1$ terms.
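In Python, with exact rationals, the recursive check might look like this (a sketch; denominators may repeat here — to force distinct denominators, pass a lower bound for $m$ through the recursion):

    from fractions import Fraction
    from math import ceil, floor

    def has_expansion(r, k):
        # can r be written as a sum of at most k unit fractions?
        if r == 0:
            return True
        if k == 0:
            return False
        p, q = r.numerator, r.denominator
        # any k-term expansion contains some 1/m with r/k <= 1/m <= r,
        # i.e. ceil(q/p) <= m <= floor(k*q/p)
        for m in range(ceil(Fraction(q, p)), floor(Fraction(k * q, p)) + 1):
            if has_expansion(r - Fraction(1, m), k - 1):
                return True
        return False

    def ell(r):
        k = 1
        while not has_expansion(r, k):
            k += 1
        return k

    print(ell(Fraction(4, 17)))   # 3, e.g. 4/17 = 1/5 + 1/30 + 1/510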
Whether there is a polynomial-time algorithm is, I think, an open question. | {"set_name": "stack_exchange", "score": 3, "question_id": 308385} |
TITLE: How to rotate a line in the complex plane?
QUESTION [1 upvotes]: How do I rotate the line $arg(z) = 0$ by $\frac{\pi}{4}$ radians counter-clockwise about the origin in the complex plane.
The general transformation is $z\mathrm{e}^{\frac{\pi}{4}\mathrm{i}}$; however, how do I algebraically find the image of the line under this rotation?
Thanks.
REPLY [0 votes]: If I understand both the OP's desire and gimusi's response correctly, then an alternative approach is to algebraically prove that for complex z and w, arg(zw) = arg(z) + arg(w).
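In particular, for $w = \mathrm{e}^{\frac{\pi}{4}\mathrm{i}}$ and $z$ on the ray $\arg(z) = 0$, i.e. $z = r$ with $r > 0$:
$$\arg\!\left(z\,\mathrm{e}^{\frac{\pi}{4}\mathrm{i}}\right) = \arg(z) + \frac{\pi}{4} = \frac{\pi}{4},$$
so the image is exactly the ray $\arg(w') = \frac{\pi}{4}$, i.e. $y = x$ with $x > 0$.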
Thus, when |w| = 1, rotating z by arg(w) is equivalent to multiplying z by w. Therefore, rotating z by $\;\pi/4\;$ is equivalent to multiplying z by $\;\frac{1}{\sqrt{2}}(1 + i).$ | {"set_name": "stack_exchange", "score": 1, "question_id": 3393622} |
\begin{document}
\maketitle
\begin{abstract}
The existence of fundamental cardinal exponential B-splines of positive real order $\sigma$ is established subject to two conditions on $\sigma$ and their construction is implemented. A sampling result for these fundamental cardinal exponential B-splines is also presented.
\vskip 5pt\noindent
\textit{Keywords:} Exponential spline, interpolation, fundamental cardinal spline, sampling, Hurwitz zeta function, Kramer's lemma.
\vskip 5pt\noindent
\textit{MSC (2020):} 11M35, 65D05, 65D07, 94A11, 94A20
\end{abstract}
\section{Introduction}
Cardinal exponential B-splines of order $n\in \N$ are defined as $n$-fold convolution products of exponential functions of the form $e^{a (\cdot)}$ restricted to the unit interval $[0,1].$ More precisely, let $n\in \N$ and $\boldsymbol{a}:=(a_1, \ldots, a_n)\in \R^n$, with at least one $a_j\neq 0$, $j = 1, \ldots, n$. A {\em cardinal exponential B-spline of order $n$ associated with the $n$-tuple of parameters $\bfa$} is defined by \be\label{regE}
E_n^{\bfa} :=\underset{j = 1}{\overset{n}{*}} \left(e^{a_j (\mydot)}\chi\right),
\ee
where $\chi$ denotes the characteristic function of the unit interval $[0,1]$.
This wider class of splines shares several properties with the classical Schoenberg polynomial B-splines, but there are also significant differences that make them useful for different purposes. In \cite{CM}, an explicit formula for these functions was established and those cases characterized for which the integer translates of an exponential B-spline form a partition of unity up to a nonzero multiplicative factor. In addition, series expansions for $L^2(\R)$--functions in terms of shifted and modulated versions of exponential B-splines were derived and dual pairs of Gabor frames based on exponential B-splines constructed. We remark that exponential B-splines have also been employed to construct multiresolution analyses and to obtain wavelet expansions. (See, e.g., \cite{LY,unserblu00}.) Furthermore, in \cite{CS} it is shown that exponential splines play an important role in setting up a one-to-one correspondence between dual pairs of Gabor frames and dual pairs of wavelet frames. For an application to some numerical methods, we refer the interested reader to \cite{EKD} and \cite{KED}.
\nl
In \cite{m14}, a new class of more general cardinal exponential B-splines, so-called cardinal exponential B-splines of complex order, was introduced, some properties derived and connections to fractional differential operators and sampling theory exhibited.
Classical polynomial B-splines $B$ can be used to derive fundamental splines which are linear combinations of integer translates of $B$ and which interpolate the set of data points $\{(m, \delta_{m,0}) : m\in \Z\}$. As it turns out, even generalizations of these polynomial B-splines, namely, polynomial B-splines of complex and even quaternionic order, do possess associated fundamental splines provided the order is chosen to lie in certain nonempty subregions of the complex plane or quaternionic space. For details, we refer the interested reader to \cite{fgms} in the former case and to \cite{hm} in the latter.
In this article, we consider cardinal exponential B-splines of positive real order, so-called cardinal fractional exponential B-splines (to follow the terminology already in place for the polynomial B-splines). As we only deal with cardinal splines, we drop the adjective ``cardinal" from now on. By extending the integral order $n\in\N$ of the classical exponential B-splines to real orders $\sigma > 1$, one achieves a higher degree of regularity at the knots.
The structure of this paper is as follows. In Section 2 we define fractional exponential splines and present those properties that are important for the remainder of this article. The fundamental exponential B-spline is constructed in Section 3 following the procedure for the polynomial splines. However, as the Fourier transform of an exponential B-spline includes an additional positive term, the construction and the proof of existence of fundamental exponential B-splines associated with fractional exponential B-splines is more involved. Section 4 deals with a sampling result for fundamental exponential B-splines.
\section{Fractional Exponential B-Splines}\label{sec2}
In order to extend the classical exponential B-splines to incorporate real orders $\sigma$, we work in the Fourier domain. To this end, we take the Fourier transform of an exponential function of the form $e^{ -a x}\chi$, $a\in \R$, and define a \emph{fractional exponential B-spline} in the Fourier domain as the $\sigma$-th power of this transform:
\be\label{expspline}
\widehat{E^\sigma_a} (\xi):= \left(\int_\R e^{ -a x}\chi(x)\,e^{- i x \xi}\,dx\right)^{\!\sigma} = \left(\frac{1-e^{-(a+i\xi)}}{a+i\xi}\right)^\sigma,\quad\xi\in \R.
\ee
Note that we may interpret the above Fourier transform for real-valued argument $\xi$ as a transform with complex-valued argument by setting $z:= a + i\,\xi$:
\be\label{complexF}
\widehat{E^\sigma_a} (\xi) = \cF(z)^\sigma, \qquad \cF(z) := \int_\R \chi(x)\,e^{- z x}\,dx,\quad z\in \C.
\ee
It can be shown \cite{m14} (for complex $\sigma$) that the function
\[
\Xi(\xi, a) := \frac{1-e^{-(a+i\xi)}}{a+i\xi},
\]
is only well-defined for $a \geq 0$: for $a<0$, the closed curve $\xi\mapsto 1-e^{-(a+i\xi)}$ winds around the origin, so that no continuous branch of its $\sigma$-th power exists. As $a=0$ yields the fractional polynomial B-splines, we assume henceforth that $a > 0$.
From \cite{m14}, we immediately derive the time domain representation for a fractional exponential B-spline $E_a^\sigma$ assuming $\sigma > 1$:
\be\label{timerep}
E_a^\sigma (x) = \frac{1}{\Gamma(\sigma)}\,\sum_{k=0}^\infty \binom{\sigma}{k} (-1)^k e^{-k a} e_+^{-a(x-k)}\,(x-k)_+^{\sigma-1},
\ee
where $e_+^{(\cdot)} := \chi_{[0,\infty)}\,e^{(\cdot)}$ and $x_+ := \max\{x,0\}$. It was shown that the sum converges both point-wise in $\R$ and in the $L^2$--sense.
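For numerical work, the series \eqref{timerep} is straightforward to evaluate directly; the following Python sketch (an illustration, not part of the mathematical development) does so, and the truncation after $K$ terms is exact on $[0,K]$ since the terms with $k\geq x$ vanish:
\begin{verbatim}
import numpy as np
from scipy.special import binom, gamma

def E(x, a, sigma, K=60):
    # truncated series for E_a^sigma; exact for 0 <= x <= K
    x = np.asarray(x, dtype=float)
    s = np.zeros_like(x)
    for k in range(K):
        xk = np.clip(x - k, 0.0, None)            # (x - k)_+
        s += (binom(sigma, k) * (-1)**k * np.exp(-k * a)
              * np.exp(-a * xk) * xk**(sigma - 1))
    return s / gamma(sigma)
\end{verbatim}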
Next, we summarize some additional properties of exponential B-splines.
\begin{proposition}\label{prop1}
$\abs{\widehat{E^\sigma_a}}\in \cO(\abs{\xi}^{-\sigma})$ as $\abs{\xi}\to\infty$.
\end{proposition}
\begin{proof}
This follows directly from the following chain of inequalities:
\begin{align*}
\abs{\widehat{E^\sigma_a}(\xi)} = \abs{\left(\frac{1-e^{-(a+i\xi)}}{a+i\xi}\right)^\sigma} \leq \frac{2^\sigma}{\abs{a + i \xi}^\sigma} = \frac{2^\sigma}{(a^2+\xi^2)^{\sigma/2}}\leq \frac{2^\sigma}{\abs{\xi}^\sigma},\qquad\abs{\xi} \gg 1.
\end{align*}
\end{proof}
\begin{proposition}
${E^\sigma_a}$ is in the Sobolev space $W^{s,2}(\R)$ for $s < \sigma -\frac12$.
\end{proposition}
\begin{proof}
This is implied by Proposition \ref{prop1} and the corresponding result for polynomial B-splines (cf. \cite[Section 5.1]{forster06}).
\end{proof}
\begin{proposition}\label{prop3}
${E^\sigma_a}\in C^{\lfloor\sigma\rfloor - 1}(\R)$.
\end{proposition}
\begin{proof}
The function $\xi\mapsto \frac{\xi^n}{(a^2+\xi^2)^{\sigma/2}}$ is in $L^1(\R)$ if $\sigma - n > 1$, which holds for $n \leq \lfloor\sigma\rfloor - 1$ when $\sigma\notin\N$. Hence, by Proposition \ref{prop1}, $\xi\mapsto \xi^n\,\widehat{E^\sigma_a}(\xi)$ is in $L^1(\R)$ for such $n$, and the claim follows from the Fourier inversion theorem.
\end{proof}
\section{The Interpolation Problem for Fractional Exponential B-Splines}
In order to solve the cardinal spline interpolation problem using the classical Curry-Schoenberg splines \cite{chui,schoenberg}, one constructs a fundamental cardinal spline function that is a linear bi-infinite combination of polynomial B-splines $B_n$ of fixed order $n\in \N$ which interpolates the data set $\{\delta_{m,0}: m\in \Z\}$. More precisely, one looks for a solution of the bi-infinite system
\be\label{intprob}
\sum_{k\in \Z} c_k^{(n)} B_n \left(\frac{n}{2} + m - k\right) = \delta_{m,0},\quad m\in \Z,
\ee
i.e., for a sequence $\{c_k^{(n)}: k\in \Z\}$. The left-hand side of (\ref{intprob}) defines the fundamental cardinal spline $L_n:\R\to\R$ of order $n\in \N$. A formula for $L_n$ is given in terms of its Fourier transforms by
\be\label{fundspline}
\hL_n (\xi) = \frac{\left(\hB_n (\cdot + \frac{n}{2})\right)(\xi)}{\displaystyle{\sum_{k\in \Z}}\,\left(\hB_n (\cdot + \frac{n}{2})\right)(\xi + 2\pi k)}.
\ee
Using the Euler-Frobenius polynomials associated with the B-splines $B_n$, one can show that the denominator in (\ref{fundspline}) does not vanish on the unit circle $|{z}| = 1$, where ${z} = e^{-i \xi}$. For details, see \cite{chui, schoenberg}.
One of the goals in the theory of fractional exponential B-splines is to construct a {\em fundamental cardinal exponential spline $L_a^\sigma:\R\to \R$ of real order $\sigma>1$} of the form
\be\label{compint2}
L_a^\sigma := \sum_{k\in \Z} c_k^{(\sigma)} E_a^\sigma \left(\mydot - k\right),
\ee
satisfying the interpolation problem
\be\label{complexint2}
L_a^\sigma (m) = \delta_{m,0}, \quad m\in \Z,
\ee
for an appropriate bi-infinite sequence $\{c_k^{(\sigma)} : k\in \Z\}$ and for an appropriate $\sigma$ belonging to some nonempty subset of $\R$.
Taking the Fourier transform of (\ref{compint2}) and (\ref{complexint2}), applying the Poisson summation formula and eliminating the expression containing the unknowns $\{c_k^{(\sigma)}: k\in \Z\}$, a formula for $L_a^\sigma$ similar to (\ref{fundspline}) is, at first, formally obtained:
\be\label{compfundspline}
\wh{L_a^\sigma} (\xi) = \frac{\wh{E_a^\sigma} (\xi)}{\displaystyle{\sum\limits_{k\in \Z}}\,\wh{E_a^\sigma} (\xi + 2\pi k)}.
\ee
Inserting \eqref{expspline} into the above expression for $\wh{L_a^\sigma}$ and simplifying yields
\[
\wh{L_a^\sigma} (\xi) = \frac{1/(\xi + i a)^\sigma}{\displaystyle{\sum_{k\in \Z}}\, \frac{1}{(\xi + 2\pi k + i a)^\sigma}},\quad \sigma > 1.
\]
As the denominator of (\ref{compfundspline}) is $2\pi$-periodic in $\xi$, we may assume without loss of generality that $\xi\in [0,2\pi]$. Let $q:= q(a):=\frac{\xi + i a}{2\pi}$, and note that $0\leq \Re q\leq 1$ and $\Im q > 0$. The denominator in the above expression for $\wh{L_a^\sigma}$ can then - after cancelation of the $(2\pi)^\sigma$ term - be formally rewritten in the form
\begin{align}
\sum_{k\in \Z}\, \frac{1}{(q + k)^\sigma} &= \sum_{k=0}^\infty\, \frac{1}{(q + k)^\sigma} + \sum_{k=0}^\infty\, \frac{1}{(q - (1 + k))^\sigma}
\nonumber\\
&= \sum_{k=0}^\infty\, \frac{1}{(q + k)^\sigma} + e^{-i \pi \sigma}\sum_{k=0}^\infty\, \frac{1}{(1 - q + k)^\sigma}
\nonumber\\
&= \zeta (\sigma,q) + e^{-i \pi \sigma}\,\zeta (\sigma, 1 -q),
\label{Zerlegung in zetas}
\end{align}
where we take the principal value of the multi-valued function $e^{-i \pi (\cdot)}$ and where $\zeta(\sigma,q)$, $q\notin\Z_0^-$, denotes the generalized zeta function \cite[Section 1.10]{erdelyi} which agrees with the Hurwitz zeta function when $\Re q > 0$, the case we are dealing with here.
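The decomposition \eqref{Zerlegung in zetas} is readily confirmed numerically; for instance, with {\tt mpmath} (assuming its {\tt zeta(s, a)}, which implements the Hurwitz zeta function also for complex second argument):
\begin{verbatim}
import mpmath as mp

sigma = mp.sqrt(6)
q = mp.mpc(0.3, 2 / (2 * mp.pi))     # q = (xi + i a)/(2 pi), a = 2
lhs = (mp.nsum(lambda k: (q + k)**(-sigma), [0, mp.inf])
       + mp.nsum(lambda k: (q - (1 + k))**(-sigma), [0, mp.inf]))
rhs = mp.zeta(sigma, q) + mp.exp(-1j * mp.pi * sigma) * mp.zeta(sigma, 1 - q)
print(mp.chop(lhs - rhs, tol=1e-10))  # 0.0 (up to numerical error)
\end{verbatim}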
For $\xi = 0$, we have $\Re q = 0$ and thus
\begin{align*}
\sum_{k=0}^\infty\, \frac{1}{\abs{q + k}^\sigma} &= \sum_{k=0}^\infty\, \frac{1}{(k^2 + a^2/4\pi^2)^{\sigma/2}} \\
&\leq \left(\frac{2\pi}{a}\right)^\sigma + \int_0^1 \frac{dx}{(x^2 + a^2/4\pi^2)^{\sigma/2}} + \int_1^\infty \frac{dx}{(x^2 + a^2/4\pi^2)^{\sigma/2}} < \infty,
\end{align*}
as $\sigma > 1$. The last integral above evaluates to
\[
\left(\frac{a}{2\pi}\right)^{-\sigma}\left[\left(\frac{a}{4\pi}\right)\,B\left(\frac12,\frac{\sigma-1}{2}\right) - \,_2F_1\left(\frac{1}{2},\frac{\sigma }{2};\frac{3}{2};-\frac{4 \pi^2}{a^2}\right)\right],
\]
where $B$ and $_2F_1$ denote the Beta and Gauss's hypergeometric function, respectively. (See, e.g., \cite{GR}.)
Replacing in the above expression $k$ by $k+1$, one shows in a similar fashion that $\sum\limits_{k=0}^\infty\, \frac{1}{(1 - q + k)^\sigma}$ also converges absolutely. Hence, $\wh{L_a^\sigma}$ is defined and finite at $\xi = 0$.
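The closed form of the last integral is likewise easy to verify numerically, e.g.\ with {\tt mpmath}:
\begin{verbatim}
import mpmath as mp

a, sigma = mp.mpf(2), mp.sqrt(6)
v = a / (2 * mp.pi)
lhs = mp.quad(lambda x: (x**2 + v**2)**(-sigma / 2), [1, mp.inf])
rhs = v**(-sigma) * (a / (4 * mp.pi) * mp.beta(0.5, (sigma - 1) / 2)
                     - mp.hyp2f1(0.5, sigma / 2, 1.5, -4 * mp.pi**2 / a**2))
print(mp.chop(lhs - rhs, tol=1e-10))  # 0.0
\end{verbatim}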
For $\xi = 2\pi$, $q$ and $1-q$ are interchanged and we immediately obtain from the above arguments that $\wh{L_a^\sigma}$ is defined and finite at $\xi = 2\pi$. Thus, it suffices to consider $0 < \Re q < 1$.
Next, we show that the denominator in (\ref{compfundspline}) does not vanish, i.e., that $L_a^\sigma$ is well-defined for appropriately chosen $\sigma$. To this end, it suffices to find conditions on $\sigma$ such that the function
$$
Z(\sigma, q) :=\zeta (\sigma,q) + e^{-i \pi \sigma}\,\zeta (\sigma, 1 - q)
$$
has no zeros for all $\Re q\in (0,1)$ and a fixed $a >0$.
We require the following lemma which is based on a result in \cite{spira} for the case of real $q$.
\begin{lemma}\label{lem1}
Let $q = u + i v$ where $0 < u := \frac{\xi}{2\pi} < 1$ and $v := \frac{a}{2\pi} > 0$. If \[
\sigma > \sigma_0 := \tfrac12+\sqrt{2}\sqrt{1+v^2+v^4},
\]
then $\zeta (\sigma, q) \neq 0$.
\end{lemma}
\begin{proof}
We have that
\begin{align*}
\abs{\zeta(\sigma, q)} & \geq \frac{1}{\abs{q}^\sigma} - \sum_{k\geq 1}\frac{1}{\abs{k+q}^\sigma} \\
& > \frac{1}{\abs{q}^\sigma} - \frac{1}{\abs{q+1}^\sigma} - \int_1^\infty \frac{dx}{\big[(u+x)^2 + v^2\big]^{\sigma/2}}.
\end{align*}
Now, for $x\geq 1$, $0<u<1$ and $v>0$,
\[
\sqrt{(u+x)^2 + v^2} \geq \sqrt{(u+1)^2 + v^2} + \frac{(1+u)(x-1)}{\sqrt{(u+1)^2 + v^2}}
\]
as can be shown by direct computation:
\[
(u+x)^2 + v^2 - \left(\sqrt{(u+1)^2 + v^2} + \frac{(1+u)(x-1)}{\sqrt{(u+1)^2 + v^2}}\right)^2 = \frac{(x-1)^2 v^2}{(u+1)^2 + v^2}\geq 0.
\]
The above inequality shows that
\begin{align*}
\int_1^\infty \frac{dx}{\big[(u+x)^2 + v^2\big]^{\sigma/2}} & < \int_1^\infty \frac{dx}{\left(\sqrt{(u+1)^2 + v^2} + \frac{(1+u)(x-1)}{\sqrt{(u+1)^2 + v^2}}\right)^\sigma}\\
& = \frac{\left[(u+1)^2+v^2\right]^{1-\sigma/2}}{(\sigma -1 )(u+1)}.
\end{align*}
Therefore,
\[
\abs{\zeta(\sigma, q)} > \frac{1}{\abs{q}^\sigma} - \frac{1}{|q+1|^\sigma} - \frac{\left[(u+1)^2+v^2\right]^{1-\sigma/2}}{(\sigma -1 )(u+1)},
\]
and the right-hand side of this inequality is strictly positive if
\be\label{1}
\frac{1}{\abs{q}^\sigma} > \frac{1}{|q+1|^\sigma} + \frac{\left[(u+1)^2+v^2\right]^{1-\sigma/2}}{(\sigma -1 )(u+1)}.
\ee
Replacing $q$ by $u + i v$ and simplifying shows that inequality \eqref{1} is equivalent to
\be\label{12}
\left(1+\frac{2u+1}{u^2+v^2}\right)^{\sigma/2} > 1 + \frac{(u+1)^2+v^2}{(\sigma -1 )(u+1)}.
\ee
Employing the Bernoulli inequality to the expression on the left-hand side of \eqref{12}, yields
\[
\left(1+\frac{2u+1}{u^2+v^2}\right)^{\sigma/2} \geq 1 + \frac{\sigma}{2}\frac{2u+1}{u^2+v^2},
\]
which implies that \eqref{12} holds if
\[
\frac{\sigma}{2}\cdot\frac{2u+1}{u^2+v^2} > \frac{(u+1)^2+v^2}{(\sigma -1 )(u+1)},
\]
or, equivalently,
\[
\sigma\, (\sigma -1) > \frac{2(u^2+v^2)[(u+1)^2+v^2]}{(1+u)(1+2u)}.
\]
Performing the polynomial division on the right-hand side of the above inequality produces
\[
\sigma\, (\sigma -1) > 2v^2+u^2+\tfrac12 u - \tfrac14+\frac{\frac14+2v^4+\frac14 u -2 u v^2}{(1+u)(1+2u)}
\]
and this inequality holds if
\be\label{3}
\sigma\, (\sigma -1) > 2v^2 + 2v^4 +\tfrac74,
\ee
where we used the fact that $0<u<1$.
Thus, inequality \eqref{3} holds if
\[
\sigma > \sigma_0 := \tfrac12+\sqrt{2}\sqrt{1+v^2+v^4}.\qedhere
\]
\end{proof}
\begin{theorem}
The function $Z(\sigma, q) = \zeta (\sigma,q) + e^{-i \pi \sigma}\,\zeta (\sigma, 1 - q)$ with $q = \frac{1}{2\pi}(\xi + i\,{a})$ has no zeros provided
\be\label{sigma}
\sigma \geq \sigma_0 = \tfrac12+\sqrt{2}\sqrt{1+\frac{a^2}{4\pi^2}+\frac{a^4}{16\pi^4}}
\ee
and
\be\label{2}
\frac{\pi}{2}(\sigma -1) + \Arg \left(\zeta (\sigma, \tfrac12 +i\,\tfrac{a}{2\pi})\right) \notin \pi\N.
\ee
\end{theorem}
\begin{proof}
For a given $q$, we consider three cases: (I) $0 < \re q < \frac12$, (II) $\frac12 < \re q < 1$, and (III) $\re q = \frac12$.
To this end, fix $a > 0$ and choose $\sigma > \sigma_0$. Note that the above argument employed to derive $\sigma_0$ also applies to the case of $q$ being replaced by $1-q = 1-u - iv$ and yields the same value. Thus, by Lemma \ref{lem1}, $\zeta(\sigma, q) \neq 0$ and $\zeta(\sigma, 1-q) \neq 0$.
\nl
Case I: If $0 < \re q < \frac12$ then $\abs{k+q} < \abs{k+1-q}$, for all $k\in \N_0$. Therefore,
\begin{align*}
\abs{e^{-i\,\pi\sigma}\,\zeta(\sigma, 1 -q)} & < \sum_{k=0}^\infty \frac{1}{\abs{k+q}^\sigma} = \abs{\zeta(\sigma, q)}.
\end{align*}
Similarly, one obtains in Case II with $\frac12 < \re q < 1$ that
\begin{align*}
\abs{\zeta(\sigma, q)} & < \abs{e^{-i\,\pi\sigma}\,\zeta(\sigma, 1 -q)}.
\end{align*}
Hence, $\abs{Z(\sigma, q)} \geq \abs{\abs{\zeta(\sigma, q)}-\abs{\zeta(\sigma, 1-q)}} > 0$, for $\re q \neq \frac12$.
In Case III with $\re q = \frac12$ and $\sigma$ satisfying \eqref{sigma}, we set $q^* := \frac12 + i\,\frac{a}{2\pi}$ and observe that
\[
\zeta(\sigma, q^*) = \sum\limits_{k=0}^\infty \frac{1}{(k+\frac12 + i\,\frac{a}{2\pi})^\sigma}
\]
and
\[
\zeta(\sigma, 1- q^*) = \sum\limits_{k=0}^\infty \frac{1}{(k+\frac12 - i\,\frac{a}{2\pi})^\sigma}.
\]
Hence, $\zeta(\sigma, 1- q^*) = \overline{\zeta(\sigma, q^*)}$ and therefore
\[
Z(\sigma, q^*) = \zeta(\sigma, q^*) + e^{-i\,\pi\sigma} \overline{\zeta(\sigma, q^*)} =
\zeta(\sigma, q^*) \left(1 + e^{-i\,\pi\sigma}\, \frac{\overline{\zeta(\sigma, q^*)}}{ \zeta(\sigma, q^*)}\right).
\]
Setting for simplicity $\zeta^* := \zeta(\sigma, q^*)$, the expression in parentheses becomes zero if
\[
1 + e^{-i\,\pi\sigma}\,\left( \frac{\overline{\zeta^*}}{ \zeta^*} \right)= 0
\]
or, equivalently, as $\frac{\overline{\zeta^*}}{ \zeta^*} = \exp (-2 i \arg \zeta^*)$,
\[
\exp(-i\,\pi\sigma -2 i \arg \zeta^*) = \exp(i\,\arg (-1)).
\]
Using the principal values of $\arg$, $\Arg$, this latter equation can be rewritten as
\[
\frac{\pi}{2}\,\sigma + \Arg(\zeta^*) = (2m + 1)\,\frac{\pi}{2}, \quad m\in \Z.
\]
Note that $\sigma \geq 1 + \sqrt{1 + \frac{a^2}{4\pi^2}} > 2$ and that $\Arg(\zeta^*)\in (-\pi, \pi]$ (taking the negative real axis as a branch cut) and therefore, we need to impose condition \eqref{2} to ensure that $Z(\sigma, q^*) \neq 0$ as $\zeta(\sigma, q^*) \neq 0$.
\end{proof}
\begin{remark}
As $-\pi < \Arg z \leq\pi$, for fixed $a>0$ and $\sigma$ the left-hand side of \eqref{2} can equal $m\pi$ for at most one $m\in\N$.
\end{remark}
\begin{remark}
Note that if $q\in \R$, i.e., $ a = 0$, we obtain the conditions derived in \cite{FM} for polynomial B-splines of fractional order.
\end{remark}
\begin{definition}
We call real orders $\sigma$ that fulfill conditions \eqref{sigma} and \eqref{2} for a fixed $a > 0$ admissible.
\end{definition}
\begin{example}
Let $\sigma := \sqrt{6}$ and $a:=2$. Hence, $\sqrt{6} > \sigma_0 \approx 1.99103$ and condition \eqref{sigma} holds. A numerical evaluation of $\zeta( \sqrt{6}, \frac12+\frac{i}{\pi})$ using Mathematica's HurwitzZeta function produces the value $\zeta^* = 1.19269 - i\,3.76542$. The principal value of $\arg\zeta^*$ is therefore $\Arg\zeta^* = -1.26405$. As $\frac{\pi}{2}(\sigma -1) + \Arg\zeta^* = 1.01281 \notin \pi\N$, the second condition \eqref{2} is also satisfied. Thus, $\sigma := \sqrt{6}$ is an admissible real order. By the continuous dependence of conditions \eqref{sigma} and \eqref{2} on $\sigma$, there exist therefore uncountably many admissible $\sigma$.
\end{example}
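Both admissibility conditions are easily checked numerically; the following {\tt mpmath} sketch reproduces the numbers of this example (again assuming {\tt mpmath}'s Hurwitz zeta for complex second argument):
\begin{verbatim}
import mpmath as mp

def sigma0(a):
    v = a / (2 * mp.pi)
    return mp.mpf(1) / 2 + mp.sqrt(2) * mp.sqrt(1 + v**2 + v**4)

def lhs_condition2(sigma, a):
    # pi/2 (sigma - 1) + Arg zeta(sigma, 1/2 + i a/(2 pi));
    # sigma is admissible iff sigma >= sigma0(a) and this
    # value is not a positive integer multiple of pi
    zstar = mp.zeta(sigma, mp.mpc(0.5, a / (2 * mp.pi)))
    return mp.pi / 2 * (sigma - 1) + mp.arg(zstar)

print(sigma0(2))                        # ~ 1.99103
print(lhs_condition2(mp.sqrt(6), 2))    # ~ 1.01281, not in pi*N
\end{verbatim}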
\begin{example}
Selecting $a := 2$, the value $\sigma \approx 4.68126$ is not admissible as condition \eqref{sigma} is satisfied but the left-hand side of \eqref{2} yields the value $\pi$. (See Figure \ref{fig0} below.)
\begin{figure}[h!]
\begin{center}
\includegraphics[width=5cm, height= 3cm]{fig0.pdf}
\caption{Example of a non-admissible $\sigma$.}\label{fig0}
\end{center}
\end{figure}
\end{example}
We finally arrive at one of the main results.
\begin{theorem}
Suppose that ${E}_{a}^\sigma$ is an exponential B-spline of admissible real order $\sigma$.
Then
\be\label{L}
{L}_a^\sigma (x) := \frac{1}{2\pi}\,\int\limits_{\R} \frac{((\xi + i a)/2\pi)^{-\sigma}\,e^{i \xi x}\,d\xi}{\zeta (\sigma, (\xi + i\,a)/2\pi) + e^{-i \pi \sigma}\zeta (\sigma, 1 - (\xi + i\,a)/2\pi)}
\ee
is a fundamental exponential interpolating spline of real order $\sigma$ in the sense that
\[
{L}_a^\sigma ({m}) = \delta_{{m},0}, \quad \mbox{for all }{m}\in \Z.
\]
The Fourier inversion in \eqref{L} holds in both the $L^1$ and the $L^2$ sense.
\end{theorem}
Let
\[
h(\xi, a, \sigma) := \frac{((\xi + i a)/2\pi)^{-\sigma}}{\zeta (\sigma, (\xi + i\,a)/2\pi) + e^{-i \pi \sigma}\zeta (\sigma, 1 - (\xi + i\,a)/2\pi)}.
\]
\nl
The following figures show $\abs{h}$, $\Re h$, and $\Im h$ as functions of $\xi$, for fixed
$a := 2$ and varying $\sigma\in \{2.5, 2.75, 3, 3.5\}$, and for fixed $\sigma:= \sqrt{6}$ and varying $a\in \{2, 3, 4, 5\}$.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=6cm, height= 4cm]{abs_sigma.pdf}\qquad\includegraphics[width=6cm, height= 4cm]{abs_a.pdf}
\caption{$\abs{h(\xi, a, \sigma)}$ for fixed $a$ and varying $\sigma$ (left) and for fixed $\sigma$ and varying $a$ (right).}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=6cm, height= 4cm]{re_sigma.pdf}\qquad\includegraphics[width=6cm, height= 4cm]{re_a.pdf}
\caption{$\Re h(\xi, a, \sigma)$ for fixed $a$ and varying $\sigma$ (left) and for fixed $\sigma$ and varying $a$ (right).}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=6cm, height= 4cm]{im_sigma.pdf}\qquad\includegraphics[width=6cm, height= 4cm]{im_a.pdf}
\caption{$\Im h(\xi, a, \sigma)$ for fixed $a$ and varying $\sigma$ (left) and for fixed $\sigma$ and varying $a$ (right).}
\end{center}
\end{figure}
\begin{example}
We choose again $a:=2$ and $\sigma \in\{\sqrt{6}, 3.5, 4.25\}$. Figure \ref{fig4} below displays the graphs of the fundamental exponential interpolating splines $L_2^{\sigma}$.
\end{example}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=10cm, height= 5cm]{L.pdf}\
\caption{The fundamental exponential interpolating splines $L_2^{\sigma}$ with $\sigma \in\{\sqrt{6}, 3.5, 4.25\}$.}\label{fig4}
\end{center}
\end{figure}
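Graphs such as those in Figure \ref{fig4} can be produced directly from \eqref{L} by numerical quadrature. A rough Python sketch (truncating both the inverse Fourier integral and the periodized denominator, and using principal branches of the complex powers) reads:
\begin{verbatim}
import numpy as np

def L(x, a, sigma, xi_max=300.0, n_xi=6001, K=200):
    xi = np.linspace(-xi_max, xi_max, n_xi)
    q = (xi + 1j * a) / (2 * np.pi)
    h = q**(-sigma) / sum((q + k)**(-sigma) for k in range(-K, K + 1))
    x = np.atleast_1d(x)
    vals = [np.trapz(h * np.exp(1j * xi * t), xi) / (2 * np.pi) for t in x]
    return np.real(np.array(vals))

# interpolation property, up to quadrature error:
print(np.round(L([-1, 0, 1, 2], 2.0, np.sqrt(6)), 3))  # ~ [0, 1, 0, 0]
\end{verbatim}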
\begin{proposition}
The coefficients $c_k^{(\sigma)}$ in Eqn. \eqref{compint2} decay like
\[
\abs{c_k^{(\sigma)}} \leq C_\sigma\,\abs{k}^{-\lfloor\sigma\rfloor},
\]
for some positive constant $C_\sigma$. Therefore, the fundamental exponential spline $L_a^\sigma$ with admissible $\sigma$ satisfies the pointwise estimate
\[
\abs{L_a^\sigma (x)} \leq M_\sigma\,\abs{x}^{-\lfloor\sigma\rfloor}, \quad x\in\R,
\]
where $M_\sigma$ denotes a positive constant.
\end{proposition}
\begin{proof}
Eqns. \eqref{compint2} and \eqref{compfundspline} together with the Poisson summation formula yield
\[
\sum_{k\in\Z} c_k^{(\sigma)} w^k = \frac{1}{\sum\limits_{k\in\Z} E_a^\sigma (k)\, w^k} =: \vartheta_a^\sigma (w), \quad w = e^{i \xi}.
\]
The function $\vartheta_a^\sigma (w)$ is well-defined on the unit circle $\abs{w}=1$ provided $\sigma$ is admissible, since its denominator has no zeros there. Proposition \ref{prop3} implies that the Fourier coefficients of $\vartheta_a^\sigma (w)$ satisfy
\[
\abs{c_k^{(\sigma)}} \leq C_\sigma\,\abs{k}^{-\lfloor\sigma\rfloor},
\]
for some positive constant $C_\sigma$.
Noting that $\supp E_a^\sigma (\mydot - k) = [k,\infty)$, $k\in \Z$, we thus obtain using Eqn. \eqref{compint2}
\[
\abs{L_a^\sigma (x)} = \abs{\sum_{k=-\infty}^{\lfloor x\rfloor} c_k^{(\sigma)}\,E_a^\sigma (x-k)} \leq K_\sigma \sum_{k=-\infty}^{\lfloor x \rfloor} \abs{k}^{-\lfloor\sigma\rfloor} \leq M_\sigma\,\abs{x}^{-\lfloor\sigma\rfloor}.
\]
Here, we used the boundedness of $E_a^\sigma$ on $\R$. (Cf. for instance \cite[Proposition 4.5]{m14}.)
\end{proof}
\section{A Sampling Theorem}
In this section, we derive a sampling theorem for the fundamental cardinal exponential spline $L_a^\sigma$, where $\sigma$ satisfies conditions \eqref{sigma} and \eqref{2}. For this purpose, we employ the following version of Kramer's lemma \cite{kramer} which appears in \cite{garcia}. We summarize those properties that are relevant for our needs.
\begin{theorem}\label{gensamp}
Let $\emptyset\neq I\subseteq\R$, $M\subseteq\R$ and let $\{\varphi_k: k\in\Z\}$ be an orthonormal basis of $L^2(I)$. Suppose that $\{S_k: k\in \Z\}$ is a sequence of functions $S_k: M\to\C$ and $\boldsymbol{t} := \{t_k\in \R: k\in \Z\}$ a numerical sequence in $M$ satisfying the conditions
\begin{enumerate}
\item[C1.] $S_k(t_l) = a_k \delta_{kl}$, $(k,l)\in \Z\times \Z$, where $a_k\neq 0$;
\item[C2.] ${\sum\limits_{k\in \Z}}\,\vert S_k(t)\vert^2 < \infty$, for each $t\in M$.
\end{enumerate}
Define a function $K:I\times M \to \C$ by
\[
K(x,t) := \sum_{k\in \Z} S_k (t) \overline{\varphi_k} (x),
\]
and a linear integral transform $\mcK$ on $L^2 (I)$ by
\[
(\mcK f)(t) := \int_I f(x) K(x,t) \, dx.
\]
Then $\mcK$ is well-defined and injective. Furthermore, if the range of $\mcK$ is denoted by
\[
\mcH := \left\{g:\R\to\C : g = \mcK f, \,f\in L^2(I)\right\},
\]
then
\begin{enumerate}
\item[(i)] $(\mcH, \inn{\cdot}{\cdot}_\mcH)$ is a Hilbert space isometrically isomorphic to $L^2(I)$, $\mcH \cong L^2(I)$, when endowed with the inner product
\[
\inn{F}{G}_\mcH := \inn{f}{g}_{L^2(I)},
\]
where $F := \mcK f$ and $G := \mcK g$.
\item[(ii)] $\{S_k: k\in \Z\}$ is an orthonormal basis for $\mcH$.
\item[(iii)] Each function $f\in \mcH$ can be recovered from its samples on the sequence $\{t_k: k\in \Z\}$ via the formula
\[
f(t) = \sum_{k\in \Z} f(t_k)\,\frac{S_k (t)}{a_k}.
\]
The above series converges absolutely and uniformly on those subsets of $M$ on which $t \mapsto \|K(\,\cdot\,, t)\|_{L^2(I)}$ is bounded.
\end{enumerate}
\end{theorem}
\begin{proof}
For the proof and further details, we refer to \cite{garcia}.
\end{proof}
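As a sanity check (this classical illustration is ours, not from \cite{garcia}): choosing $I := [-\pi,\pi]$, $M := \R$, $\varphi_k := (2\pi)^{-1/2}\,e^{-ik(\cdot)}$, $t_k := k$, $a_k := 1$, and $S_k := \operatorname{sinc}(\cdot - k)$ with $\operatorname{sinc} x := \frac{\sin \pi x}{\pi x}$, conditions C1. and C2. hold, the kernel collapses to $K(x,t) = (2\pi)^{-1/2}\,e^{ixt}$, $\mcH$ becomes the Paley--Wiener space of $L^2$-functions bandlimited to $[-\pi,\pi]$, and item (iii) reduces to the classical Whittaker--Shannon--Kotel'nikov theorem $f(t) = \sum_{k\in\Z} f(k)\operatorname{sinc}(t-k)$.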
For our purposes, we choose $M := \R$, $\boldsymbol{t} := \Z$, $a_k := 1$ for all $k\in\Z$, and the interpolating functions $S_{k} := L_a^{\sigma}(\cdot - k)$, $k\in\Z$, with $\sigma$ satisfying conditions \eqref{sigma} and \eqref{2}. Theorem \ref{gensamp} then implies the next result.
\begin{theorem}\label{S Abtastsatz}
Let $\emptyset\neq I\subseteq\R$ and let $\{\varphi_k: k\in\Z\}$ be an orthonormal basis of $L^2(I)$. Let $L_a^\sigma$ denote the fundamental cardinal exponential spline of admissible real order $\sigma$. Then the following holds:
\begin{enumerate}
\item[(i)] The family $\{L_a^\sigma (\cdot - k): k\in \Z\}$ is an orthonormal basis of the Hilbert space $(\mcH, \inn{\cdot}{\cdot}_\mcH)$, where $\mcH = \mcK (L^2(I))$ and $\mcK$ is the injective integral operator
\[
\mcK f = \sum_{k\in\Z} \inn{f}{\varphi_k}_{L^2(I)}\, L_a^\sigma (\cdot - k), \quad f\in L^2(I).
\]
\item[(ii)] Every function $f\in \mcH \cong L^2(I)$ can be recovered from its samples on the integers via
\begin{equation}
f = \sum_{k\in \Z} f(k) L_a^\sigma (\cdot - k),
\label{eq Kramer Abtastreihe}
\end{equation}
where the above series converges absolutely and uniformly on all subsets of $\R$.
\end{enumerate}
\end{theorem}
\begin{proof}
Conditions C1. and C2. for $S_{k}= L_{a}^\sigma(\cdot -k)$, $k\in\Z$, in Theorem \ref{gensamp} are readily verified. Moreover, since $\{\varphi_k\}$ is orthonormal, $\|K(\cdot, t)\|_{L^2(I)}^2 = \sum_{k\in\Z} \abs{L_a^\sigma(t-k)}^2$, and since the unfiltered splines $\{E_a^\sigma (\cdot - k): k\in \Z\}$ already form a Riesz basis of the $L^2$-closure of their span \cite{m14}, this sum is bounded on $\R$. Hence the convergence in \eqref{eq Kramer Abtastreihe} is absolute and uniform on all of $\R$.
\end{proof}
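Before turning to the examples, here is a small numerical illustration of the sampling series \eqref{eq Kramer Abtastreihe} (again ours, not the paper's): the fundamental spline is assembled from a truncated filter, and a smooth, decaying function is recovered from its integer samples. The names \texttt{bspline3} and \texttt{L}, the test function \texttt{f}, and the truncation parameter \texttt{K} are arbitrary choices, and the cubic B-spline again stands in for $E_a^\sigma$.
\begin{verbatim}
# Sketch: the sampling series f = sum_k f(k) L(. - k), realized with a
# truncated filter; the cubic B-spline stands in for E_a^sigma.
import numpy as np

def bspline3(x):
    # centered cubic B-spline, support [-2, 2]
    x = np.abs(x)
    return np.where(x < 1, 2/3 - x**2 + x**3 / 2,
                    np.where(x < 2, (2 - x)**3 / 6, 0.0))

K = 30
k = np.arange(-K, K + 1)

# Filter: solve the (truncated) system  sum_k c_k B(j - k) = delta_{j,0}.
A = bspline3(k[:, None] - k[None, :])
c = np.linalg.solve(A, (k == 0).astype(float))

def L(x):
    # fundamental spline L = sum_k c_k B(. - k)
    return sum(c[i] * bspline3(x - k[i]) for i in range(len(k)))

f = lambda t: np.exp(-t**2 / 8) * np.cos(t)
t = np.linspace(-5, 5, 201)
recon = sum(f(m) * L(t - m) for m in range(-K, K + 1))
print(np.abs(recon - f(t)).max())   # small; limited by the unit grid spacing
\end{verbatim}
Replacing the B-spline by $E_a^\sigma$ changes only the entries of the interpolation matrix; the algebraic decay of $c_k^{(\sigma)}$ established above then governs the truncation error.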
Finally, we consider two examples illustrating the above theorem. These examples can also be found in \cite{FM} for the case of cardinal polynomial B-splines of fractional order.
\begin{example}
Consider $L^2[\,0,2\pi\,]$ with the orthogonal basis $\{\exp(ik\,(\cdot))\}_{k\in\Z}$, which becomes orthonormal after division by $\sqrt{2\pi}$; we suppress this normalization, as it only affects constants. Then
$$
K(x,t) = \sum_{k\in\Z} L_{a}^\sigma(t-k) \exp(-ikx)
$$
and
\begin{eqnarray*}
\mcK f(t) & = & \int_{0}^{2\pi} f(x) \sum_{k\in\Z} L_{a}^\sigma(t-k) \exp(-ikx)\, dx
\\
& = & \sum_{k\in\Z} \int_{0}^{2 \pi} f(x) \exp(-ikx)\, dx \, L_{a}^\sigma(t-k)
\\
& =& 2 \pi \sum_{k\in\Z} \widehat{f}(k) L_{a}^\sigma(t-k).
\end{eqnarray*}
The interchange of summation and integration is justified by the Lebesgue dominated convergence theorem, and the resulting identity holds in the $L^2$-sense. Thus, $\mcK f$ interpolates the sequence of Fourier coefficients $\{ \widehat{f}(k)\}_{k\in\Z}$ (up to the factor $2\pi$) on $\R$ with shifts of the fundamental cardinal exponential spline $L_{a}^\sigma$ of real order $\sigma$.
Moreover, if $f\in \mcH = \mcK(L^2[\,0,2\pi\,])\cong L^2[\,0,2\pi\,]$, then, by Theorem \ref{S Abtastsatz}, $f$ can be reconstructed from its samples via the sampling series
$$
f = \sum_{k\in\Z} f(k) L_{a}^\sigma(\cdot -k),
$$
which converges absolutely and uniformly on all subsets of $\R$.
\end{example}
\begin{example}
Consider $L^2(\R)$ endowed with the (orthonormal) Hermite basis defined by
$$
\varphi_{k}(x) = \frac{(-1)^k}{\sqrt{2^k\, k!\, \sqrt{\pi}}}\, \exp\left(\frac{x^2}{2}\right) \left(\frac{d}{dx}\right)^k \exp(-x^2), \quad k\in\N_{0}.
$$
Then
$$
K(x,t) = \sum_{k\in\Z} L_{a}^\sigma(t-k) \varphi_{p(k)}(x),
$$
where $p: \N_{0}\to \Z$ maps the natural numbers bijectively to the integers.
An application of the Lebesgue dominated convergence theorem yields
\begin{eqnarray*}
\mcK f (t) & = & \int_{\R} f(x) \sum_{k\in\Z} L_{a}^\sigma(t-k) \varphi_{p(k)}(x) \, dx
\\
& = & \sum_{k\in\Z} \int_{\R} f(x) \varphi_{p(k)}(x)\, dx\, L_{a}^\sigma(t-k).
\end{eqnarray*}
The integral is the coefficient of $f$ with respect to $\varphi_{p(k)}$ in the orthonormal basis $\{\varphi_{k}\}_{k\in\N_{0}}$.
Again by Theorem \ref{S Abtastsatz}, every function $f \in \mcH = \mcK(L^2(\R))\cong L^2(\R)$ can be reconstructed from its samples at the integers via the series (\ref{eq Kramer Abtastreihe}).
\end{example}
\bibliographystyle{plain}
\bibliography{Interpolation_and_sampling}
\end{document} | {"config": "arxiv", "file": "2009.10384/Interpolation_and_sampling.tex"} |
TITLE: Intuition about how Voronoi formulas change lengths of sums
QUESTION [6 upvotes]: In reading the literature one encounters countless examples of Voronoi formulas, i.e., formulas that take a sum over Fourier coefficients, twisted by some character and weighted by a suitable test function, and return a different sum over the same Fourier coefficients, twisted by different characters and this time weighted by an integral transform of the original test function.
The reason one wants to do this in practice is of course that the second sum is somehow better, which in my (admittedly limited) experience tends to boil down to the length of the second sum having changed significantly for the better.
I'll give an example (from Xiaoqing Li's Bounds for GL(3)×GL(2) L-functions and GL(3) L-functions, because it is what I happen to have in front of me).
In this case we have the GL(3) Voronoi formula
$$
\sum_{n > 0} A(m, n) e\Bigl( \frac{n \bar d}{c} \Bigr) \psi(n) \sim \sum_{n_1 \mid c m} \sum_{n_2 > 0} \frac{A(n_2, n_1)}{n_1 n_2} S(m d, n_2; m c n_1^{-1}) \Psi \Bigl( \frac{n_2 n_1^2}{c^3 m} \Bigr),
$$
where $\psi$ is some smooth, compactly supported test function, $\Psi$ as suggested above is some integral transform of it, $A(m, n)$ are Fourier coefficients of (in this case) an SL(3) Maass form, $(d, c) = 1$, and $d \bar d \equiv 1 \pmod{c}$.
(I've omitted lots of details here, but the details, I think, aren't relevant to my question.)
Doing so essentially transforms the $n$-sum into the $n_2$-sum, where, as is evident in the formula, the $n_2$-sum has a very different argument in its test function.
What happens in practice is that, once we reach a point where applying the Voronoi formula is appropriate, we transform the sum and study the integral transform, chiefly by means of stationary phase analysis, in order to find the length of the new $n_2$-sum.
In the particular example at hand, after identifying the stationary phase and playing along, this takes us from an $n$-sum over $N \leq m^2 n \leq 2 N$, i.e., $n \sim \frac{N}{m^2}$, to an $n_2$-sum over
$$
\frac{2}{3} \frac{N^{1/2}}{n_1^2} \leq n_2 \leq 2 \frac{N^{1/2}}{n_1^2},
$$
i.e., $n_2 \sim \frac{N^{1/2}}{n_1^2}$, which then means that the arguments in the test function are now of size $\frac{N^{1/2}}{c^3 m}$.
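(For reference, the mechanism here is the standard first-term stationary phase approximation: for $\int g(y)\, e(f(y))\, dy$ with a single nondegenerate stationary point $y_0$ in the support of $g$, the integral is roughly
$$
|f''(y_0)|^{-1/2}\, g(y_0)\, e\left(f(y_0) + \tfrac{\operatorname{sgn} f''(y_0)}{8}\right),
$$
so the dual sum is effectively restricted to those $n_2$ for which the stationary point of the combined phase lands inside the support of the test function — which is what produces the range above.)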
I can go through the motions of performing this stationary phase analysis and so on, but my question is this: is there any intuition to be had about how and in what way these Voronoi formulas alter the lengths of sums?
REPLY [4 votes]: First of all, the description of $\psi$ after the first display is confusing (presumably the OP means that $\psi$ is supported around $N$; otherwise the conclusion drawn from the first display does not make sense). The relevant part of Li's paper (end of p. 318) shows that $\psi$ is not just any test function: it carries a weight of $n^{-3/4}$ and, more importantly, it oscillates. A more faithful rendering of the LHS would be (taking $m=d=1$)
$$\sum_{n}A(1,n)e\left(\frac{n}{c}+2\sqrt{n}-\frac{1}{\sqrt{n}}\right)\psi(n/N),$$
where $\psi$ is a test function supported on $[1,2]$.
To get an intuition for the length of the dual sum, one can follow the general heuristic formula (HF) below.
$$\text{length of the dual sum} = \frac{\text{total conductor}}{\text{length of the original sum}}.$$
Here the total conductor is the conductor of the oscillating object, taking all the twists into account. For example, the total conductor of $L(1/2+it,\pi'\otimes\pi)$, where $\pi$ and $\pi'$ are fixed $\mathrm{GL}(n)$ and $\mathrm{GL}(m)$ automorphic representations, is $t^{nm}$.
The conductor of $e_q(x):=e(x/q)$ is $q$. In this case the denominator in the argument of $e(\cdot)$ is of size $c\sqrt{N}$. But there is also a twist by a $\mathrm{GL}(3)$ Hecke eigenvalue, so the total conductor is $c^3N^{3/2}$. Applying the above formula, one obtains the length of the dual sum.
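To make the division concrete (my bookkeeping, in the simplest setting): without the extra oscillation $e(2\sqrt{n})$, the additive twist $e(n\bar{d}/c)$ against $\mathrm{GL}(3)$ coefficients has total conductor $c^3$, and the HF predicts a dual sum of length
$$
\frac{c^3}{N}
$$
in the combined variable $n_1^2 n_2$ — the familiar statement that $\mathrm{GL}(3)$ Voronoi turns a sum of length $N$ modulo $c$ into one of length about $c^3/N$. With the oscillation included, the total conductor inflates to $(c\sqrt{N})^3 = c^3 N^{3/2}$, and the same division gives the dual length.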
One can check that the above HF is also visible in the formula of the approximate functional equation for a central $L$-value. There are two sums in the formula, corresponding to the representation and its contragredient, and one can check that the product of the lengths of the two sums equals the conductor of the $L$-function.
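A concrete instance: for $\zeta(1/2+it)$ the analytic conductor is of size $t$, and the approximate functional equation expresses the central value through two sums, each of length roughly $t^{1/2}$; their product $t^{1/2} \cdot t^{1/2} = t$ recovers the conductor, exactly as the HF predicts.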
(The main reason this HF works is the automorphy of the underlying automorphic form under a suitable Weyl element, which is indeed the key ingredient in the proofs of both the approximate functional equation and the Voronoi formula.)